Server for medical data and libraries in the Maghreb (final version)


Internal identifier: 0002699 (Pmc/Corpus); previous: 0002698; next: 0002700



The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Convolutional neural networks approach for multimodal biometric identification system using the fusion of fingerprint, finger-vein and face images</title>
<author>
<name sortKey="Cherrat, El Mehdi" sort="Cherrat, El Mehdi" uniqKey="Cherrat E" first="El Mehdi" last="Cherrat">El Mehdi Cherrat</name>
<affiliation>
<nlm:aff id="aff-1">
<institution>Laboratory of Systems Engineering and Information Technology, National School of Applied Sciences, Ibn Zohr University</institution>
,
<addr-line>Agadir</addr-line>
,
<country>Morocco</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Alaoui, Rachid" sort="Alaoui, Rachid" uniqKey="Alaoui R" first="Rachid" last="Alaoui">Rachid Alaoui</name>
<affiliation>
<nlm:aff id="aff-2">
<institution>Laboratory of Computer Science and Telecommunications Research, Faculty of Sciences, Mohammed V University</institution>
,
<addr-line>Rabat</addr-line>
,
<country>Morocco</country>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff-3">
<institution>Multimedia, Signal and Communications Systems Team, National Institute of Posts and Telecommunication</institution>
,
<addr-line>Rabat</addr-line>
,
<country>Morocco</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Bouzahir, Hassane" sort="Bouzahir, Hassane" uniqKey="Bouzahir H" first="Hassane" last="Bouzahir">Hassane Bouzahir</name>
<affiliation>
<nlm:aff id="aff-1">
<institution>Laboratory of Systems Engineering and Information Technology, National School of Applied Sciences, Ibn Zohr University</institution>
,
<addr-line>Agadir</addr-line>
,
<country>Morocco</country>
</nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">33816900</idno>
<idno type="pmc">7924518</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7924518</idno>
<idno type="RBID">PMC:7924518</idno>
<idno type="doi">10.7717/peerj-cs.248</idno>
<date when="2020">2020</date>
<idno type="wicri:Area/Pmc/Corpus">000269</idno>
<idno type="wicri:explorRef" wicri:stream="Pmc" wicri:step="Corpus" wicri:corpus="PMC">000269</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Convolutional neural networks approach for multimodal biometric identification system using the fusion of fingerprint, finger-vein and face images</title>
<author>
<name sortKey="Cherrat, El Mehdi" sort="Cherrat, El Mehdi" uniqKey="Cherrat E" first="El Mehdi" last="Cherrat">El Mehdi Cherrat</name>
<affiliation>
<nlm:aff id="aff-1">
<institution>Laboratory of Systems Engineering and Information Technology, National School of Applied Sciences, Ibn Zohr University</institution>
,
<addr-line>Agadir</addr-line>
,
<country>Morocco</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Alaoui, Rachid" sort="Alaoui, Rachid" uniqKey="Alaoui R" first="Rachid" last="Alaoui">Rachid Alaoui</name>
<affiliation>
<nlm:aff id="aff-2">
<institution>Laboratory of Computer Science and Telecommunications Research, Faculty of Sciences, Mohammed V University</institution>
,
<addr-line>Rabat</addr-line>
,
<country>Morocco</country>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff-3">
<institution>Multimedia, Signal and Communications Systems Team, National Institute of Posts and Telecommunication</institution>
,
<addr-line>Rabat</addr-line>
,
<country>Morocco</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Bouzahir, Hassane" sort="Bouzahir, Hassane" uniqKey="Bouzahir H" first="Hassane" last="Bouzahir">Hassane Bouzahir</name>
<affiliation>
<nlm:aff id="aff-1">
<institution>Laboratory of Systems Engineering and Information Technology, National School of Applied Sciences, Ibn Zohr University</institution>
,
<addr-line>Agadir</addr-line>
,
<country>Morocco</country>
</nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PeerJ Computer Science</title>
<idno type="eISSN">2376-5992</idno>
<imprint>
<date when="2020">2020</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>In recent years, the need for security of personal data has become progressively more important. In this regard, identification systems based on multibiometric fusion are most recommended for significantly improving performance and achieving high accuracy. The main purpose of this paper is to propose a hybrid system combining three efficient models: a convolutional neural network (CNN), Softmax and a random forest (RF) classifier, applied to a multi-biometric fingerprint, finger-vein and face identification system. In the conventional fingerprint system, image pre-processing is applied to separate the foreground and background regions based on the
<italic>K</italic>
-means and DBSCAN algorithms. Furthermore, the features are extracted using CNNs with a dropout approach; after that, Softmax performs as a recognizer. In the conventional finger-vein system, the region-of-interest image, contrast-enhanced using an exposure fusion framework, is input into the CNN model. Moreover, the RF classifier is proposed for classification. In the conventional face system, the CNN architecture and Softmax are required to generate face feature vectors and classify personal identity. The scores provided by these systems are combined to improve human identification. The proposed algorithm is evaluated on the publicly available SDUMLA-HMT real multimodal biometric database using a GPU-based implementation. Experimental results on this dataset show significant capability for the biometric identification system. The proposed work can offer accurate and efficient matching compared with other systems based on unimodal, bimodal or multimodal characteristics.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Abdullah Al Wadud, M" uniqKey="Abdullah Al Wadud M">M Abdullah-Al-Wadud</name>
</author>
<author>
<name sortKey="Kabir, Mh" uniqKey="Kabir M">MH Kabir</name>
</author>
<author>
<name sortKey="Dewan, Maa" uniqKey="Dewan M">MAA Dewan</name>
</author>
<author>
<name sortKey="Chae, O" uniqKey="Chae O">O Chae</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bhanu, B" uniqKey="Bhanu B">B Bhanu</name>
</author>
<author>
<name sortKey="Kumar, A" uniqKey="Kumar A">A Kumar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Borra, Sr" uniqKey="Borra S">SR Borra</name>
</author>
<author>
<name sortKey="Reddy, Gj" uniqKey="Reddy G">GJ Reddy</name>
</author>
<author>
<name sortKey="Reddy, Es" uniqKey="Reddy E">ES Reddy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Breiman, L" uniqKey="Breiman L">L Breiman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Canny, J" uniqKey="Canny J">J Canny</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cherrat, Em" uniqKey="Cherrat E">EM Cherrat</name>
</author>
<author>
<name sortKey="Alaoui, R" uniqKey="Alaoui R">R Alaoui</name>
</author>
<author>
<name sortKey="Bouzahir, H" uniqKey="Bouzahir H">H Bouzahir</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cherrat, Em" uniqKey="Cherrat E">EM Cherrat</name>
</author>
<author>
<name sortKey="Alaoui, R" uniqKey="Alaoui R">R Alaoui</name>
</author>
<author>
<name sortKey="Bouzahir, H" uniqKey="Bouzahir H">H Bouzahir</name>
</author>
<author>
<name sortKey="Jenkal, W" uniqKey="Jenkal W">W Jenkal</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cortes, C" uniqKey="Cortes C">C Cortes</name>
</author>
<author>
<name sortKey="Vapnik, V" uniqKey="Vapnik V">V Vapnik</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hosmer, Dw" uniqKey="Hosmer D">DW Hosmer</name>
</author>
<author>
<name sortKey="Lemeshow, S" uniqKey="Lemeshow S">S Lemeshow</name>
</author>
<author>
<name sortKey="Sturdivant, Rx" uniqKey="Sturdivant R">RX Sturdivant</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Huang, Z" uniqKey="Huang Z">Z Huang</name>
</author>
<author>
<name sortKey="Liu, Y" uniqKey="Liu Y">Y Liu</name>
</author>
<author>
<name sortKey="Li, X" uniqKey="Li X">X Li</name>
</author>
<author>
<name sortKey="Li, J" uniqKey="Li J">J Li</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Itqan, Ks" uniqKey="Itqan K">KS Itqan</name>
</author>
<author>
<name sortKey="Syafeeza, Ar" uniqKey="Syafeeza A">AR Syafeeza</name>
</author>
<author>
<name sortKey="Gong, Fg" uniqKey="Gong F">FG Gong</name>
</author>
<author>
<name sortKey="Mustafa, N" uniqKey="Mustafa N">N Mustafa</name>
</author>
<author>
<name sortKey="Wong, Yc" uniqKey="Wong Y">YC Wong</name>
</author>
<author>
<name sortKey="Ibrahim, Mm" uniqKey="Ibrahim M">MM Ibrahim</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jain, Ak" uniqKey="Jain A">AK Jain</name>
</author>
<author>
<name sortKey="Hong, L" uniqKey="Hong L">L Hong</name>
</author>
<author>
<name sortKey="Kulkarni, Y" uniqKey="Kulkarni Y">Y Kulkarni</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jain, A" uniqKey="Jain A">A Jain</name>
</author>
<author>
<name sortKey="Nandakumar, K" uniqKey="Nandakumar K">K Nandakumar</name>
</author>
<author>
<name sortKey="Ross, A" uniqKey="Ross A">A Ross</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kang, W" uniqKey="Kang W">W Kang</name>
</author>
<author>
<name sortKey="Lu, Y" uniqKey="Lu Y">Y Lu</name>
</author>
<author>
<name sortKey="Li, D" uniqKey="Li D">D Li</name>
</author>
<author>
<name sortKey="Jia, W" uniqKey="Jia W">W Jia</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Krizhevsky, A" uniqKey="Krizhevsky A">A Krizhevsky</name>
</author>
<author>
<name sortKey="Sutskever, I" uniqKey="Sutskever I">I Sutskever</name>
</author>
<author>
<name sortKey="Hinton, Ge" uniqKey="Hinton G">GE Hinton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ma, H" uniqKey="Ma H">H Ma</name>
</author>
<author>
<name sortKey="Popoola, Op" uniqKey="Popoola O">OP Popoola</name>
</author>
<author>
<name sortKey="Sun, S" uniqKey="Sun S">S Sun</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mane, S" uniqKey="Mane S">S Mane</name>
</author>
<author>
<name sortKey="Shah, G" uniqKey="Shah G">G Shah</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Park, E" uniqKey="Park E">E Park</name>
</author>
<author>
<name sortKey="Kim, W" uniqKey="Kim W">W Kim</name>
</author>
<author>
<name sortKey="Li, Q" uniqKey="Li Q">Q Li</name>
</author>
<author>
<name sortKey="Kim, J" uniqKey="Kim J">J Kim</name>
</author>
<author>
<name sortKey="Kim, H" uniqKey="Kim H">H Kim</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rajesh, S" uniqKey="Rajesh S">S Rajesh</name>
</author>
<author>
<name sortKey="Selvarajan, S" uniqKey="Selvarajan S">S Selvarajan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Reza, Am" uniqKey="Reza A">AM Reza</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ross, Aa" uniqKey="Ross A">AA Ross</name>
</author>
<author>
<name sortKey="Govindarajan, R" uniqKey="Govindarajan R">R Govindarajan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ross, A" uniqKey="Ross A">A Ross</name>
</author>
<author>
<name sortKey="Jain, A" uniqKey="Jain A">A Jain</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Singh, M" uniqKey="Singh M">M Singh</name>
</author>
<author>
<name sortKey="Singh, R" uniqKey="Singh R">R Singh</name>
</author>
<author>
<name sortKey="Ross, A" uniqKey="Ross A">A Ross</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Soleymani, S" uniqKey="Soleymani S">S Soleymani</name>
</author>
<author>
<name sortKey="Dabouei, A" uniqKey="Dabouei A">A Dabouei</name>
</author>
<author>
<name sortKey="Kazemi, H" uniqKey="Kazemi H">H Kazemi</name>
</author>
<author>
<name sortKey="Dawson, J" uniqKey="Dawson J">J Dawson</name>
</author>
<author>
<name sortKey="Nasrabadi, Nm" uniqKey="Nasrabadi N">NM Nasrabadi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Son, B" uniqKey="Son B">B Son</name>
</author>
<author>
<name sortKey="Lee, Y" uniqKey="Lee Y">Y Lee</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Srivastava, N" uniqKey="Srivastava N">N Srivastava</name>
</author>
<author>
<name sortKey="Hinton, G" uniqKey="Hinton G">G Hinton</name>
</author>
<author>
<name sortKey="Krizhevsky, A" uniqKey="Krizhevsky A">A Krizhevsky</name>
</author>
<author>
<name sortKey="Sutskever, I" uniqKey="Sutskever I">I Sutskever</name>
</author>
<author>
<name sortKey="Salakhutdinov, R" uniqKey="Salakhutdinov R">R Salakhutdinov</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tome, P" uniqKey="Tome P">P Tome</name>
</author>
<author>
<name sortKey="Vanoni, M" uniqKey="Vanoni M">M Vanoni</name>
</author>
<author>
<name sortKey="Marcel, S" uniqKey="Marcel S">S Marcel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Unar, Ja" uniqKey="Unar J">JA Unar</name>
</author>
<author>
<name sortKey="Seng, Wc" uniqKey="Seng W">WC Seng</name>
</author>
<author>
<name sortKey="Abbasi, A" uniqKey="Abbasi A">A Abbasi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vishi, K" uniqKey="Vishi K">K Vishi</name>
</author>
<author>
<name sortKey="Mavroeidis, V" uniqKey="Mavroeidis V">V Mavroeidis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Walia, Gs" uniqKey="Walia G">GS Walia</name>
</author>
<author>
<name sortKey="Rishi, S" uniqKey="Rishi S">S Rishi</name>
</author>
<author>
<name sortKey="Asthana, R" uniqKey="Asthana R">R Asthana</name>
</author>
<author>
<name sortKey="Kumar, A" uniqKey="Kumar A">A Kumar</name>
</author>
<author>
<name sortKey="Gupta, A" uniqKey="Gupta A">A Gupta</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yang, W" uniqKey="Yang W">W Yang</name>
</author>
<author>
<name sortKey="Wang, S" uniqKey="Wang S">S Wang</name>
</author>
<author>
<name sortKey="Hu, J" uniqKey="Hu J">J Hu</name>
</author>
<author>
<name sortKey="Zheng, G" uniqKey="Zheng G">G Zheng</name>
</author>
<author>
<name sortKey="Valli, C" uniqKey="Valli C">C Valli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yang, J" uniqKey="Yang J">J Yang</name>
</author>
<author>
<name sortKey="Zhang, X" uniqKey="Zhang X">X Zhang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yin, Y" uniqKey="Yin Y">Y Yin</name>
</author>
<author>
<name sortKey="Liu, L" uniqKey="Liu L">L Liu</name>
</author>
<author>
<name sortKey="Sun, X" uniqKey="Sun X">X Sun</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ying, Z" uniqKey="Ying Z">Z Ying</name>
</author>
<author>
<name sortKey="Li, G" uniqKey="Li G">G Li</name>
</author>
<author>
<name sortKey="Ren, Y" uniqKey="Ren Y">Y Ren</name>
</author>
<author>
<name sortKey="Wang, R" uniqKey="Wang R">R Wang</name>
</author>
<author>
<name sortKey="Wang, W" uniqKey="Wang W">W Wang</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PeerJ Comput Sci</journal-id>
<journal-id journal-id-type="iso-abbrev">PeerJ Comput Sci</journal-id>
<journal-id journal-id-type="pmc">peerj-cs</journal-id>
<journal-id journal-id-type="publisher-id">peerj-cs</journal-id>
<journal-title-group>
<journal-title>PeerJ Computer Science</journal-title>
</journal-title-group>
<issn pub-type="epub">2376-5992</issn>
<publisher>
<publisher-name>PeerJ Inc.</publisher-name>
<publisher-loc>San Diego, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">33816900</article-id>
<article-id pub-id-type="pmc">7924518</article-id>
<article-id pub-id-type="publisher-id">cs-248</article-id>
<article-id pub-id-type="doi">10.7717/peerj-cs.248</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Artificial Intelligence</subject>
</subj-group>
<subj-group subj-group-type="heading">
<subject>Computer Vision</subject>
</subj-group>
<subj-group subj-group-type="heading">
<subject>Security and Privacy</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Convolutional neural networks approach for multimodal biometric identification system using the fusion of fingerprint, finger-vein and face images</article-title>
</title-group>
<contrib-group>
<contrib id="author-1" contrib-type="author" corresp="yes">
<contrib-id contrib-id-type="orcid" authenticated="false">http://orcid.org/0000-0002-8866-0542</contrib-id>
<name>
<surname>Cherrat</surname>
<given-names>El mehdi</given-names>
</name>
<xref ref-type="aff" rid="aff-1">1</xref>
<email>elmehdi.cherrat@edu.uiz.ac.ma</email>
</contrib>
<contrib id="author-2" contrib-type="author">
<name>
<surname>Alaoui</surname>
<given-names>Rachid</given-names>
</name>
<xref ref-type="aff" rid="aff-2">2</xref>
<xref ref-type="aff" rid="aff-3">3</xref>
</contrib>
<contrib id="author-3" contrib-type="author">
<name>
<surname>Bouzahir</surname>
<given-names>Hassane</given-names>
</name>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<aff id="aff-1">
<label>1</label>
<institution>Laboratory of Systems Engineering and Information Technology, National School of Applied Sciences, Ibn Zohr University</institution>
,
<addr-line>Agadir</addr-line>
,
<country>Morocco</country>
</aff>
<aff id="aff-2">
<label>2</label>
<institution>Laboratory of Computer Science and Telecommunications Research, Faculty of Sciences, Mohammed V University</institution>
,
<addr-line>Rabat</addr-line>
,
<country>Morocco</country>
</aff>
<aff id="aff-3">
<label>3</label>
<institution>Multimedia, Signal and Communications Systems Team, National Institute of Posts and Telecommunication</institution>
,
<addr-line>Rabat</addr-line>
,
<country>Morocco</country>
</aff>
</contrib-group>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Laskey</surname>
<given-names>Kathryn</given-names>
</name>
</contrib>
</contrib-group>
<pub-date pub-type="epub" date-type="pub" iso-8601-date="2020-01-06">
<day>6</day>
<month>1</month>
<year iso-8601-date="2020">2020</year>
</pub-date>
<pub-date pub-type="collection">
<year>2020</year>
</pub-date>
<volume>6</volume>
<elocation-id>e248</elocation-id>
<history>
<date date-type="received" iso-8601-date="2019-05-21">
<day>21</day>
<month>5</month>
<year iso-8601-date="2019">2019</year>
</date>
<date date-type="accepted" iso-8601-date="2019-12-02">
<day>2</day>
<month>12</month>
<year iso-8601-date="2019">2019</year>
</date>
</history>
<permissions>
<copyright-statement>© 2020 Cherrat et al.</copyright-statement>
<copyright-year>2020</copyright-year>
<copyright-holder>Cherrat et al.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open access article distributed under the terms of the
<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License</ext-link>
, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited.</license-p>
</license>
</permissions>
<self-uri xlink:href="https://peerj.com/articles/cs-248"></self-uri>
<abstract>
<p>In recent years, the need for security of personal data has become progressively more important. In this regard, identification systems based on multibiometric fusion are most recommended for significantly improving performance and achieving high accuracy. The main purpose of this paper is to propose a hybrid system combining three efficient models: a convolutional neural network (CNN), Softmax and a random forest (RF) classifier, applied to a multi-biometric fingerprint, finger-vein and face identification system. In the conventional fingerprint system, image pre-processing is applied to separate the foreground and background regions based on the
<italic>K</italic>
-means and DBSCAN algorithms. Furthermore, the features are extracted using CNNs with a dropout approach; after that, Softmax performs as a recognizer. In the conventional finger-vein system, the region-of-interest image, contrast-enhanced using an exposure fusion framework, is input into the CNN model. Moreover, the RF classifier is proposed for classification. In the conventional face system, the CNN architecture and Softmax are required to generate face feature vectors and classify personal identity. The scores provided by these systems are combined to improve human identification. The proposed algorithm is evaluated on the publicly available SDUMLA-HMT real multimodal biometric database using a GPU-based implementation. Experimental results on this dataset show significant capability for the biometric identification system. The proposed work can offer accurate and efficient matching compared with other systems based on unimodal, bimodal or multimodal characteristics.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>CNN</kwd>
<kwd>Multimodal biometrics</kwd>
<kwd>Fingerprint recognition</kwd>
<kwd>Finger-vein recognition</kwd>
<kwd>Face recognition</kwd>
<kwd>Fusion</kwd>
<kwd>Random forest</kwd>
</kwd-group>
<funding-group>
<funding-statement>The authors received no funding for this work.</funding-statement>
</funding-group>
</article-meta>
</front>
<body>
<sec sec-type="intro">
<title>Introduction</title>
<p>A biometric authentication system is basically a pattern-recognition system that identifies a person using a feature vector derived from a particular measurable morphological or behavioral characteristic that the individual possesses. Biometric modalities are often unique, measurable, automatically verifiable and permanent (
<xref rid="ref-7" ref-type="bibr">Cherrat et al., 2017</xref>
).</p>
<p>Fingerprints have become an essential biometric trait due to their uniqueness and invariance for every individual. This biometric modality is widely used and accepted by users because the acquisition device is comparatively small. Moreover, its recognition accuracy is relatively high compared to other biometric recognition systems based on the retina, ear shape, iris, etc. (
<xref rid="ref-3" ref-type="bibr">Borra, Reddy & Reddy, 2018</xref>
).</p>
<p>The finger-vein biometric modality is often used in biometric recognition because of several advantages compared to other modalities: (1) it is simple and easy to use: images are easily acquired using a sensor with an NIR (near-infrared) light source; (2) it offers high security: the vein structure is hidden inside the skin, so spoofing the recognition system is very difficult; (3) the veins of each individual are unique and distinct (
<xref rid="ref-32" ref-type="bibr">Yang & Zhang, 2012</xref>
). Finger-vein recognition is based on human vein characteristics for identification or verification of the individual (
<xref rid="ref-27" ref-type="bibr">Tome, Vanoni & Marcel, 2014</xref>
). As a result, human and computer performance on finger-vein recognition is a research topic with both scientific value and wide application prospects (
<xref rid="ref-14" ref-type="bibr">Kang et al., 2019</xref>
).</p>
<p>Face recognition is a biometric recognition technology based on human facial feature information for identification or verification. Facial recognition algorithms are sensitive to variations in facial expressions and accessories, uncontrolled illumination and poses. In this regard, human and computer performance on facial identification is a research topic with both scientific value and wide application prospects (
<xref rid="ref-17" ref-type="bibr">Mane & Shah, 2019</xref>
).</p>
<p>In order to overcome the limitations of systems based on a single biometric modality, multimodal biometric systems increase robustness and performance against impostor attacks and environmental variations. Such systems are classified as multi-instance, multi-sensor, multi-algorithm, multi-modal and hybrid systems (
<xref rid="ref-30" ref-type="bibr">Walia et al., 2019</xref>
).</p>
<p>The general structure of a biometric recognition system consists of four main stages. First, acquisition of the biometric trait is the process of obtaining a digitized image of a person using a specific capture device. Second, pre-processing improves the overall quality of the captured image. Third, feature data are extracted using different algorithms. Finally, matching of the extracted characteristics is applied in order to recognize the individual.</p>
<p>The multi-biometric recognition system combines a variety of biometric sources. The main advantage of a multimodal system over a traditional single-biometric system is that it makes the recognition process more secure and accurate (
<xref rid="ref-28" ref-type="bibr">Unar, Seng & Abbasi, 2014</xref>
). In this regard, research on multimodal biometrics using finger-vein and face images has recently become prevalent and essential.</p>
<p>The advantage of combining fingerprint, finger-vein and face is the ability to build an image acquisition system that can capture fingerprint and finger-vein images simultaneously (they are found at almost the same place), and whose devices are less expensive and easier to deploy. Moreover, the face is one of the most natural ways to identify an individual; it does not restrict the movement of the person, and its deployment cost is relatively low.</p>
<p>The proposed method deploys a multimodal biometric recognition system that combines fingerprint, finger-vein and face images using convolutional neural network (CNN) architectures and classifiers based on Softmax and random forest (RF). Our scheme is robust to various environmental changes and database types.
<xref ref-type="fig" rid="fig-1">Figure 1</xref>
describes the general block diagram of the proposed recognition system.</p>
<fig id="fig-1" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/fig-1</object-id>
<label>Figure 1</label>
<caption>
<title>General block diagram of the proposed recognition system.</title>
</caption>
<graphic xlink:href="peerj-cs-06-248-g001"></graphic>
</fig>
<p>The rest of the paper is organized as follows: Literature Review gives a concise description of related research. Proposed System presents the processes that are part of our proposed algorithm. Experimental Results and Discussion elucidates the experimental results, and Conclusion concludes the proposed work.</p>
</sec>
<sec sec-type="literature|review">
<title>Literature Review</title>
<p>Many studies have been conducted to investigate multimodal biometric systems and their effects on human recognition.
<xref rid="ref-22" ref-type="bibr">Ross & Jain (2003)</xref>
presented different levels of fusion and score-level fusion for a multimodal biometric system (fingerprint, face, voice and hand geometry) using the sum rule. However, this method needs experiments on a larger database of users for the recognition system.
<xref rid="ref-32" ref-type="bibr">Yang & Zhang (2012)</xref>
presented a fusion of fingerprint and finger vein. These biometric characteristics are extracted using a unified Gabor filter method. The feature-level fusion is generated based on a supervised locality-preserving canonical correlation analysis framework. For individual identification, the nearest-neighbor classifier is applied. However, the performance is evaluated using a collected database that contains just 640 fingerprint images and 640 finger-vein images.
<xref rid="ref-25" ref-type="bibr">Son & Lee (2005)</xref>
presented a fusion of face and iris using DWT and DLDA methods. Each experiment is repeated at least 20 times to reduce the variation. However, this algorithm was not compared with other state-of-the-art methods, nor verified on a large amount of data.
<xref rid="ref-21" ref-type="bibr">Ross & Govindarajan (2005)</xref>
presented a multimodal biometric system that uses hand and face at the feature level for biometric recognition purposes. Moreover, the experiments tested intra-modal and inter-modal fusion with R, G, B channels. The drawbacks of this system are that it does not allow incompatible feature sets (e.g., eigen-coefficients of face and minutiae points of fingerprints) to be combined, and that it is difficult to predict the best fusion strategy for a given scenario. A novel fingerprint and finger-vein identification system based on concatenating the feature vectors was achieved by
<xref rid="ref-16" ref-type="bibr">Ma, Popoola & Sun (2015)</xref>
. In this study, the extracted feature vectors of both fingerprint and finger-vein images are concatenated in order to combine the classifiers' recognition results at the decision level. However, the accuracy of this technique does not satisfy the requirements of many real-world applications, since it lacks translation and rotation invariance.
<xref rid="ref-10" ref-type="bibr">Huang et al. (2015)</xref>
introduced an adaptive bimodal sparse-representation-based classification, that is, an adaptive face and ear bimodal recognition system based on sparse coding, where quality-weighted features are selected. This system requires pre-processing each biometric trait before extracting the features. Furthermore, the recognition accuracy needs to be increased.</p>
<p>
<xref rid="ref-31" ref-type="bibr">Yang et al. (2018)</xref>
presented a cancelable multi-biometric system using fingerprint and finger-vein, which combines fingerprint minutiae points and finger-vein image features based on three feature-level fusion techniques. However, the effect of noisy data on the performance of the system is not included.
<xref rid="ref-29" ref-type="bibr">Vishi & Mavroeidis (2018)</xref>
reported a fusion of fingerprint and finger-vein for identification using combinations of score normalization (min-max,
<italic>z</italic>
-score, hyperbolic tangent) and fusion methods (minimum score, maximum score, simple sum, user weighting). No pre-processing stage is used in this algorithm; thus, the recognition accuracy can decrease.
<xref rid="ref-12" ref-type="bibr">Jain, Hong & Kulkarni (1999)</xref>
introduced a multimodal biometric system using face, fingerprint and voice. Moreover, different fusion techniques and normalization methods for fingerprint, hand geometry and face biometric sources were achieved by
<xref rid="ref-13" ref-type="bibr">Jain, Nandakumar & Ross (2005)</xref>
. The drawback of these methods is that they need to be tested on a large dataset in a real operating environment.
<xref rid="ref-24" ref-type="bibr">Soleymani et al. (2018)</xref>
suggested a multimodal biometric system with face, iris and fingerprint using multiple streams of modality-specific CNNs. In this algorithm, some complexity also exists in the multimodal recognition system, which reduces its acceptability in many areas. Further, a multimodal biometric system based on iris, finger vein and fingerprint was investigated (
<xref rid="ref-30" ref-type="bibr">Walia et al., 2019</xref>
). In this method, individual classifier score estimation along with its performance optimization using an evolutionary backtracking search optimization algorithm (BSA) is presented. In addition, the core design of the fusion model using proportional conflict redistribution rules (PCR-6) is proposed. An accuracy of 98.43% and an error rate of 1.57% were achieved. However, biometric quality enhancement and experimentation with a real multimodal dataset are not used in this system.</p>
<p>There exist only a few works on a multimodal biometric system that includes fingerprint, finger-vein and face.
<xref rid="ref-19" ref-type="bibr">Rajesh & Selvarajan (2017)</xref>
proposed an algorithm for biometric recognition using fingerprint, finger-vein and face. They used score-level fusion to fuse these biometric traits, but they did not evaluate their system against other methods. As well, they did not provide information on the databases or the number of users in the study.</p>
</sec>
<sec sec-type="Proposed|system">
<title>Proposed System</title>
<sec>
<title>Fingerprint recognition system</title>
<p>This section describes the details of the proposed fingerprint recognition system using CNN-Softmax. In this work, our proposed method includes the following three major stages: (1) pre-processing the fingerprint image; (2) feature extraction with the CNN model; (3) using Softmax as a classifier. In the pre-processing step, Sobel and top-hat filtering methods improve the quality of the image by limiting the contrast. After that,
<italic>K</italic>
-means and DBSCAN approaches are applied to classify the image into foreground and background regions (
<xref rid="ref-6" ref-type="bibr">Cherrat, Alaoui & Bouzahir, 2019</xref>
). In addition, the Canny method (
<xref rid="ref-5" ref-type="bibr">Canny, 1987</xref>
) and the inner rectangle are adopted to extract the region of interest (ROI) from the segmented fingerprint. After this step, the features are extracted from the pre-processed fingerprint image using the CNN architecture. A sketch of this pre-processing pipeline is given below.</p>
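The following Python sketch illustrates one possible implementation of the pre-processing steps described above (Sobel/top-hat enhancement, K-means foreground/background separation and Canny-based ROI cropping). It is a minimal illustration under assumed parameters, not the authors' exact implementation; the DBSCAN refinement step is omitted for brevity.

import cv2
import numpy as np
from sklearn.cluster import KMeans

def preprocess_fingerprint(path, out_size=(88, 88)):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Contrast enhancement: Sobel gradient magnitude plus a top-hat transform
    sobel = cv2.magnitude(cv2.Sobel(img, cv2.CV_32F, 1, 0),
                          cv2.Sobel(img, cv2.CV_32F, 0, 1))
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    tophat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, kernel)
    enhanced = cv2.normalize(sobel + tophat.astype(np.float32), None,
                             0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # K-means (k=2) on pixel intensities to separate foreground from background
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(enhanced.reshape(-1, 1))
    labels = labels.reshape(enhanced.shape)
    fg_label = int(enhanced[labels == 1].mean() > enhanced[labels == 0].mean())
    mask = (labels == fg_label).astype(np.uint8)
    # Canny edges restricted to the foreground, then crop the bounding rectangle
    edges = cv2.Canny(img, 50, 150) * mask
    ys, xs = np.nonzero(edges)
    roi = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return cv2.resize(roi, out_size)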
<p>A CNN is a convolutional neural network based on a deep supervised learning model. In this regard, a CNN can be viewed as an automatic feature extractor and a trainable classifier (
<xref rid="ref-2" ref-type="bibr">Bhanu & Kumar, 2017</xref>
).
<xref ref-type="fig" rid="fig-2">Fig. 2</xref>
shows the configuration details of the proposed fingerprint-CNN architecture. The proposed model has five convolutional layers and three max-pooling layers, whose outputs are computed using
<xref ref-type="disp-formula" rid="eqn-1">Eq. (1)</xref>
. In addition, three rectified linear units (ReLU) are used in our system, as defined in
<xref ref-type="disp-formula" rid="eqn-2">Eq. (2)</xref>
.</p>
<disp-formula id="eqn-1">
<label>(1)</label>
<alternatives>
<graphic xlink:href="peerj-cs-06-248-e001.jpg" mimetype="image" mime-subtype="png" position="float" orientation="portrait"></graphic>
<tex-math id="M1">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{upgreek} \usepackage{mathrsfs} \setlength{\oddsidemargin}{-69pt} \begin{document} }{}$${O_n} = \mathop \sum \limits_{i = 1}^{N - 1} {x_i}{f_{n - i}}$$\end{document}</tex-math>
<mml:math id="mml-eqn-1">
<mml:mrow>
<mml:msub>
<mml:mi>O</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:munderover>
<mml:mrow>
<mml:mo movablelimits="false"></mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:mo></mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo></mml:mo>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</alternatives>
</disp-formula>
<p>where
<italic>O</italic>
is the output map,
<italic>x</italic>
is the input map,
<italic>f</italic>
is the filter and
<italic>N</italic>
is number of elements in
<italic>x</italic>
.</p>
<disp-formula id="eqn-2">
<label>(2)</label>
<alternatives>
<graphic xlink:href="peerj-cs-06-248-e002.jpg" mimetype="image" mime-subtype="png" position="float" orientation="portrait"></graphic>
<tex-math id="M2">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{upgreek} \usepackage{mathrsfs} \setlength{\oddsidemargin}{-69pt} \begin{document} }{}$$f\left( x \right) = \max \left( {0,{x}} \right)$$\end{document}</tex-math>
<mml:math id="mml-eqn-2">
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mo movablelimits="true" form="prefix">max</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:math>
</alternatives>
</disp-formula>
<p>where
<italic>x</italic>
is the input to a neuron.</p>
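The following NumPy snippet gives a small worked illustration of Eq. (1) (discrete convolution of an input map x with a filter f) and Eq. (2) (the ReLU activation). The array values are made up for the example.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # input map x
f = np.array([0.5, -1.0, 0.25])      # filter f

# O_n = sum_i x_i * f_(n-i): exactly what np.convolve computes
O = np.convolve(x, f)
relu = np.maximum(0, O)              # Eq. (2): f(x) = max(0, x), element-wise

print(O)     # raw convolution output
print(relu)  # output after the ReLU activation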
<fig id="fig-2" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/fig-2</object-id>
<label>Figure 2</label>
<caption>
<title>The architecture of the proposed fingerprint-CNN model.</title>
</caption>
<graphic xlink:href="peerj-cs-06-248-g002"></graphic>
</fig>
<p>The Softmax function can be applied to the fully connected layer output, as shown in
<xref ref-type="disp-formula" rid="eqn-3">Eq. (3)</xref>
.</p>
<disp-formula id="eqn-3">
<label>(3)</label>
<alternatives>
<graphic xlink:href="peerj-cs-06-248-e003.jpg" mimetype="image" mime-subtype="png" position="float" orientation="portrait"></graphic>
<tex-math id="M3">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{upgreek} \usepackage{mathrsfs} \setlength{\oddsidemargin}{-69pt} \begin{document} }{}$$S\left( {r,i} \right) = - \log \left( {\displaystyle{{{e^{{z_i}}}} \over {\mathop \sum \nolimits_{k = 1}^N {e^{{z_j}}}}}} \right)$$\end{document}</tex-math>
<mml:math id="mml-eqn-3">
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>r</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mo></mml:mo>
<mml:mi>log</mml:mi>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mstyle displaystyle="true" scriptlevel="0">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mo movablelimits="false"></mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mo></mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:math>
</alternatives>
</disp-formula>
<p>When the vector of output neurons is set to
<italic>r</italic>
, the probability that the input belongs to the
<italic>i</italic>
<sup>th</sup>
class is obtained by dividing the value of the
<italic>i</italic>
<sup>th</sup>
(
<italic>i</italic>
= 1…
<italic>j</italic>
) element by the sum of the values of all elements. A small numerical sketch follows.</p>
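The class-probability computation described above can be illustrated with a small NumPy sketch; the output-neuron values below are made up for the example.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())    # subtract the max for numerical stability
    return e / e.sum()

r = np.array([2.0, 1.0, 0.1])          # example output-neuron values
probs = softmax(r)                     # probability of each class
loss_for_class_0 = -np.log(probs[0])   # the -log(.) form used in Eq. (3)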
<p>The structure is described as follows: (1) L1: the input layer, with data size 88 × 88, which is the size of the input pre-processed fingerprint images; (2) L1M1: first hidden layer, composed of 64 convolutional filters of size 3 × 3 × 1, a ReLU activation function and a max-pooling layer of size 2 × 2. This layer changes the input data into CL1M1 = (42 × 42 × 64) features; (3) L2M2: second hidden layer, composed of 128 convolutional filters of size 3 × 3 × 64, a ReLU activation function and a max-pooling layer of size 2 × 2. This layer changes the input data into CL2M2 = (19 × 19 × 128) features; (4) L3M3: third hidden layer, composed of 128 convolutional filters of size 3 × 3 × 128, a ReLU activation function and a max-pooling layer of size 2 × 2. In order to disconnect the connections between the first layer and the next layers, a dropout probability of 20% is adopted. This layer transforms the input data into CL3M3 = (9 × 9 × 256) features; (5) L4M4: fourth hidden layer, namely the fully connected layer, representing the flattening process, which converts all the resulting two-dimensional arrays into a single long continuous linear vector. The feature size of the input data is 1 × 1 × 20,736; (6) L5M5: final hidden layer; this layer represents the feature descriptor of the fingerprint for recognition, describing it with informative features. The Softmax function is used to predict labels of the input patterns. A minimal sketch of this architecture is given below.</p>
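The following Keras sketch illustrates a CNN-Softmax model of the kind described above (stacked convolution/ReLU/max-pooling blocks with dropout, flattening and a Softmax output over 106 classes). The 88 × 88 input size and the 106-class output follow the text; filter counts, dropout placement, padding and the optimizer are illustrative assumptions, not the authors' exact configuration.

from tensorflow.keras import layers, models

def build_fingerprint_cnn(num_classes=106):
    model = models.Sequential([
        layers.Conv2D(64, (3, 3), activation="relu",
                      input_shape=(88, 88, 1)),    # pre-processed fingerprint image
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.2),                        # dropout regularization (placement assumed)
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(256, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),                           # single long feature vector
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model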
</sec>
<sec>
<title>Fingervein recognition system</title>
<p>In this section, the proposed algorithm for finger-vein recognition using a CNN as a feature extractor is described. Our proposed method consists of four phases: (1) the Canny method and the inner rectangle are used to obtain the ROI of the finger-vein image; (2) the exposure fusion framework (
<xref rid="ref-34" ref-type="bibr">Ying et al., 2017</xref>
) is applied to improve the contrast of the image by limiting the contrast amplification in the different regions of the image. The results for a finger-vein image using the Canny edge detector and contrast techniques such as contrast limited adaptive histogram equalization (
<xref rid="ref-20" ref-type="bibr">Reza, 2004</xref>
) and dynamic histogram equalization (
<xref rid="ref-1" ref-type="bibr">Abdullah-Al-Wadud et al., 2007</xref>
) are shown in
<xref ref-type="fig" rid="fig-3">Fig. 3</xref>
; (3) features are extracted based on the CNN; and (4) RF is employed as a classifier for finger-vein classification. The proposed model has five convolutional layers, three of which are followed by max-pooling, and three ReLUs. The RF classifier is used to predict labels of the input patterns.
<xref rid="table-1" ref-type="table">Table 1</xref>
summarizes the characteristics of the proposed fingervein-CNN configuration.</p>
<fig id="fig-3" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/fig-3</object-id>
<label>Figure 3</label>
<caption>
<title>Pre-processed finger-vein images using different enhancement algorithms from the Avera databases.</title>
<p>(A), (F), (K), (P) Original image. (B), (G), (L), (Q) Cropped image. (C), (H), (M), (R) CLAHE enhanced. (D), (I), (N), (S) DHE enhanced. (E), (J), (O), (T) Proposed enhanced using EFF.</p>
</caption>
<graphic xlink:href="peerj-cs-06-248-g003"></graphic>
</fig>
<table-wrap id="table-1" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/table-1</object-id>
<label>Table 1</label>
<caption>
<title>Proposed fingervein-CNN configuration.</title>
</caption>
<alternatives>
<graphic xlink:href="peerj-cs-06-248-g004"></graphic>
<table frame="hsides" rules="groups" content-type="text">
<colgroup span="1">
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
</colgroup>
<thead>
<tr>
<th rowspan="1" colspan="1">Type</th>
<th rowspan="1" colspan="1">Number of filter</th>
<th rowspan="1" colspan="1">Size of feature map</th>
<th rowspan="1" colspan="1">Filter size/stride</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1">Convolution</td>
<td rowspan="1" colspan="1">64</td>
<td rowspan="1" colspan="1">58 × 150 × 1</td>
<td rowspan="1" colspan="1">3 × 3/1</td>
</tr>
<tr>
<td rowspan="1" colspan="1">ReLU</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1">58 × 150 × 1</td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1">Max-pooling</td>
<td rowspan="1" colspan="1">1</td>
<td rowspan="1" colspan="1">29 × 75 × 64</td>
<td rowspan="1" colspan="1">3 × 3/2</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Convolution</td>
<td rowspan="1" colspan="1">128</td>
<td rowspan="1" colspan="1">27 × 73 × 128</td>
<td rowspan="1" colspan="1">3 × 3/1</td>
</tr>
<tr>
<td rowspan="1" colspan="1">ReLU</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1">27 × 73 × 128</td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1">Max-pooling</td>
<td rowspan="1" colspan="1">1</td>
<td rowspan="1" colspan="1">13 × 36 × 128</td>
<td rowspan="1" colspan="1">3 × 3/2</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Convolution</td>
<td rowspan="1" colspan="1">256</td>
<td rowspan="1" colspan="1">11 × 34 × 256</td>
<td rowspan="1" colspan="1">3 × 3/1</td>
</tr>
<tr>
<td rowspan="1" colspan="1">ReLU</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1">11 × 34 × 256</td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1">Max-pooling</td>
<td rowspan="1" colspan="1">1</td>
<td rowspan="1" colspan="1">5 × 17 × 256</td>
<td rowspan="1" colspan="1">3 × 3/2</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Fully-connected</td>
<td rowspan="1" colspan="1">1</td>
<td rowspan="1" colspan="1">1 × 21760</td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1">Fully-connected</td>
<td rowspan="1" colspan="1">1</td>
<td rowspan="1" colspan="1">1 × 106</td>
<td rowspan="1" colspan="1"></td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
</sec>
<sec>
<title>Random forest classifier</title>
<p>The RF algorithm, proposed by
<xref rid="ref-4" ref-type="bibr">Breiman (2001)</xref>
, is an ensemble learning technique for regression and classification. RF consists of bagging (bootstrap aggregating) of
<italic>T</italic>
decision trees with a randomized selection of features at each split. Given training data
<italic>X</italic>
, the RF algorithm proceeds as follows: (i) for each of the
<italic>T</italic>
trees, generate a bootstrap sample with replacement from the original training data; (ii) at each internal node of a tree grown on its bootstrap sample, randomly select
<italic>Y</italic>
predictors and pick the best split based on only these
<italic>Y</italic>
predictors rather than all predictors; (iii) aggregate the set of estimated decision trees in order to obtain a single prediction. A minimal usage sketch follows.</p>
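The snippet below is a minimal scikit-learn sketch of using a random forest as the classifier on top of CNN feature vectors, as described above. The feature arrays, their dimensionality and the number of trees are placeholders, not the authors' settings.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 512))      # placeholder CNN feature vectors
y_train = rng.integers(0, 106, size=500)   # 106 identity classes

rf = RandomForestClassifier(n_estimators=100, max_features="sqrt")
rf.fit(X_train, y_train)                   # bagging of randomized decision trees
scores = rf.predict_proba(X_train[:5])     # per-class scores usable for fusion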
</sec>
<sec>
<title>Face recognition system</title>
<p>In this section, the proposed algorithm for face recognition using a CNN as a feature extractor is described. Our proposed method consists of two phases: feature extraction based on the CNN, and employing Softmax as a classifier for face classification.
<xref rid="table-2" ref-type="table">Table 2</xref>
shows the configuration details of the proposed CNN architecture using face images. The proposed model has five convolutional layers, three of which are followed by max-pooling, and three ReLUs. In order to disconnect the connections between the first layer and the next layers, a dropout probability (
<xref rid="ref-26" ref-type="bibr">Srivastava et al., 2014</xref>
) of 20% is adopted. In addition, a dropout probability of 10% is applied between the second layer and the next layers.</p>
<table-wrap id="table-2" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/table-2</object-id>
<label>Table 2</label>
<caption>
<title>Proposed face-CNN configuration.</title>
</caption>
<alternatives>
<graphic xlink:href="peerj-cs-06-248-g005"></graphic>
<table frame="hsides" rules="groups" content-type="text">
<colgroup span="1">
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
</colgroup>
<thead>
<tr>
<th rowspan="1" colspan="1">Type</th>
<th rowspan="1" colspan="1">Number of filter</th>
<th rowspan="1" colspan="1">Size of feature map</th>
<th rowspan="1" colspan="1">Filter size/stride</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1">Convolution</td>
<td rowspan="1" colspan="1">32</td>
<td rowspan="1" colspan="1">88 × 88 × 1</td>
<td rowspan="1" colspan="1">3 × 3/1</td>
</tr>
<tr>
<td rowspan="1" colspan="1">ReLU</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1">88 × 88 × 1</td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1">Max-pooling</td>
<td rowspan="1" colspan="1">1</td>
<td rowspan="1" colspan="1">44 × 44 × 32</td>
<td rowspan="1" colspan="1">3 × 3/2</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Convolution</td>
<td rowspan="1" colspan="1">64</td>
<td rowspan="1" colspan="1">42 × 42 × 64</td>
<td rowspan="1" colspan="1">3 × 3/1</td>
</tr>
<tr>
<td rowspan="1" colspan="1">ReLU</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1">42 × 42 × 64</td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1">Max-pooling</td>
<td rowspan="1" colspan="1">1</td>
<td rowspan="1" colspan="1">21 × 21 × 64</td>
<td rowspan="1" colspan="1">3 × 3/2</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Convolution</td>
<td rowspan="1" colspan="1">128</td>
<td rowspan="1" colspan="1">19 × 19 × 128</td>
<td rowspan="1" colspan="1">3 × 3/1</td>
</tr>
<tr>
<td rowspan="1" colspan="1">ReLU</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1">19 × 19 × 128</td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1">Max-pooling</td>
<td rowspan="1" colspan="1">1</td>
<td rowspan="1" colspan="1">9 × 9 × 128</td>
<td rowspan="1" colspan="1">3 × 3/2</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Fully-connected</td>
<td rowspan="1" colspan="1">1</td>
<td rowspan="1" colspan="1">1 × 10388</td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1">Fully-connected</td>
<td rowspan="1" colspan="1">1</td>
<td rowspan="1" colspan="1">1 × 106</td>
<td rowspan="1" colspan="1"></td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
</sec>
<sec>
<title>Feature extraction fusion</title>
<p>In this section, we introduce our proposed score-level fusion technique based on the matching scores. A higher matching score indicates better proximity of the characteristic vector to the template.</p>
<p>The fused score is based on the weighted sum and weighted product, as shown in
<xref ref-type="disp-formula" rid="eqn-4">Eqs. (4)</xref>
and
<xref ref-type="disp-formula" rid="eqn-5">(5)</xref>
. If the fused score value obtained from the query fingerprint, finger-vein and face is greater than or equal to the decision threshold value, then the person is accepted; otherwise, the person is rejected (
<xref rid="ref-23" ref-type="bibr">Singh, Singh & Ross, 2019</xref>
).</p>
<disp-formula id="eqn-4">
<label>(4)</label>
<alternatives>
<graphic xlink:href="peerj-cs-06-248-e004.jpg" mimetype="image" mime-subtype="png" position="float" orientation="portrait"></graphic>
<tex-math id="M4">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{upgreek} \usepackage{mathrsfs} \setlength{\oddsidemargin}{-69pt} \begin{document} }{}$${\rm{Scor}}{{\rm{e}}_{{\rm{ws}}}}{\rm{\ =\ }}\;{w_{\rm{1}}}{S_{{\rm{FP}}}}{\rm{\ +\ }}{w_{\rm{2}}}{S_{{\rm{FV}}}}{\rm{\ +\ }}{w_{\rm{3}}}{S_{{\rm{FA}}}}$$\end{document}</tex-math>
<mml:math id="mml-eqn-4">
<mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">S</mml:mi>
<mml:mi mathvariant="normal">c</mml:mi>
<mml:mi mathvariant="normal">o</mml:mi>
<mml:mi mathvariant="normal">r</mml:mi>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">e</mml:mi>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">w</mml:mi>
<mml:mi mathvariant="normal">s</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mtext> </mml:mtext>
<mml:mo>=</mml:mo>
<mml:mtext> </mml:mtext>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">F</mml:mi>
<mml:mi mathvariant="normal">P</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mtext> </mml:mtext>
<mml:mo>+</mml:mo>
<mml:mtext> </mml:mtext>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">F</mml:mi>
<mml:mi mathvariant="normal">V</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mtext> </mml:mtext>
<mml:mo>+</mml:mo>
<mml:mtext> </mml:mtext>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">F</mml:mi>
<mml:mi mathvariant="normal">A</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</alternatives>
</disp-formula>
<disp-formula id="eqn-5">
<label>(5)</label>
<alternatives>
<graphic xlink:href="peerj-cs-06-248-e005.jpg" mimetype="image" mime-subtype="png" position="float" orientation="portrait"></graphic>
<tex-math id="M5">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{upgreek} \usepackage{mathrsfs} \setlength{\oddsidemargin}{-69pt} \begin{document} }{}$${\rm{Scor}}{{\rm{e}}_{{\rm{wp}}}}\,{\rm{ = }}\;{S_{{\rm{FP}}}}^{{w_{\rm{1}}}} \times {S_{{\rm{FV}}}}^{{w_{\rm{2}}}} \times {S_{{\rm{FA}}}}^{{w_{\rm{3}}}}$$\end{document}</tex-math>
<mml:math id="mml-eqn-5">
<mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">S</mml:mi>
<mml:mi mathvariant="normal">c</mml:mi>
<mml:mi mathvariant="normal">o</mml:mi>
<mml:mi mathvariant="normal">r</mml:mi>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">e</mml:mi>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">w</mml:mi>
<mml:mi mathvariant="normal">p</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>=</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mspace width="thickmathspace"></mml:mspace>
<mml:msup>
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">F</mml:mi>
<mml:mi mathvariant="normal">P</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>×</mml:mo>
<mml:msup>
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">F</mml:mi>
<mml:mi mathvariant="normal">V</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>×</mml:mo>
<mml:msup>
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">F</mml:mi>
<mml:mi mathvariant="normal">A</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:math>
</alternatives>
</disp-formula>
<p>where
<italic>S</italic>
<sub>FP</sub>
,
<italic>S</italic>
<sub>FV</sub>
,
<italic>S</italic>
<sub>FA</sub>
indicate the scores of the biometric matchers,
<italic>w</italic>
<sub>1</sub>
,
<italic>w</italic>
<sub>2</sub>
,
<italic>w</italic>
<sub>3</sub>
are the weight values in the range (0, 1), and the sum of
<italic>w</italic>
<sub>1</sub>
,
<italic>w</italic>
<sub>2</sub>
,
<italic>w</italic>
<sub>3</sub>
is always 1.</p>
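<p>For illustration, the following minimal Python sketch implements the two fusion rules of Eqs. (4) and (5) for a single set of normalized matcher scores; the weight values and the function name are placeholders and are not taken from the paper's code.</p>
<preformat>
def fuse_scores(s_fp, s_fv, s_fa, w=(0.4, 0.3, 0.3)):
    """Fuse normalized matcher scores (fingerprint, finger-vein, face)."""
    w1, w2, w3 = w                                          # weights sum to 1
    score_ws = w1 * s_fp + w2 * s_fv + w3 * s_fa            # Eq. (4), weighted sum
    score_wp = (s_fp ** w1) * (s_fv ** w2) * (s_fa ** w3)   # Eq. (5), weighted product
    return score_ws, score_wp

print(fuse_scores(0.92, 0.88, 0.95))  # one probe against one enrolled identity
</preformat>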
</sec>
<sec>
<title>Data augmentation</title>
<p>Data augmentation is one of the methods for reducing the effect of overfitting in CNN architectures. This technique increases the amount of training data through image translation, rotation and cropping. Many previous works have successfully used data augmentation. We implemented data augmentation as an extension of the work in
<xref rid="ref-15" ref-type="bibr">Krizhevsky, Sutskever & Hinton (2012)</xref>
applying rotation and translation (left, right, up and down) (
<xref rid="ref-18" ref-type="bibr">Park et al., 2016</xref>
). For the SDUMLA-HMT database, the augmented set is two times larger than the original database.</p>
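<p>A minimal sketch of such rotation and translation augmentation, assuming the Keras ImageDataGenerator API and illustrative parameter values only (the exact ranges used in the experiments are not restated here), is shown below.</p>
<preformat>
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=10,        # small random rotations (illustrative value)
    width_shift_range=0.1,    # translation left/right
    height_shift_range=0.1,   # translation up/down
)

# 'x_train' is assumed to be an array of images shaped (n, h, w, 1);
# drawing one augmented copy per original image doubles the set:
# augmented = next(augmenter.flow(x_train, batch_size=len(x_train), shuffle=False))
</preformat>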
</sec>
</sec>
<sec sec-type="results|discussion">
<title>Experimental Results and Discussion</title>
<p>The experimental platform in this study is as follows: host configuration, an Intel Core i7-4770 processor, 8 GB of RAM and an NVIDIA GeForce GTX 980 GPU with 4 GB of memory; runtime environment, Ubuntu 14.04 LTS (64 bit). To better verify our algorithm, the following classification methods are adopted in the experiments: support vector machine (SVM) (
<xref rid="ref-8" ref-type="bibr">Cortes & Vapnik, 1995</xref>
), RF (
<xref rid="ref-4" ref-type="bibr">Breiman, 2001</xref>
), logistic regression (LR) (
<xref rid="ref-9" ref-type="bibr">Hosmer, Lemeshow & Sturdivant, 2013</xref>
), fingervein biometric system (
<xref rid="ref-11" ref-type="bibr">Itqan et al., 2016</xref>
) and Multimodal biometric system using fingerprint, fingervein and face (
<xref rid="ref-19" ref-type="bibr">Rajesh & Selvarajan, 2017</xref>
). These algorithms were compared with each other. To validate the proposed algorithm, the results have been tested on the public SDUMLA-HMT (
<xref rid="ref-33" ref-type="bibr">Yin, Liu & Sun, 2011</xref>
) database, which includes real multimodal data of fingerprint, fingervein and face images. The total number of images is 41,340, which we divided into training, validation and test sets. The resulting data split used in the experiment is shown in
<xref rid="table-3" ref-type="table">Table 3</xref>
.</p>
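<p>A sketch of such a split, assuming per-class stratification so that every subject appears in all three subsets (function and variable names are illustrative, not the authors' code), could look as follows.</p>
<preformat>
from sklearn.model_selection import train_test_split

def split_dataset(images, labels, seed=42):
    # 80% training, then the remaining 20% halved into validation and test,
    # matching the 33,072 / 4,134 / 4,134 counts of Table 3.
    x_train, x_rest, y_train, y_rest = train_test_split(
        images, labels, train_size=0.8, stratify=labels, random_state=seed)
    x_val, x_test, y_val, y_test = train_test_split(
        x_rest, y_rest, train_size=0.5, stratify=y_rest, random_state=seed)
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)
</preformat>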
<table-wrap id="table-3" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/table-3</object-id>
<label>Table 3</label>
<caption>
<title>Dataset structure of fingerprint, fingervein and face databases.</title>
</caption>
<alternatives>
<graphic xlink:href="peerj-cs-06-248-g006"></graphic>
<table frame="hsides" rules="groups" content-type="text">
<colgroup span="1">
<col span="1"></col>
<col span="1"></col>
</colgroup>
<thead>
<tr>
<th rowspan="1" colspan="1"></th>
<th rowspan="1" colspan="1">SDUMLA-HMT database</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1">Class number</td>
<td rowspan="1" colspan="1">106</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Image number</td>
<td rowspan="1" colspan="1">41,340</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Training</td>
<td rowspan="1" colspan="1">33,072</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Validation</td>
<td rowspan="1" colspan="1">4,134</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Test</td>
<td rowspan="1" colspan="1">4,134</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<p>The performance measure is accuracy rate as defined by
<xref ref-type="disp-formula" rid="eqn-6">Eq. (6)</xref>
.</p>
<disp-formula id="eqn-6">
<label>(6)</label>
<alternatives>
<graphic xlink:href="peerj-cs-06-248-e006.jpg" mimetype="image" mime-subtype="png" position="float" orientation="portrait"></graphic>
<tex-math id="M6">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{upgreek} \usepackage{mathrsfs} \setlength{\oddsidemargin}{-69pt} \begin{document} }{}$$\rm{Accuracy}\; = \; \displaystyle{{\rm{TP} + \rm{TN}} \over {\rm{TP} + \rm{TN} + \rm{FP} + \rm{FN}}}\; \times 100$$\end{document}</tex-math>
<mml:math id="mml-eqn-6">
<mml:mrow>
<mml:mi mathvariant="normal">A</mml:mi>
<mml:mi mathvariant="normal">c</mml:mi>
<mml:mi mathvariant="normal">c</mml:mi>
<mml:mi mathvariant="normal">u</mml:mi>
<mml:mi mathvariant="normal">r</mml:mi>
<mml:mi mathvariant="normal">a</mml:mi>
<mml:mi mathvariant="normal">c</mml:mi>
<mml:mi mathvariant="normal">y</mml:mi>
</mml:mrow>
<mml:mspace width="thickmathspace"></mml:mspace>
<mml:mo>=</mml:mo>
<mml:mspace width="thickmathspace"></mml:mspace>
<mml:mstyle displaystyle="true" scriptlevel="0">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">T</mml:mi>
<mml:mi mathvariant="normal">P</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi mathvariant="normal">T</mml:mi>
<mml:mi mathvariant="normal">N</mml:mi>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">T</mml:mi>
<mml:mi mathvariant="normal">P</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi mathvariant="normal">T</mml:mi>
<mml:mi mathvariant="normal">N</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi mathvariant="normal">F</mml:mi>
<mml:mi mathvariant="normal">P</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi mathvariant="normal">F</mml:mi>
<mml:mi mathvariant="normal">N</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mspace width="thickmathspace"></mml:mspace>
<mml:mo>×</mml:mo>
<mml:mn>100</mml:mn>
</mml:mstyle>
</mml:math>
</alternatives>
</disp-formula>
<p>where true positives (TP) are authorized users who are correctly recognized, true negatives (TN) are unauthorized users who are correctly rejected, false positives (FP) are unauthorized users who are incorrectly recognized, and false negatives (FN) are authorized users who are incorrectly rejected; each count is taken relative to the total number of tested samples.</p>
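<p>Equation (6) translates directly into code; the counts below are placeholders used only to show the computation.</p>
<preformat>
def accuracy(tp, tn, fp, fn):
    """Accuracy (%) from confusion-matrix counts, as in Eq. (6)."""
    return 100.0 * (tp + tn) / (tp + tn + fp + fn)

print(accuracy(tp=2050, tn=2050, fp=20, fn=14))  # illustrative values only
</preformat>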
<p>As can be seen from
<xref rid="table-4" ref-type="table">Table 4</xref>
, the proposed fingerprint recognition using CNN with the dropout method leads to a significant performance improvement on the real multimodal database.</p>
<table-wrap id="table-4" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/table-4</object-id>
<label>Table 4</label>
<caption>
<title>The training set result of proposed fingerprint recognition using CNN.</title>
</caption>
<alternatives>
<graphic xlink:href="peerj-cs-06-248-g007"></graphic>
<table frame="hsides" rules="groups" content-type="text">
<colgroup span="1">
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
</colgroup>
<thead>
<tr>
<th rowspan="2" colspan="1">Images</th>
<th colspan="2" rowspan="1">Train set without dropout</th>
<th colspan="2" rowspan="1">Training set with dropout</th>
</tr>
<tr>
<th rowspan="1" colspan="1">Accuracy (%)</th>
<th rowspan="1" colspan="1">Loss (%)</th>
<th rowspan="1" colspan="1">Accuracy (%)</th>
<th rowspan="1" colspan="1">Loss (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1">Original
<sub>FP</sub>
</td>
<td rowspan="1" colspan="1">98.96</td>
<td rowspan="1" colspan="1">3.65</td>
<td rowspan="1" colspan="1">99.31</td>
<td rowspan="1" colspan="1">2.35</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Enhanced
<sub>FP</sub>
</td>
<td rowspan="1" colspan="1">99.49</td>
<td rowspan="1" colspan="1">1.93</td>
<td rowspan="1" colspan="1">99.56</td>
<td rowspan="1" colspan="1">1.23</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Proposed Enhanced
<sub>FP</sub>
</td>
<td rowspan="1" colspan="1">99.13</td>
<td rowspan="1" colspan="1">2.16</td>
<td rowspan="1" colspan="1">99.63</td>
<td rowspan="1" colspan="1">1.17</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<p>In particular, the highest accuracy gain is obtained with the dropout method for both the training and test sets. Moreover, the lowest loss is achieved when the dropout method is used. For the training set, it can be noted from
<xref rid="table-4" ref-type="table">Table 4</xref>
that the accuracy of 99.13% increases to 99.63% and the loss rate of 2.16% is reduced to 1.17% in the proposed method, owing to the addition of the dropout function to our system. For the test set, based on the results in
<xref rid="table-5" ref-type="table">Table 5</xref>
, the accuracy of 99.33% increases to 99.48% and the loss rate drops from 2.16% to 2.03% in the proposed fingerprint identification method.</p>
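<p>As a hedged sketch of the "with dropout" configuration discussed above, a small Keras CNN with one dropout layer is given below; the filter sizes, dropout rate and input shape are illustrative assumptions, and the actual configurations are provided in the supplemental JSON files.</p>
<preformat>
from tensorflow.keras import layers, models

def build_cnn(input_shape=(128, 128, 1), n_classes=106, dropout_rate=0.5):
    model = models.Sequential([
        layers.Conv2D(64, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(dropout_rate),   # the layer removed in the "without dropout" runs
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
</preformat>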
<table-wrap id="table-5" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/table-5</object-id>
<label>Table 5</label>
<caption>
<title>The test set result of proposed fingerprint recognition using CNN.</title>
</caption>
<alternatives>
<graphic xlink:href="peerj-cs-06-248-g008"></graphic>
<table frame="hsides" rules="groups" content-type="text">
<colgroup span="1">
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
</colgroup>
<thead>
<tr>
<th rowspan="2" colspan="1">Images</th>
<th colspan="2" rowspan="1">Test set without dropout</th>
<th colspan="2" rowspan="1">Test set with dropout</th>
</tr>
<tr>
<th rowspan="1" colspan="1">Accuracy (%)</th>
<th rowspan="1" colspan="1">Loss (%)</th>
<th rowspan="1" colspan="1">Accuracy (%)</th>
<th rowspan="1" colspan="1">Loss (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1">Original
<sub>FP</sub>
</td>
<td rowspan="1" colspan="1">97.06</td>
<td rowspan="1" colspan="1">9.12</td>
<td rowspan="1" colspan="1">97.66</td>
<td rowspan="1" colspan="1">5.71</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Enhanced
<sub>FP</sub>
</td>
<td rowspan="1" colspan="1">98.29</td>
<td rowspan="1" colspan="1">5.68</td>
<td rowspan="1" colspan="1">99.16</td>
<td rowspan="1" colspan="1">3.14</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Proposed enhanced
<sub>FP</sub>
</td>
<td rowspan="1" colspan="1">99.33</td>
<td rowspan="1" colspan="1">2.16</td>
<td rowspan="1" colspan="1">99.48</td>
<td rowspan="1" colspan="1">2.03</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<p>The comparison of the pre-processing algorithms with the proposed fingervein system is shown in
<xref rid="table-6" ref-type="table">Tables 6</xref>
and
<xref rid="table-7" ref-type="table">7</xref>
. The proposed work gives the highest average accuracy of 99.09% and the lowest loss of 2.69% for the training set. For the test set, the highest average accuracy of 99.27% and the lowest loss of 2.05% are likewise obtained with our algorithm.</p>
<table-wrap id="table-6" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/table-6</object-id>
<label>Table 6</label>
<caption>
<title>The training set result of proposed fingervein recognition using CNN.</title>
</caption>
<alternatives>
<graphic xlink:href="peerj-cs-06-248-g009"></graphic>
<table frame="hsides" rules="groups" content-type="text">
<colgroup span="1">
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
</colgroup>
<thead>
<tr>
<th rowspan="1" colspan="1">Images</th>
<th rowspan="1" colspan="1">Accuracy (%)</th>
<th rowspan="1" colspan="1">Loss (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1">Original
<sub>FV</sub>
</td>
<td rowspan="1" colspan="1">96.58</td>
<td rowspan="1" colspan="1">19.12</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Cropped
<sub>FV</sub>
</td>
<td rowspan="1" colspan="1">97.09</td>
<td rowspan="1" colspan="1">15.22</td>
</tr>
<tr>
<td rowspan="1" colspan="1">CLAHE
<sub>FV</sub>
</td>
<td rowspan="1" colspan="1">98.45</td>
<td rowspan="1" colspan="1">6.36</td>
</tr>
<tr>
<td rowspan="1" colspan="1">DHE
<sub>FV</sub>
</td>
<td rowspan="1" colspan="1">97.65</td>
<td rowspan="1" colspan="1">11.12</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Proposed enhanced
<sub>FV</sub>
</td>
<td rowspan="1" colspan="1">99.09</td>
<td rowspan="1" colspan="1">2.69</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<table-wrap id="table-7" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/table-7</object-id>
<label>Table 7</label>
<caption>
<title>The test set result of proposed fingervein recognition using CNN.</title>
</caption>
<alternatives>
<graphic xlink:href="peerj-cs-06-248-g010"></graphic>
<table frame="hsides" rules="groups" content-type="text">
<colgroup span="1">
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
</colgroup>
<thead>
<tr>
<th rowspan="1" colspan="1">Images</th>
<th rowspan="1" colspan="1">Accuracy (%)</th>
<th rowspan="1" colspan="1">Loss (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1">Original
<sub>FV</sub>
</td>
<td rowspan="1" colspan="1">96.98</td>
<td rowspan="1" colspan="1">12.08</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Cropped
<sub>FV</sub>
</td>
<td rowspan="1" colspan="1">97.89</td>
<td rowspan="1" colspan="1">9.32</td>
</tr>
<tr>
<td rowspan="1" colspan="1">CLAHE
<sub>FV</sub>
</td>
<td rowspan="1" colspan="1">98.97</td>
<td rowspan="1" colspan="1">4.23</td>
</tr>
<tr>
<td rowspan="1" colspan="1">DHE
<sub>FV</sub>
</td>
<td rowspan="1" colspan="1">98.25</td>
<td rowspan="1" colspan="1">5.52</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Proposed enhanced
<sub>FV</sub>
</td>
<td rowspan="1" colspan="1">99.27</td>
<td rowspan="1" colspan="1">2.05</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<p>The results of
<xref rid="table-8" ref-type="table">Tables 8</xref>
and
<xref rid="table-9" ref-type="table">9</xref>
show that the dropout method plays an important role in increasing the accuracy of the proposed face recognition system. For the training set, the accuracy of 99.25% increases to 99.55% and the loss rate of 1.96% is lowered to 1.77%. For the test set, the accuracy of 99.05% increases to 99.13%, while the loss rate rises slightly from 2.10% to 2.27%.</p>
<table-wrap id="table-8" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/table-8</object-id>
<label>Table 8</label>
<caption>
<title>The training set result of proposed face recognition using CNN.</title>
</caption>
<alternatives>
<graphic xlink:href="peerj-cs-06-248-g011"></graphic>
<table frame="hsides" rules="groups" content-type="text">
<colgroup span="1">
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
</colgroup>
<thead>
<tr>
<th rowspan="2" colspan="1">Images</th>
<th colspan="2" rowspan="1">Train set without dropout</th>
<th colspan="2" rowspan="1">Training set with dropout</th>
</tr>
<tr>
<th rowspan="1" colspan="1">Accuracy (%)</th>
<th rowspan="1" colspan="1">Loss (%)</th>
<th rowspan="1" colspan="1">Accuracy (%)</th>
<th rowspan="1" colspan="1">Loss (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1">Original
<sub>Fa</sub>
</td>
<td rowspan="1" colspan="1">99.25</td>
<td rowspan="1" colspan="1">1.96</td>
<td rowspan="1" colspan="1">99.55</td>
<td rowspan="1" colspan="1">1.77</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<table-wrap id="table-9" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/table-9</object-id>
<label>Table 9</label>
<caption>
<title>The test set result of proposed face recognition using CNN.</title>
</caption>
<alternatives>
<graphic xlink:href="peerj-cs-06-248-g012"></graphic>
<table frame="hsides" rules="groups" content-type="text">
<colgroup span="1">
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
</colgroup>
<thead>
<tr>
<th rowspan="2" colspan="1">Images</th>
<th colspan="2" rowspan="1">Test set without Dropout</th>
<th colspan="2" rowspan="1">Test set with dropout</th>
</tr>
<tr>
<th rowspan="1" colspan="1">Accuracy (%)</th>
<th rowspan="1" colspan="1">Loss (%)</th>
<th rowspan="1" colspan="1">Accuracy (%)</th>
<th rowspan="1" colspan="1">Loss (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1">Original
<sub>Fa</sub>
</td>
<td rowspan="1" colspan="1">99.05</td>
<td rowspan="1" colspan="1">2.10</td>
<td rowspan="1" colspan="1">99.13</td>
<td rowspan="1" colspan="1">2.27</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<p>From
<xref rid="table-10" ref-type="table">Table 10</xref>
, we compared the results of the SVM, LR and RF classifiers with the CNN for the proposed fingerprint system; the highest accuracy of 99.48% is reached with the Softmax classifier. As shown in this table, the RF classifier gives the highest accuracy of 99.53% with the proposed fingervein system using the CNN architecture. Also, based on the results in
<xref rid="table-10" ref-type="table">Table 10</xref>
it can be seen that the Softmax classifier gives the highest accuracy of 99.13% for the proposed face system using the CNN model.</p>
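<p>The comparison in Table 10 can be reproduced in outline by feeding the penultimate-layer activations of a trained CNN to classical classifiers; the sketch below assumes a trained Keras model and integer class labels, and its names are illustrative rather than the authors' implementation.</p>
<preformat>
from tensorflow.keras import models
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def evaluate_classifiers(cnn, x_train, y_train, x_test, y_test):
    # Penultimate-layer activations serve as features for the classical models;
    # the CNN's own Softmax output is evaluated separately with cnn.evaluate().
    extractor = models.Model(cnn.input, cnn.layers[-2].output)
    f_train = extractor.predict(x_train)
    f_test = extractor.predict(x_test)
    results = {}
    for name, clf in [("SVM", SVC()),
                      ("LR", LogisticRegression(max_iter=1000)),
                      ("RF", RandomForestClassifier())]:
        clf.fit(f_train, y_train)
        results[name] = accuracy_score(y_test, clf.predict(f_test))
    return results
</preformat>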
<table-wrap id="table-10" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/table-10</object-id>
<label>Table 10</label>
<caption>
<title>The result of proposed system recognition unimodal biometric using CNN with different classifiers.</title>
</caption>
<alternatives>
<graphic xlink:href="peerj-cs-06-248-g013"></graphic>
<table frame="hsides" rules="groups" content-type="text">
<colgroup span="1">
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
</colgroup>
<thead>
<tr>
<th rowspan="1" colspan="1">Classifiers</th>
<th rowspan="1" colspan="1">Fingerprint</th>
<th rowspan="1" colspan="1">Finger vein</th>
<th rowspan="1" colspan="1">Face</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1">CNN & SoftMax</td>
<td rowspan="1" colspan="1">99.48%</td>
<td rowspan="1" colspan="1">99.27%</td>
<td rowspan="1" colspan="1">99.13%</td>
</tr>
<tr>
<td rowspan="1" colspan="1">CNN & SVM</td>
<td rowspan="1" colspan="1">97.65%</td>
<td rowspan="1" colspan="1">99.33%</td>
<td rowspan="1" colspan="1">97.88%</td>
</tr>
<tr>
<td rowspan="1" colspan="1">CNN & LR</td>
<td rowspan="1" colspan="1">85.61%</td>
<td rowspan="1" colspan="1">84.14%</td>
<td rowspan="1" colspan="1">92.43%</td>
</tr>
<tr>
<td rowspan="1" colspan="1">CNN & RF</td>
<td rowspan="1" colspan="1">97.33%</td>
<td rowspan="1" colspan="1">99.53%</td>
<td rowspan="1" colspan="1">91.95%</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<p>From
<xref rid="table-11" ref-type="table">Table 11</xref>
, it is clear that the highest recognition rate is obtained when the weighted sum is used as the fusion rule.
<xref rid="table-12" ref-type="table">Table 12</xref>
presents the computational time of the fusion methods on the database.
<xref rid="table-13" ref-type="table">Table 13</xref>
shows the comparison between the proposed unimodal, bimodal and multimodal biometric systems using CNN on the database. The proposed bimodal combinations of fingerprint, fingervein and face achieve acceptable identification results compared with the unimodal systems. The proposed multimodal biometric system increases the recognition accuracy over the unimodal and bimodal identification systems, with an accuracy rate of 99.49%. Although the existing biometric method (
<xref rid="ref-30" ref-type="bibr">Walia et al., 2019</xref>
) is able to obtain a recognition rate of 99.61%, it is slower than our proposed fusion method, whose computational time is 69 ms.</p>
<table-wrap id="table-11" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/table-11</object-id>
<label>Table 11</label>
<caption>
<title>The result of proposed recognition systems using CNN with rules fusion.</title>
</caption>
<alternatives>
<graphic xlink:href="peerj-cs-06-248-g014"></graphic>
<table frame="hsides" rules="groups" content-type="text">
<colgroup span="1">
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
</colgroup>
<thead>
<tr>
<th rowspan="2" colspan="1">Algorithms</th>
<th colspan="2" rowspan="1">Rules fusion</th>
</tr>
<tr>
<th rowspan="1" colspan="1">Weighted sum</th>
<th rowspan="1" colspan="1">Weighted product</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1">Fingerprint
<sub>CNN</sub>
& Fingervein
<sub>CNN</sub>
</td>
<td rowspan="1" colspan="1">99.59</td>
<td rowspan="1" colspan="1">99.58</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Fingerprint
<sub>CNN</sub>
& Face
<sub>CNN</sub>
</td>
<td rowspan="1" colspan="1">99.30</td>
<td rowspan="1" colspan="1">99.28</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Fingervein
<sub>CNN</sub>
& Face
<sub>CNN</sub>
</td>
<td rowspan="1" colspan="1">99.20</td>
<td rowspan="1" colspan="1">99.17</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Fingerprint
<sub>CNN</sub>
& Fingervein
<sub>CNN</sub>
& Face
<sub>CNN</sub>
</td>
<td rowspan="1" colspan="1">99.73</td>
<td rowspan="1" colspan="1">99.70</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<table-wrap id="table-12" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/table-12</object-id>
<label>Table 12</label>
<caption>
<title>Computational time (ms) for fusion method of database.</title>
</caption>
<alternatives>
<graphic xlink:href="peerj-cs-06-248-g015"></graphic>
<table frame="hsides" rules="groups" content-type="text">
<colgroup span="1">
<col span="1"></col>
<col span="1"></col>
</colgroup>
<thead>
<tr>
<th rowspan="1" colspan="1">Algorithms</th>
<th rowspan="1" colspan="1">Time (ms)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1">Fingerprint
<sub>ANN</sub>
& Fingervein
<sub>ANN</sub>
& Face
<sub>ANN</sub>
(
<xref rid="ref-19" ref-type="bibr">Rajesh & Selvarajan, 2017</xref>
)</td>
<td rowspan="1" colspan="1">130</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Score level fusion model (
<xref rid="ref-30" ref-type="bibr">Walia et al., 2019</xref>
)</td>
<td rowspan="1" colspan="1">580</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Proposed fingerprint
<sub>CNN</sub>
& Fingervein
<sub>CNN</sub>
& Face
<sub>CNN</sub>
</td>
<td rowspan="1" colspan="1">69</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<table-wrap id="table-13" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/table-13</object-id>
<label>Table 13</label>
<caption>
<title>The accuracy rate for proposed systems and different recognition biometric system results.</title>
</caption>
<alternatives>
<graphic xlink:href="peerj-cs-06-248-g016"></graphic>
<table frame="hsides" rules="groups" content-type="text">
<colgroup span="1">
<col span="1"></col>
<col span="1"></col>
</colgroup>
<thead>
<tr>
<th rowspan="1" colspan="1">Algorithms</th>
<th rowspan="1" colspan="1">Accuracy (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1">Enhanced fingerprint
<sub>CNN</sub>
using
<xref rid="ref-6" ref-type="bibr">Cherrat, Alaoui & Bouzahir (2019)</xref>
</td>
<td rowspan="1" colspan="1">99.48</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Enhanced fingervein
<sub>CNN</sub>
using
<xref rid="ref-34" ref-type="bibr">Ying et al. (2017)</xref>
</td>
<td rowspan="1" colspan="1">99.53</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Fingervein
<sub>CNN</sub>
(
<xref rid="ref-11" ref-type="bibr">Itqan et al., 2016</xref>
)</td>
<td rowspan="1" colspan="1">96.65</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Proposed face
<sub>CNN</sub>
</td>
<td rowspan="1" colspan="1">99.13</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Proposed fingerprint
<sub>CNN</sub>
& Fingervein
<sub>CNN</sub>
</td>
<td rowspan="1" colspan="1">99.51</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Proposed fingerprint
<sub>CNN</sub>
& Face
<sub>CNN</sub>
</td>
<td rowspan="1" colspan="1">99.31</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Proposed fingervein
<sub>CNN</sub>
& Face
<sub>CNN</sub>
</td>
<td rowspan="1" colspan="1">99.33</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Fingerprint
<sub>ANN</sub>
& Fingervein
<sub>ANN</sub>
& Face
<sub>ANN</sub>
(
<xref rid="ref-19" ref-type="bibr">Rajesh & Selvarajan, 2017</xref>
)</td>
<td rowspan="1" colspan="1">99.23</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Score level fusion model (
<xref rid="ref-30" ref-type="bibr">Walia et al., 2019</xref>
)</td>
<td rowspan="1" colspan="1">99.61</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Proposed fingerprint
<sub>CNN</sub>
& Fingervein
<sub>CNN</sub>
& Face
<sub>CNN</sub>
</td>
<td rowspan="1" colspan="1">99.49</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<p>Finally, we can conclude from these results that the proposed multimodal system is superior to other methods because:
<list list-type="order">
<list-item>
<p>The proposed enhanced fingerprint and finger-vein patterns are more clearly distinguishable and more prominent than the other enhanced versions. Therefore, the proposed methods are typically able to guarantee a high identification rate.</p>
</list-item>
<list-item>
<p>The recognition accuracy obtained with the dropout method is better than that obtained without it.</p>
</list-item>
<list-item>
<p>The CNN approach can usually provide better performance than combinations of separate processing steps such as windowing and feature extraction. Thus, a biometric recognition system based on the CNN technique can surpass other classical and more complicated techniques.</p>
</list-item>
<list-item>
<p>The proposed multimodal algorithm has higher accuracy for identifying a person and ensures that their information or data is safer compared with systems based on single or bimodal biometrics.</p>
</list-item>
</list>
</p>
</sec>
<sec sec-type="conclusions">
<title>Conclusion</title>
<p>A system for human recognition using CNN models and a multimodal biometric identification system based on the fusion of fingerprint, fingervein and face images has been introduced in this work. The experimental results on a real multimodal database have shown that the overall identification performance of the proposed multimodal system is better than that of unimodal and bimodal biometric systems based on CNN and different classifiers. From the results obtained, we can also conclude that the pre-processing algorithms improved the accuracy rate of the proposed system. The dropout technique plays an important role in increasing the recognition accuracy and reducing the loss rate of the system. For future study, extending the proposed algorithm to other applications is a task worth investigating, where it will be tested with a more challenging dataset that contains a large number of subjects.</p>
</sec>
<sec sec-type="supplementary-material" id="supplemental-information">
<title>Supplemental Information</title>
<supplementary-material content-type="local-data" id="supp-1">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/supp-1</object-id>
<label>Supplemental Information 1</label>
<caption>
<title>Preparation face images code.</title>
</caption>
<media xlink:href="peerj-cs-06-248-s001.py">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="supp-2">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/supp-2</object-id>
<label>Supplemental Information 2</label>
<caption>
<title>Preparation fingervein images code.</title>
</caption>
<media xlink:href="peerj-cs-06-248-s002.py">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="supp-3">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/supp-3</object-id>
<label>Supplemental Information 3</label>
<caption>
<title>Preparation fingerprint images code.</title>
</caption>
<media xlink:href="peerj-cs-06-248-s003.py">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="supp-4">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/supp-4</object-id>
<label>Supplemental Information 4</label>
<caption>
<title>Image contrast enhancement CLAHE code.</title>
</caption>
<media xlink:href="peerj-cs-06-248-s004.py">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="supp-5">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/supp-5</object-id>
<label>Supplemental Information 5</label>
<caption>
<title>Image contrast enhancement DHE code.</title>
</caption>
<media xlink:href="peerj-cs-06-248-s005.py">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="supp-6">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/supp-6</object-id>
<label>Supplemental Information 6</label>
<caption>
<title>Image contrast enhancement Ying code.</title>
</caption>
<media xlink:href="peerj-cs-06-248-s006.py">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="supp-7">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/supp-7</object-id>
<label>Supplemental Information 7</label>
<caption>
<title>CNN face code.</title>
</caption>
<media xlink:href="peerj-cs-06-248-s007.py">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="supp-8">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/supp-8</object-id>
<label>Supplemental Information 8</label>
<caption>
<title>CNN fingerprint code.</title>
</caption>
<media xlink:href="peerj-cs-06-248-s008.py">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="supp-9">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/supp-9</object-id>
<label>Supplemental Information 9</label>
<caption>
<title>CNN finger vein code.</title>
</caption>
<media xlink:href="peerj-cs-06-248-s009.py">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="supp-10">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/supp-10</object-id>
<label>Supplemental Information 10</label>
<caption>
<title>Scores of fingervein system with CNN 64 × 128 × 256.</title>
</caption>
<media xlink:href="peerj-cs-06-248-s010.csv">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="supp-11">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/supp-11</object-id>
<label>Supplemental Information 11</label>
<caption>
<title>Scores of fingervein system with CNN 64 × 128 × 256.</title>
</caption>
<media xlink:href="peerj-cs-06-248-s011.csv">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="supp-12">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/supp-12</object-id>
<label>Supplemental Information 12</label>
<caption>
<title>Scores of face system with CNN 64 × 128 × 256.</title>
</caption>
<media xlink:href="peerj-cs-06-248-s012.csv">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="supp-13">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/supp-13</object-id>
<label>Supplemental Information 13</label>
<caption>
<title>Configuration neural network model of fingerprint identification system.</title>
</caption>
<media xlink:href="peerj-cs-06-248-s013.json">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="supp-14">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/supp-14</object-id>
<label>Supplemental Information 14</label>
<caption>
<title>Configuration neural network model of face identification system.</title>
</caption>
<media xlink:href="peerj-cs-06-248-s014.json">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="supp-15">
<object-id pub-id-type="doi">10.7717/peerj-cs.248/supp-15</object-id>
<label>Supplemental Information 15</label>
<caption>
<title>Configuration neural network model of finger-vein identification system.</title>
</caption>
<media xlink:href="peerj-cs-06-248-s015.json">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
</sec>
</body>
<back>
<sec sec-type="additional-information">
<title>Additional Information and Declarations</title>
<fn-group content-type="competing-interests">
<title>Competing Interests</title>
<fn fn-type="COI-statement" id="conflict-1">
<p>The authors declare that they have no competing interests.</p>
</fn>
</fn-group>
<fn-group content-type="author-contributions">
<title>Author Contributions</title>
<fn fn-type="con" id="contribution-1">
<p>
<xref ref-type="contrib" rid="author-1">El mehdi Cherrat</xref>
analyzed the data, performed the computation work, conceived and designed the experiments, performed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.</p>
</fn>
<fn fn-type="con" id="contribution-2">
<p>
<xref ref-type="contrib" rid="author-2">Rachid Alaoui</xref>
analyzed the data, performed the computation work, conceived and designed the experiments, performed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.</p>
</fn>
<fn fn-type="con" id="contribution-3">
<p>
<xref ref-type="contrib" rid="author-3">Hassane Bouzahir</xref>
analyzed the data, performed the computation work, conceived and designed the experiments, performed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.</p>
</fn>
</fn-group>
<fn-group content-type="other">
<title>Data Availability</title>
<fn id="addinfo-1">
<p>The following information was supplied regarding data availability:</p>
<p>SDUMLA-HMT real multimodal database is available at:
<uri xlink:href="http://mla.sdu.edu.cn/info/1006/1195.htm">http://mla.sdu.edu.cn/info/1006/1195.htm</uri>
.</p>
<p>Code and data are available in the
<xref ref-type="supplementary-material" rid="supplemental-information">Supplemental Files</xref>
.</p>
</fn>
</fn-group>
</sec>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1">
<label>Abdullah-Al-Wadud et al. (2007)</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Abdullah-Al-Wadud</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Kabir</surname>
<given-names>MH</given-names>
</name>
<name>
<surname>Dewan</surname>
<given-names>MAA</given-names>
</name>
<name>
<surname>Chae</surname>
<given-names>O</given-names>
</name>
</person-group>
<article-title>A dynamic histogram equalization for image contrast enhancement</article-title>
<source>IEEE Transactions on Consumer Electronics</source>
<year>2007</year>
<volume>53</volume>
<issue>2</issue>
<fpage>593</fpage>
<lpage>600</lpage>
<pub-id pub-id-type="doi">10.1109/TCE.2007.381734</pub-id>
</element-citation>
</ref>
<ref id="ref-2">
<label>Bhanu & Kumar (2017)</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Bhanu</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Kumar</surname>
<given-names>A</given-names>
</name>
</person-group>
<source>Deep learning for biometrics</source>
<year>2017</year>
<publisher-loc>Cham</publisher-loc>
<publisher-name>Springer</publisher-name>
<pub-id pub-id-type="doi">10.1007/978-3-319-61657-5</pub-id>
</element-citation>
</ref>
<ref id="ref-3">
<label>Borra, Reddy & Reddy (2018)</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Borra</surname>
<given-names>SR</given-names>
</name>
<name>
<surname>Reddy</surname>
<given-names>GJ</given-names>
</name>
<name>
<surname>Reddy</surname>
<given-names>ES</given-names>
</name>
</person-group>
<article-title>An efficient fingerprint identification using neural network and BAT algorithm</article-title>
<source>International Journal of Electrical & Computer Engineering</source>
<year>2018</year>
<volume>8</volume>
<issue>2</issue>
<fpage>1194</fpage>
<lpage>1213</lpage>
<pub-id pub-id-type="doi">10.11591/ijece.v8i2.pp1194-1213</pub-id>
</element-citation>
</ref>
<ref id="ref-4">
<label>Breiman (2001)</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Breiman</surname>
<given-names>L</given-names>
</name>
</person-group>
<article-title>Random forests</article-title>
<source>Machine Learning</source>
<year>2001</year>
<volume>45</volume>
<issue>1</issue>
<fpage>5</fpage>
<lpage>32</lpage>
<pub-id pub-id-type="doi">10.1023/A:1010933404324</pub-id>
</element-citation>
</ref>
<ref id="ref-5">
<label>Canny (1987)</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Canny</surname>
<given-names>J</given-names>
</name>
</person-group>
<person-group person-group-type="editor">
<name>
<surname>Fischler</surname>
<given-names>MA</given-names>
</name>
<name>
<surname>Firschein</surname>
<given-names>O</given-names>
</name>
</person-group>
<article-title>A computational approach to edge detection</article-title>
<source>Readings in Computer Vision</source>
<year>1987</year>
<publisher-loc>San Francisco</publisher-loc>
<publisher-name>Morgan Kaufmann</publisher-name>
<fpage>184</fpage>
<lpage>203</lpage>
</element-citation>
</ref>
<ref id="ref-6">
<label>Cherrat, Alaoui & Bouzahir (2019)</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cherrat</surname>
<given-names>EM</given-names>
</name>
<name>
<surname>Alaoui</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Bouzahir</surname>
<given-names>H</given-names>
</name>
</person-group>
<article-title>Improving of fingerprint segmentation images based on K-means and DBSCAN clustering</article-title>
<source>International Journal of Electrical & Computer Engineering</source>
<year>2019</year>
<volume>9</volume>
<issue>4</issue>
<fpage>2425</fpage>
<lpage>2432</lpage>
<pub-id pub-id-type="doi">10.11591/ijece.v9i4.pp2425-2432</pub-id>
</element-citation>
</ref>
<ref id="ref-7">
<label>Cherrat et al. (2017)</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Cherrat</surname>
<given-names>EM</given-names>
</name>
<name>
<surname>Alaoui</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Bouzahir</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Jenkal</surname>
<given-names>W</given-names>
</name>
</person-group>
<article-title>High-density salt-and-pepper noise suppression using adaptive dual threshold decision based algorithm in fingerprint images</article-title>
<year>2017</year>
<conf-name>2017 Intelligent Systems and Computer Vision (ISCV)</conf-name>
<publisher-loc>Fez</publisher-loc>
<publisher-name>IEEE</publisher-name>
<fpage>1</fpage>
<lpage>4</lpage>
</element-citation>
</ref>
<ref id="ref-8">
<label>Cortes & Vapnik (1995)</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cortes</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Vapnik</surname>
<given-names>V</given-names>
</name>
</person-group>
<article-title>Support-vector networks</article-title>
<source>Machine Learning</source>
<year>1995</year>
<volume>20</volume>
<issue>3</issue>
<fpage>273</fpage>
<lpage>297</lpage>
<pub-id pub-id-type="doi">10.1007/BF00994018</pub-id>
</element-citation>
</ref>
<ref id="ref-9">
<label>Hosmer, Lemeshow & Sturdivant (2013)</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Hosmer</surname>
<given-names>DW</given-names>
<suffix>Jr</suffix>
</name>
<name>
<surname>Lemeshow</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Sturdivant</surname>
<given-names>RX</given-names>
</name>
</person-group>
<source>Applied logistic regression</source>
<year>2013</year>
<edition designator="3">Third Edition</edition>
<comment>Wiley series in probability and statistics</comment>
<publisher-loc>Hoboken</publisher-loc>
<publisher-name>Wiley</publisher-name>
</element-citation>
</ref>
<ref id="ref-10">
<label>Huang et al. (2015)</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Huang</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>An adaptive bimodal recognition framework using sparse coding for face and ear</article-title>
<source>Pattern Recognition Letters</source>
<year>2015</year>
<volume>53</volume>
<fpage>69</fpage>
<lpage>76</lpage>
<pub-id pub-id-type="doi">10.1016/j.patrec.2014.10.009</pub-id>
</element-citation>
</ref>
<ref id="ref-11">
<label>Itqan et al. (2016)</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Itqan</surname>
<given-names>KS</given-names>
</name>
<name>
<surname>Syafeeza</surname>
<given-names>AR</given-names>
</name>
<name>
<surname>Gong</surname>
<given-names>FG</given-names>
</name>
<name>
<surname>Mustafa</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Wong</surname>
<given-names>YC</given-names>
</name>
<name>
<surname>Ibrahim</surname>
<given-names>MM</given-names>
</name>
</person-group>
<article-title>User identification system based on finger-vein patterns using Convolutional Neural Network</article-title>
<source>ARPN Journal of Engineering and Applied Sciences</source>
<year>2016</year>
<volume>11</volume>
<issue>5</issue>
<fpage>3316</fpage>
<lpage>3319</lpage>
</element-citation>
</ref>
<ref id="ref-12">
<label>Jain, Hong & Kulkarni (1999)</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Jain</surname>
<given-names>AK</given-names>
</name>
<name>
<surname>Hong</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Kulkarni</surname>
<given-names>Y</given-names>
</name>
</person-group>
<article-title>A multimodal biometric system using fingerprint, face and speech</article-title>
<year>1999</year>
<conf-name>Second International Conference of Audio-Video Based Biometric Person Authentication</conf-name>
<conf-loc>Washington, D.C.</conf-loc>
<fpage>10</fpage>
</element-citation>
</ref>
<ref id="ref-13">
<label>Jain, Nandakumar & Ross (2005)</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jain</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Nandakumar</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Ross</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Score normalization in multimodal biometric systems</article-title>
<source>Pattern Recognition</source>
<year>2005</year>
<volume>38</volume>
<issue>12</issue>
<fpage>2270</fpage>
<lpage>2285</lpage>
<pub-id pub-id-type="doi">10.1016/j.patcog.2005.01.012</pub-id>
</element-citation>
</ref>
<ref id="ref-14">
<label>Kang et al. (2019)</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kang</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Lu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Jia</surname>
<given-names>W</given-names>
</name>
</person-group>
<article-title>From noise to feature: exploiting intensity distribution as a novel soft biometric trait for finger vein recognition</article-title>
<source>IEEE Transactions on Information Forensics and Security</source>
<year>2019</year>
<volume>14</volume>
<issue>4</issue>
<fpage>858</fpage>
<lpage>869</lpage>
<pub-id pub-id-type="doi">10.1109/TIFS.2018.2866330</pub-id>
</element-citation>
</ref>
<ref id="ref-15">
<label>Krizhevsky, Sutskever & Hinton (2012)</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Krizhevsky</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Sutskever</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Hinton</surname>
<given-names>GE</given-names>
</name>
</person-group>
<article-title>Imagenet classification with deep convolutional neural networks</article-title>
<year>2012</year>
<conf-name>Advances in Neural Information Processing Systems</conf-name>
<conf-loc>Lake Tahoe</conf-loc>
<fpage>1097</fpage>
<lpage>1105</lpage>
</element-citation>
</ref>
<ref id="ref-16">
<label>Ma, Popoola & Sun (2015)</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ma</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Popoola</surname>
<given-names>OP</given-names>
</name>
<name>
<surname>Sun</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Research of dual-modal decision level fusion for fingerprint and finger vein image</article-title>
<source>International Journal of Biometrics</source>
<year>2015</year>
<volume>7</volume>
<issue>3</issue>
<fpage>271</fpage>
<lpage>285</lpage>
<pub-id pub-id-type="doi">10.1504/IJBM.2015.071949</pub-id>
</element-citation>
</ref>
<ref id="ref-17">
<label>Mane & Shah (2019)</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Mane</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Shah</surname>
<given-names>G</given-names>
</name>
</person-group>
<person-group person-group-type="editor">
<name>
<surname>Balas</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Sharma</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Chakrabarti</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Facial recognition, expression recognition, and gender identification</article-title>
<source>Data Management, Analytics and Innovation</source>
<year>2019</year>
<publisher-loc>Singapore</publisher-loc>
<publisher-name>Springer</publisher-name>
<fpage>275</fpage>
<lpage>290</lpage>
</element-citation>
</ref>
<ref id="ref-18">
<label>Park et al. (2016)</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Park</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>Q</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>H</given-names>
</name>
</person-group>
<article-title>Fingerprint liveness detection using CNN features of random sample patches</article-title>
<year>2016</year>
<conf-name>International Conference of the Biometrics Special Interest Group (BIOSIG)</conf-name>
<publisher-loc>Darmstadt</publisher-loc>
<publisher-name>IEEE</publisher-name>
<fpage>1</fpage>
<lpage>4</lpage>
</element-citation>
</ref>
<ref id="ref-19">
<label>Rajesh & Selvarajan (2017)</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rajesh</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Selvarajan</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Score level fusion techniques in multimodal biometric system using CBO-ANN</article-title>
<source>Research Journal of Biotechnology</source>
<year>2017</year>
<volume>12</volume>
<issue>Special Issue II</issue>
<fpage>79</fpage>
<lpage>87</lpage>
</element-citation>
</ref>
<ref id="ref-20">
<label>Reza (2004)</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Reza</surname>
<given-names>AM</given-names>
</name>
</person-group>
<article-title>Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement</article-title>
<source>Journal of VLSI Signal Processing Systems for Signal, Image and Video Technology</source>
<year>2004</year>
<volume>38</volume>
<issue>1</issue>
<fpage>35</fpage>
<lpage>44</lpage>
<pub-id pub-id-type="doi">10.1023/B:VLSI.0000028532.53893.82</pub-id>
</element-citation>
</ref>
<ref id="ref-21">
<label>Ross & Govindarajan (2005)</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Ross</surname>
<given-names>AA</given-names>
</name>
<name>
<surname>Govindarajan</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>Feature level fusion of hand and face biometrics</article-title>
<year>2005</year>
<conf-name>Biometric Technology for Human Identification II</conf-name>
<conf-loc>Orlando, Florida, United States</conf-loc>
<volume>5779</volume>
<publisher-name>International Society for Optics and Photonics</publisher-name>
<fpage>196</fpage>
<lpage>205</lpage>
<pub-id pub-id-type="doi">10.1117/12.606093</pub-id>
</element-citation>
</ref>
<ref id="ref-22">
<label>Ross & Jain (2003)</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ross</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Jain</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Information fusion in biometrics</article-title>
<source>Pattern Recognition Letters</source>
<year>2003</year>
<volume>24</volume>
<issue>13</issue>
<fpage>2115</fpage>
<lpage>2125</lpage>
<pub-id pub-id-type="doi">10.1016/S0167-8655(03)00079-5</pub-id>
</element-citation>
</ref>
<ref id="ref-23">
<label>Singh, Singh & Ross (2019)</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Singh</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Singh</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Ross</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>A comprehensive overview of biometric fusion</article-title>
<source>Information Fusion</source>
<year>2019</year>
<volume>52</volume>
<fpage>187</fpage>
<lpage>205</lpage>
<pub-id pub-id-type="doi">10.1016/j.inffus.2018.12.003</pub-id>
</element-citation>
</ref>
<ref id="ref-24">
<label>Soleymani et al. (2018)</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Soleymani</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Dabouei</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Kazemi</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Dawson</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Nasrabadi</surname>
<given-names>NM</given-names>
</name>
</person-group>
<article-title>Multi-level feature abstraction from convolutional neural networks for multimodal biometric identification</article-title>
<year>2018</year>
<conf-name>2018 24th International Conference on Pattern Recognition (ICPR)</conf-name>
<publisher-loc>Beijing</publisher-loc>
<publisher-name>IEEE</publisher-name>
<fpage>3469</fpage>
<lpage>3476</lpage>
</element-citation>
</ref>
<ref id="ref-25">
<label>Son & Lee (2005)</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Son</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>Y</given-names>
</name>
</person-group>
<article-title>Biometric authentication system using reduced joint feature vector of iris and face</article-title>
<year>2005</year>
<conf-name>International Conference on Audio- and Video-Based Biometric Person Authentication</conf-name>
<publisher-loc>Berlin, Heidelberg</publisher-loc>
<publisher-name>Springer</publisher-name>
<fpage>513</fpage>
<lpage>522</lpage>
</element-citation>
</ref>
<ref id="ref-26">
<label>Srivastava et al. (2014)</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Srivastava</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Hinton</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Krizhevsky</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Sutskever</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Salakhutdinov</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>Dropout: a simple way to prevent neural networks from overfitting</article-title>
<source>Journal of Machine Learning Research</source>
<year>2014</year>
<volume>15</volume>
<issue>1</issue>
<fpage>1929</fpage>
<lpage>1958</lpage>
</element-citation>
</ref>
<ref id="ref-27">
<label>Tome, Vanoni & Marcel (2014)</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Tome</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Vanoni</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Marcel</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>On the vulnerability of finger vein recognition to spoofing</article-title>
<year>2014</year>
<conf-name>2014 International Conference of the Biometrics Special Interest Group (BIOSIG)</conf-name>
<publisher-loc>Darmstadt</publisher-loc>
<publisher-name>IEEE</publisher-name>
<fpage>1</fpage>
<lpage>10</lpage>
</element-citation>
</ref>
<ref id="ref-28">
<label>Unar, Seng & Abbasi (2014)</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Unar</surname>
<given-names>JA</given-names>
</name>
<name>
<surname>Seng</surname>
<given-names>WC</given-names>
</name>
<name>
<surname>Abbasi</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>A review of biometric technology along with trends and prospects</article-title>
<source>Pattern Recognition</source>
<year>2014</year>
<volume>47</volume>
<issue>8</issue>
<fpage>2673</fpage>
<lpage>2688</lpage>
<pub-id pub-id-type="doi">10.1016/j.patcog.2014.01.016</pub-id>
</element-citation>
</ref>
<ref id="ref-29">
<label>Vishi & Mavroeidis (2018)</label>
<element-citation publication-type="working-paper">
<person-group person-group-type="author">
<name>
<surname>Vishi</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Mavroeidis</surname>
<given-names>V</given-names>
</name>
</person-group>
<article-title>An evaluation of score level fusion approaches for fingerprint and finger-vein biometrics</article-title>
<year>2018</year>
<uri xlink:href="http://arxiv.org/abs/1805.10666">http://arxiv.org/abs/1805.10666</uri>
</element-citation>
</ref>
<ref id="ref-30">
<label>Walia et al. (2019)</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Walia</surname>
<given-names>GS</given-names>
</name>
<name>
<surname>Rishi</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Asthana</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Kumar</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Gupta</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Robust secure multimodal biometric system based on diffused graphs and optimal score fusion</article-title>
<source>IET Biometrics</source>
<year>2019</year>
<volume>8</volume>
<issue>4</issue>
<fpage>231</fpage>
<lpage>242</lpage>
</element-citation>
</ref>
<ref id="ref-31">
<label>Yang et al. (2018)</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yang</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Hu</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Zheng</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Valli</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>A fingerprint and finger-vein based cancelable multi-biometric system</article-title>
<source>Pattern Recognition</source>
<year>2018</year>
<volume>78</volume>
<fpage>242</fpage>
<lpage>251</lpage>
<pub-id pub-id-type="doi">10.1016/j.patcog.2018.01.026</pub-id>
</element-citation>
</ref>
<ref id="ref-32">
<label>Yang & Zhang (2012)</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yang</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>X</given-names>
</name>
</person-group>
<article-title>Feature-level fusion of fingerprint and finger-vein for personal identification</article-title>
<source>Pattern Recognition Letters</source>
<year>2012</year>
<volume>33</volume>
<issue>5</issue>
<fpage>623</fpage>
<lpage>628</lpage>
<pub-id pub-id-type="doi">10.1016/j.patrec.2011.11.002</pub-id>
</element-citation>
</ref>
<ref id="ref-33">
<label>Yin, Liu & Sun (2011)</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Yin</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Sun</surname>
<given-names>X</given-names>
</name>
</person-group>
<article-title>SDUMLA-HMT: a multimodal biometric database</article-title>
<year>2011</year>
<conf-name>Chinese Conference on Biometric Recognition</conf-name>
<publisher-loc>Berlin, Heidelberg</publisher-loc>
<publisher-name>Springer</publisher-name>
<fpage>260</fpage>
<lpage>268</lpage>
</element-citation>
</ref>
<ref id="ref-34">
<label>Ying et al. (2017)</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Ying</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Ren</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>W</given-names>
</name>
</person-group>
<article-title>A new image contrast enhancement algorithm using exposure fusion framework</article-title>
<year>2017</year>
<conf-name>International Conference on Computer Analysis of Images and Patterns</conf-name>
<publisher-loc>Cham</publisher-loc>
<publisher-name>Springer</publisher-name>
<fpage>36</fpage>
<lpage>46</lpage>
</element-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>
