Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Testing the reliability of hands and ears as biometrics: the importance of viewpoint

Internal identifier: 000878 (Pmc/Curation); previous: 000877; next: 000879


Authors: Sarah V. Stevenage; Catherine Walpole; Greg J. Neil; Sue M. Black [United Kingdom]

Source:

RBID: PMC:4624835

Abstract

Two experiments are presented to explore the limits when matching a sample to a suspect utilising the hand as a novel biometric. The results of Experiment 1 revealed that novice participants were able to match hands at above-chance levels as viewpoint changed. Notably, a moderate change in viewpoint had no notable effect, but a more substantial change in viewpoint affected performance significantly. Importantly, the impact of viewpoint when matching hands was smaller than that when matching ears in a control condition. This was consistent with the suggestion that the flexibility of the hand may have minimised the negative impact of a sub-optimal view. The results of Experiment 2 confirmed that training via a 10-min expert video was sufficient to reduce the impact of viewpoint in the most difficult case but not to remove it entirely. The implications of these results were discussed in terms of the theoretical importance of function when considering the canonical view and in terms of the applied value of the hand as a reliable biometric across viewing conditions.

Electronic supplementary material

The online version of this article (doi:10.1007/s00426-014-0625-x) contains supplementary material, which is available to authorized users.


Url:
DOI: 10.1007/s00426-014-0625-x
PubMed: 25410711
PubMed Central: 4624835

Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:4624835

Curation

No country items

Sarah V. Stevenage
<affiliation>
<nlm:aff id="Aff1">Department of Psychology, University of Southampton, Highfield, Southampton, Hampshire SO17 1BJ UK</nlm:aff>
<wicri:noCountry code="subfield">Hampshire SO17 1BJ UK</wicri:noCountry>
</affiliation>
Catherine Walpole
<affiliation>
<nlm:aff id="Aff1">Department of Psychology, University of Southampton, Highfield, Southampton, Hampshire SO17 1BJ UK</nlm:aff>
<wicri:noCountry code="subfield">Hampshire SO17 1BJ UK</wicri:noCountry>
</affiliation>
Greg J. Neil
<affiliation>
<nlm:aff id="Aff1">Department of Psychology, University of Southampton, Highfield, Southampton, Hampshire SO17 1BJ UK</nlm:aff>
<wicri:noCountry code="subfield">Hampshire SO17 1BJ UK</wicri:noCountry>
</affiliation>

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Testing the reliability of hands and ears as biometrics: the importance of viewpoint</title>
<author>
<name sortKey="Stevenage, Sarah V" sort="Stevenage, Sarah V" uniqKey="Stevenage S" first="Sarah V." last="Stevenage">Sarah V. Stevenage</name>
<affiliation>
<nlm:aff id="Aff1">Department of Psychology, University of Southampton, Highfield, Southampton, Hampshire SO17 1BJ UK</nlm:aff>
<wicri:noCountry code="subfield">Hampshire SO17 1BJ UK</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Walpole, Catherine" sort="Walpole, Catherine" uniqKey="Walpole C" first="Catherine" last="Walpole">Catherine Walpole</name>
<affiliation>
<nlm:aff id="Aff1">Department of Psychology, University of Southampton, Highfield, Southampton, Hampshire SO17 1BJ UK</nlm:aff>
<wicri:noCountry code="subfield">Hampshire SO17 1BJ UK</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Neil, Greg J" sort="Neil, Greg J" uniqKey="Neil G" first="Greg J." last="Neil">Greg J. Neil</name>
<affiliation>
<nlm:aff id="Aff1">Department of Psychology, University of Southampton, Highfield, Southampton, Hampshire SO17 1BJ UK</nlm:aff>
<wicri:noCountry code="subfield">Hampshire SO17 1BJ UK</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Black, Sue M" sort="Black, Sue M" uniqKey="Black S" first="Sue M." last="Black">Sue M. Black</name>
<affiliation wicri:level="1">
<nlm:aff id="Aff2">Centre for Anatomy and Human Identification, College of Art, Science and Engineering, University of Dundee, Dundee, DD1 4EH Scotland, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Centre for Anatomy and Human Identification, College of Art, Science and Engineering, University of Dundee, Dundee, DD1 4EH Scotland</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">25410711</idno>
<idno type="pmc">4624835</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4624835</idno>
<idno type="RBID">PMC:4624835</idno>
<idno type="doi">10.1007/s00426-014-0625-x</idno>
<date when="2014">2014</date>
<idno type="wicri:Area/Pmc/Corpus">000878</idno>
<idno type="wicri:Area/Pmc/Curation">000878</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Testing the reliability of hands and ears as biometrics: the importance of viewpoint</title>
<author>
<name sortKey="Stevenage, Sarah V" sort="Stevenage, Sarah V" uniqKey="Stevenage S" first="Sarah V." last="Stevenage">Sarah V. Stevenage</name>
<affiliation>
<nlm:aff id="Aff1">Department of Psychology, University of Southampton, Highfield, Southampton, Hampshire SO17 1BJ UK</nlm:aff>
<wicri:noCountry code="subfield">Hampshire SO17 1BJ UK</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Walpole, Catherine" sort="Walpole, Catherine" uniqKey="Walpole C" first="Catherine" last="Walpole">Catherine Walpole</name>
<affiliation>
<nlm:aff id="Aff1">Department of Psychology, University of Southampton, Highfield, Southampton, Hampshire SO17 1BJ UK</nlm:aff>
<wicri:noCountry code="subfield">Hampshire SO17 1BJ UK</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Neil, Greg J" sort="Neil, Greg J" uniqKey="Neil G" first="Greg J." last="Neil">Greg J. Neil</name>
<affiliation>
<nlm:aff id="Aff1">Department of Psychology, University of Southampton, Highfield, Southampton, Hampshire SO17 1BJ UK</nlm:aff>
<wicri:noCountry code="subfield">Hampshire SO17 1BJ UK</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Black, Sue M" sort="Black, Sue M" uniqKey="Black S" first="Sue M." last="Black">Sue M. Black</name>
<affiliation wicri:level="1">
<nlm:aff id="Aff2">Centre for Anatomy and Human Identification, College of Art, Science and Engineering, University of Dundee, Dundee, DD1 4EH Scotland, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Centre for Anatomy and Human Identification, College of Art, Science and Engineering, University of Dundee, Dundee, DD1 4EH Scotland</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Psychological Research</title>
<idno type="ISSN">0340-0727</idno>
<idno type="eISSN">1430-2772</idno>
<imprint>
<date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Two experiments are presented to explore the limits when matching a sample to a suspect utilising the hand as a novel biometric. The results of Experiment 1 revealed that novice participants were able to match hands at above-chance levels as viewpoint changed. Notably, a moderate change in viewpoint had no notable effect, but a more substantial change in viewpoint affected performance significantly. Importantly, the impact of viewpoint when matching hands was smaller than that when matching ears in a control condition. This was consistent with the suggestion that the flexibility of the hand may have minimised the negative impact of a sub-optimal view. The results of Experiment 2 confirmed that training via a 10-min expert video was sufficient to reduce the impact of viewpoint in the most difficult case but not to remove it entirely. The implications of these results were discussed in terms of the theoretical importance of
<italic>function</italic>
when considering the canonical view and in terms of the applied value of the hand as a reliable biometric across viewing conditions.</p>
<sec>
<title>Electronic supplementary material</title>
<p>The online version of this article (doi:10.1007/s00426-014-0625-x) contains supplementary material, which is available to authorized users.</p>
</sec>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Black, Sm" uniqKey="Black S">SM Black</name>
</author>
<author>
<name sortKey="Mallett, X" uniqKey="Mallett X">X Mallett</name>
</author>
<author>
<name sortKey="Rynn, C" uniqKey="Rynn C">C Rynn</name>
</author>
<author>
<name sortKey="Duffield, N" uniqKey="Duffield N">N Duffield</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Black, Sm" uniqKey="Black S">SM Black</name>
</author>
<author>
<name sortKey="Macdonald Mcmillan, B" uniqKey="Macdonald Mcmillan B">B MacDonald-McMillan</name>
</author>
<author>
<name sortKey="Mallett, X" uniqKey="Mallett X">X Mallett</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Black, S" uniqKey="Black S">S Black</name>
</author>
<author>
<name sortKey="Macdonald Mcmillan, B" uniqKey="Macdonald Mcmillan B">B MacDonald-McMillan</name>
</author>
<author>
<name sortKey="Mallett, X" uniqKey="Mallett X">X Mallett</name>
</author>
<author>
<name sortKey="Rynn, C" uniqKey="Rynn C">C Rynn</name>
</author>
<author>
<name sortKey="Jackson, G" uniqKey="Jackson G">G Jackson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Blanz, V" uniqKey="Blanz V">V Blanz</name>
</author>
<author>
<name sortKey="Tarr, Mj" uniqKey="Tarr M">MJ Tarr</name>
</author>
<author>
<name sortKey="Bulthoff, Hh" uniqKey="Bulthoff H">HH Bülthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brewer, N" uniqKey="Brewer N">N Brewer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bruce, V" uniqKey="Bruce V">V Bruce</name>
</author>
<author>
<name sortKey="Henderson, Z" uniqKey="Henderson Z">Z Henderson</name>
</author>
<author>
<name sortKey="Greenwood, K" uniqKey="Greenwood K">K Greenwood</name>
</author>
<author>
<name sortKey="Hancock, Pj" uniqKey="Hancock P">PJ Hancock</name>
</author>
<author>
<name sortKey="Burton, Am" uniqKey="Burton A">AM Burton</name>
</author>
<author>
<name sortKey="Miller, P" uniqKey="Miller P">P Miller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bulthoff, Hh" uniqKey="Bulthoff H">HH Bülthoff</name>
</author>
<author>
<name sortKey="Edelman, S" uniqKey="Edelman S">S Edelman</name>
</author>
<author>
<name sortKey="Tarr, Mj" uniqKey="Tarr M">MJ Tarr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cutzy, F" uniqKey="Cutzy F">F Cutzy</name>
</author>
<author>
<name sortKey="Edelman, S" uniqKey="Edelman S">S Edelman</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dror, Ie" uniqKey="Dror I">IE Dror</name>
</author>
<author>
<name sortKey="Charlton, D" uniqKey="Charlton D">D Charlton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dror, Ie" uniqKey="Dror I">IE Dror</name>
</author>
<author>
<name sortKey="Hampikian, G" uniqKey="Hampikian G">G Hampikian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Edelman, S" uniqKey="Edelman S">S Edelman</name>
</author>
<author>
<name sortKey="Bulthoff, Hh" uniqKey="Bulthoff H">HH Bülthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gomez, P" uniqKey="Gomez P">P Gomez</name>
</author>
<author>
<name sortKey="Shutter, J" uniqKey="Shutter J">J Shutter</name>
</author>
<author>
<name sortKey="Rouder, Jn" uniqKey="Rouder J">JN Rouder</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jackson, G" uniqKey="Jackson G">G Jackson</name>
</author>
<author>
<name sortKey="Black, S" uniqKey="Black S">S Black</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Laeng, B" uniqKey="Laeng B">B Laeng</name>
</author>
<author>
<name sortKey="Rouw, R" uniqKey="Rouw R">R Rouw</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Laeng, B" uniqKey="Laeng B">B Laeng</name>
</author>
<author>
<name sortKey="Carlesimo, Ga" uniqKey="Carlesimo G">GA Carlesimo</name>
</author>
<author>
<name sortKey="Caltagirone, C" uniqKey="Caltagirone C">C Caltagirone</name>
</author>
<author>
<name sortKey="Capasso, R" uniqKey="Capasso R">R Capasso</name>
</author>
<author>
<name sortKey="Miceli, G" uniqKey="Miceli G">G Miceli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nakhaeizadeh, S" uniqKey="Nakhaeizadeh S">S Nakhaeizadeh</name>
</author>
<author>
<name sortKey="Dror, Ie" uniqKey="Dror I">IE Dror</name>
</author>
<author>
<name sortKey="Morgan, Rm" uniqKey="Morgan R">RM Morgan</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Newell, Fn" uniqKey="Newell F">FN Newell</name>
</author>
<author>
<name sortKey="Brown, V" uniqKey="Brown V">V Brown</name>
</author>
<author>
<name sortKey="Findlay, Jm" uniqKey="Findlay J">JM Findlay</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Palmer, S" uniqKey="Palmer S">S Palmer</name>
</author>
<author>
<name sortKey="Rosch, E" uniqKey="Rosch E">E Rosch</name>
</author>
<author>
<name sortKey="Chase, P" uniqKey="Chase P">P Chase</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Perrett, Di" uniqKey="Perrett D">DI Perrett</name>
</author>
<author>
<name sortKey="Harries, Mh" uniqKey="Harries M">MH Harries</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Riddoch, Mj" uniqKey="Riddoch M">MJ Riddoch</name>
</author>
<author>
<name sortKey="Humphreys, Gw" uniqKey="Humphreys G">GW Humphreys</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Troje, N" uniqKey="Troje N">N Troje</name>
</author>
<author>
<name sortKey="Bulthoff, Hh" uniqKey="Bulthoff H">HH Bülthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Woods, At" uniqKey="Woods A">AT Woods</name>
</author>
<author>
<name sortKey="Moore, A" uniqKey="Moore A">A Moore</name>
</author>
<author>
<name sortKey="Newell, Fn" uniqKey="Newell F">FN Newell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yan, P" uniqKey="Yan P">P Yan</name>
</author>
<author>
<name sortKey="Bowyer, Kw" uniqKey="Bowyer K">KW Bowyer</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Psychol Res</journal-id>
<journal-id journal-id-type="iso-abbrev">Psychol Res</journal-id>
<journal-title-group>
<journal-title>Psychological Research</journal-title>
</journal-title-group>
<issn pub-type="ppub">0340-0727</issn>
<issn pub-type="epub">1430-2772</issn>
<publisher>
<publisher-name>Springer Berlin Heidelberg</publisher-name>
<publisher-loc>Berlin/Heidelberg</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">25410711</article-id>
<article-id pub-id-type="pmc">4624835</article-id>
<article-id pub-id-type="publisher-id">625</article-id>
<article-id pub-id-type="doi">10.1007/s00426-014-0625-x</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Original Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Testing the reliability of hands and ears as biometrics: the importance of viewpoint</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Stevenage</surname>
<given-names>Sarah V.</given-names>
</name>
<address>
<phone>02380 592973</phone>
<email>svs1@soton.ac.uk</email>
</address>
<xref ref-type="aff" rid="Aff1"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Walpole</surname>
<given-names>Catherine</given-names>
</name>
<xref ref-type="aff" rid="Aff1"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Neil</surname>
<given-names>Greg J.</given-names>
</name>
<xref ref-type="aff" rid="Aff1"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Black</surname>
<given-names>Sue M.</given-names>
</name>
<xref ref-type="aff" rid="Aff2"></xref>
</contrib>
<aff id="Aff1">
<label></label>
Department of Psychology, University of Southampton, Highfield, Southampton, Hampshire SO17 1BJ UK</aff>
<aff id="Aff2">
<label></label>
Centre for Anatomy and Human Identification, College of Art, Science and Engineering, University of Dundee, Dundee, DD1 4EH Scotland, UK</aff>
</contrib-group>
<pub-date pub-type="epub">
<day>20</day>
<month>11</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="pmc-release">
<day>20</day>
<month>11</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="ppub">
<year>2015</year>
</pub-date>
<volume>79</volume>
<issue>6</issue>
<fpage>989</fpage>
<lpage>999</lpage>
<history>
<date date-type="received">
<day>19</day>
<month>8</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>5</day>
<month>11</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>© The Author(s) 2014</copyright-statement>
<license license-type="OpenAccess">
<license-p>
<bold>Open Access</bold>
This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.</license-p>
</license>
</permissions>
<abstract id="Abs1">
<p>Two experiments are presented to explore the limits when matching a sample to a suspect utilising the hand as a novel biometric. The results of Experiment 1 revealed that novice participants were able to match hands at above-chance levels as viewpoint changed. Notably, a moderate change in viewpoint had no notable effect, but a more substantial change in viewpoint affected performance significantly. Importantly, the impact of viewpoint when matching hands was smaller than that when matching ears in a control condition. This was consistent with the suggestion that the flexibility of the hand may have minimised the negative impact of a sub-optimal view. The results of Experiment 2 confirmed that training via a 10-min expert video was sufficient to reduce the impact of viewpoint in the most difficult case but not to remove it entirely. The implications of these results were discussed in terms of the theoretical importance of
<italic>function</italic>
when considering the canonical view and in terms of the applied value of the hand as a reliable biometric across viewing conditions.</p>
<sec>
<title>Electronic supplementary material</title>
<p>The online version of this article (doi:10.1007/s00426-014-0625-x) contains supplementary material, which is available to authorized users.</p>
</sec>
</abstract>
<custom-meta-group>
<custom-meta>
<meta-name>issue-copyright-statement</meta-name>
<meta-value>© Springer-Verlag Berlin Heidelberg 2015</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<sec id="Sec1">
<title>Introduction</title>
<p>Whilst criminals have learned to hide their face, or disguise their voice, their hands may nevertheless provide an important biometric within a court setting (Delac & Grgic,
<xref ref-type="bibr" rid="CR9">2004</xref>
). Indeed, the visibility and identification of unique cues within the hand, such as vein patterns and skin features (Black, Mallett, Rynn & Duffield,
<xref ref-type="bibr" rid="CR3">2009</xref>
; Black, MacDonald-McMillan & Mallett,
<xref ref-type="bibr" rid="CR1">2013</xref>
; Black, MacDonald-McMillan, Rynn & Jackson,
<xref ref-type="bibr" rid="CR2">2013</xref>
; Jackson & Black,
<xref ref-type="bibr" rid="CR15">2013</xref>
), have been sufficient to support a number of recent criminal convictions. Alongside this, however, the inherent flexibility of the hand means that it may be viewed from a variety of different viewpoints and in a variety of different positions, potentially compromising its biometric value. The purpose of the present paper is to investigate the limits of the hand as a biometric cue through exploring the ability of viewers to match images as viewpoint changes.</p>
<p>Key in this enquiry is the concept of the ‘canonical view’. In their seminal paper, Palmer, Rosch and Chase (
<xref ref-type="bibr" rid="CR21">1981</xref>
) found high agreement amongst participants in three tasks involving (1) rating the ‘goodness’ of an image of a familiar object, (2) forming a mental image of a familiar object and (3) selecting the best camera angle to take a photo of a familiar object. Importantly, high agreement resulted whether participants judged a limited set of views presented to them (Palmer et al.,
<xref ref-type="bibr" rid="CR21">1981</xref>
), or generated their own views through unconstrained rotation of familiar objects in a real-time 3D virtual space (Blanz, Tarr & Bülthoff,
<xref ref-type="bibr" rid="CR4">1999</xref>
). The consistently preferred image was termed the ‘canonical view’ and Palmer et al. suggested that it provided a ‘privileged perspective’. Perhaps most importantly, Palmer et al. noted that the canonical view elicited faster responses in an object naming task (see also Bülthoff, Edelman & Tarr,
<xref ref-type="bibr" rid="CR7">1995</xref>
) and in a visual search task (Newell, Brown & Findlay,
<xref ref-type="bibr" rid="CR20">2004</xref>
). Moreover, Gomez, Shutter and Rouder (
<xref ref-type="bibr" rid="CR14">2008</xref>
, Expt 2) demonstrated the benefit of presenting the canonical image during a free-recall task, extending the importance of canonicality from perceptual- to memory-based tasks. Indeed, when asked to recall the names of 171 objects encountered in a study list, participants were able to recall significantly more objects when studied from canonical images (41 %) than when studied from non-canonical images (33 %).
<xref ref-type="fn" rid="Fn1">1</xref>
When taken together, these studies implied a performance advantage when viewing canonical images, but a performance cost otherwise. Consequently, if a canonical view was also demonstrated for hands, then their reliability as a biometric may be thrown into question in situations in which the viewing conditions deviated from the canonical ideal.</p>
<sec id="Sec2">
<title>Attributes of the canonical image</title>
<p>Blanz et al., (
<xref ref-type="bibr" rid="CR4">1999</xref>
) considered the attributes required to define a view as canonical. Three main characteristics were highlighted:
<list list-type="order">
<list-item>
<p>
<italic>Goodness of recognition,</italic>
through representing distinctive object characteristics and minimising occlusion,</p>
</list-item>
<list-item>
<p>
<italic>Familiarity</italic>
, through frequency of exposure, and</p>
</list-item>
<list-item>
<p>
<italic>Display of object functionality</italic>
through reflecting a characteristic mode of interaction.</p>
</list-item>
</list>
</p>
<p>For a novel object, the preferred or canonical view could only be based on the first of Blanz et al.’s criteria. Thus, a canonical view (if one existed) reflected only geometric aspects of the image itself, and agreement amongst viewers on the canonical view tended to be relatively low (see Cutzy & Edelman,
<xref ref-type="bibr" rid="CR8">1994</xref>
; Edelman & Bülthoff,
<xref ref-type="bibr" rid="CR12">2002</xref>
; Perrett & Harries,
<xref ref-type="bibr" rid="CR22">1988</xref>
). In contrast, for a familiar object, the canonical view could additionally be informed by
<italic>experience</italic>
(frequency of exposure to different viewpoints) and
<italic>understanding</italic>
(appreciation of function), and this tended to result in a greater consensus regarding the canonical view.</p>
<p>Laeng and Rouw (
<xref ref-type="bibr" rid="CR17">2001</xref>
) offered support to suggest that the cardinal defining characteristic of the canonical view was its ‘frequency of exposure’. They reported that, whilst the canonical view of a familiar face was best represented by a ¾ profile (see also Troje & Bülthoff,
<xref ref-type="bibr" rid="CR24">1996</xref>
), the canonical view of one’s own face was closer to the frontal image, this being the view most frequently seen. However, it may be premature to define frequency of exposure as the most important aspect of canonicality. Indeed, the perspective from which we most often see an object may be inherently linked to the function that the object fulfils (the last of Blanz et al.’s criteria), and herein lies the basis for predictions for the current paper.</p>
</sec>
<sec id="Sec3">
<title>The present study</title>
<p>Given the aim of exploring whether the hand, as a biometric, could be processed accurately across different views, the central question for the current paper was whether a canonical view existed for hands. If so, performance was expected to be optimal when presented with this canonical view, and was expected to be impaired when presented with a non-canonical view. This would be a damaging result when evaluating the hand as a biometric, as it would suggest that the processing of the hand would only be reliable under limited conditions. However, with canonicality potentially influenced by both frequency of exposure and object function, it may be anticipated that a flexible object such as a hand may frequently be observed from a variety of viewpoints and in a variety of positions as it carries out a range of functions (see Laeng, Carlesimo, Caltagirone, Capasso & Miceli,
<xref ref-type="bibr" rid="CR16">2002</xref>
). As such, it may be predicted that hands may not have as strong a preference for a single canonical view, and consequently may survive presentation across a range of views, compared to a more rigid object. To test this prediction, the processing of hand images was compared here to the processing of ear images. Both represent valuable biometric cues (see Yan & Boywer,
<xref ref-type="bibr" rid="CR26">2007</xref>
for a review of ear recognition, and Black et al.,
<xref ref-type="bibr" rid="CR3">2009</xref>
for a review of hand recognition). However, the hand has a greater degree of flexibility and multifunctionality compared to the ear.</p>
<p>Performance was explored in a lab-based task designed to be analogous to that within a criminal investigation. Specifically, a traditional simultaneous matching task was used in which participants were asked to find the image (from 10 possibilities) that matched a target image. Given the preceding discussion, it was expected that both hand and ear processing may show sensitivity to a change in viewpoint, with optimal performance being associated with more optimal images. However, it was also expected that hands would be less affected by a change in viewpoint compared to ears because the non-rigidity of the hand provides for greater functionality and in turn, exposure to a larger array of viewpoints. As such, the present study is grounded in the predictions of canonicality across rigid and non-rigid cues, but provides an important test of the limits of the hand as a forensic biometric.</p>
</sec>
</sec>
<sec id="Sec4">
<title>Experiment 1: method</title>
<sec id="Sec5">
<title>Design</title>
<p>A 2 × 3 mixed design was used in which stimulus type (hands or ears) was varied between participants, and viewpoint (good, medium and poor) was varied within participants. Performance was tested by means of a ‘1 in 10’ task (Bruce et al.,
<xref ref-type="bibr" rid="CR6">1999</xref>
) in which the participants’ task was to select one image (from an array of 10) that matched a target. Accuracy of performance was recorded.</p>
</sec>
<sec id="Sec6">
<title>Participants</title>
<p>A total of 50 novice participants (35 females, 15 males) took part either on a volunteer basis or in return for course credit. Participants were randomly assigned to study either hands (
<italic>n</italic>
 = 25, 18 females) or ears (
<italic>n</italic>
 = 25, 17 females), and both the age range (
<italic>t</italic>
<sub>(48)</sub>
 = 1.18,
<italic>ns</italic>
) and gender split (
<inline-formula id="IEq1">
<alternatives>
<tex-math id="M1">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\chi_{(1)}^{2}$$\end{document}</tex-math>
<mml:math id="M2">
<mml:msubsup>
<mml:mi mathvariant="italic">χ</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:math>
<inline-graphic xlink:href="426_2014_625_Article_IEq1.gif"></inline-graphic>
</alternatives>
</inline-formula>
 < 1,
<italic>ns</italic>
) were matched across the two groups. In addition, one hand expert and one ear expert provided baseline data for comparison purposes. Each gained their expertise through academic experience within the field of anatomy, with specialisation in the area of hands or ears to assist UK investigative processes either through the preparation of court evidence, or through facial reconstruction, respectively.</p>
<p>All participants reported normal, or corrected-to-normal, vision and did not recognise any individuals from either their hands or ears.</p>
</sec>
<sec id="Sec7">
<title>Materials</title>
<sec id="Sec8">
<title>Hand images</title>
<p>A bespoke set of stimuli was gathered from 42 individuals (20 females, 22 males) to provide two images of each of six viewpoints of the hand. The two images differed only in the direction of the light source, and hence in the pattern of shadows. Their collection ensured that the matching task involved two different images of the same hand. Consequently, reliance on simple picture-related cues in the matching task was minimised. The six viewpoints captured (1) the dorsal (back) surface of the hand laid flat, (2) the palmar surface of the hand laid flat, (3) the hand in a relaxed pose, (4) the hand viewed from above whilst holding a glass, (5) the hand viewed from above whilst holding a pen, and finally (6) the hand viewed from above whilst holding a mobile phone. These six viewpoints were selected to capture a range of hand positions reflecting forensic ideals (dorsal and palmar views) and functional utility (grasping, writing, texting).</p>
<p>From this set, the images associated with 30 individuals were selected on the basis of a lack of distinguishing features such as pigmentation irregularities, tattoos, cuts or abrasions, nail irregularities, or significant levels of visible hair on wrists or knuckles. All individuals were photographed without jewellery and nail varnish.</p>
</sec>
<sec id="Sec9">
<title>Ear images</title>
<p>Ear images were obtained from the facial photographs of 116 individuals represented in the SuperIdentity Stimulus Database. The ears were extracted from full head images using Corel Photoshop such that the full extent of the ear was visible whilst minimising the amount of hair within the image. In this way, two ear images were extracted (for the reasons stated above) for each of six viewpoints capturing (1) the ear from the side, (2) the ear from a ¾ profile, and (3) the ear from the front as viewed both from a horizontal (0°) perspective and from a +20° perspective looking down. Again, these viewpoints were selected to reflect those available in optimal forensic contexts (mug-shots) and in more ecologically valid contexts such as from a closed-circuit television (CCTV) image where a camera is typically mounted above head height looking down.</p>
<p>From the set of images available, 30 individuals were selected to minimise visible head hair, and other distinguishing features such as lobe or helix irregularities, or multiple piercings. Again, all individuals were photographed without jewellery.</p>
<p>Both sets of stimuli were photographed using a Nikon D200 SLR camera under controlled artificial light conditions. The hands were photographed resting on a matt black horizontal surface, from a distance of approximately 45 cm. The (heads and) ears were photographed against an 18 % grey background from a distance of 1 m.</p>
</sec>
<sec id="Sec10">
<title>Determination of viewpoint quality</title>
<p>To determine the quality of the viewpoints, a crowdsourcing technique (MTurk) was used in which 100 individuals were shown the 6 viewpoints for a single hand, and the 6 viewpoints for a single ear. In line with Palmer et al. (
<xref ref-type="bibr" rid="CR21">1981</xref>
), their task was to select the image that best corresponded to the mental image that they formed in their mind’s eye when imagining a hand or an ear. For both hands and ears, the most popular viewpoint was nominated as the optimal or ‘good’ viewpoint. This was chosen by a minimum of 40 % of the individuals. Similarly, the viewpoint of intermediate popularity was nominated as the ‘medium’ viewpoint and this was chosen by approximately 20 % of the individuals. Finally, the viewpoint that was least popular was nominated as the non-optimal or ‘poor’ viewpoint, and was selected by less than 5 % of the individuals. Care was taken to balance the popularity of corresponding nominations across the hands and ears as far as possible. The resulting nominated viewpoints, and their level of popularity amongst the 100 individuals, are summarised in Fig. 
<xref rid="Fig1" ref-type="fig">1</xref>
and were used in the subsequent experimentation.
<fig id="Fig1">
<label>Fig. 1</label>
<caption>
<p>Example images depicting good, medium and poor viewpoints for hands and for ears, together with their level of popularity (endorsement) across 100 individuals</p>
</caption>
<graphic xlink:href="426_2014_625_Fig1_HTML" id="MO1"></graphic>
</fig>
</p>
</sec>
</sec>
<sec id="Sec11">
<title>Procedure</title>
<p>Across the experiment, participants completed 30 ‘1 in 10’ matching trials in which their task was to decide which, of a set of 10 images, matched the single target displayed simultaneously at the top of the computer screen. As such, this was a perceptual-matching task with no memory component and no naming requirement. All trials were ‘target present’ trials; however, the target image at the top of the screen and the matching image within the array were always two different images (even if depicting the same viewpoint) to prevent simple picture matching.</p>
<p>The format of each trial was identical and consisted of the presentation of the target at the top of the screen, with the array of 10 images, in three rows of 4 (top), 3 (middle) and 3 (bottom), simultaneously displayed beneath it. Above each image in the array was a number to denote its position within the array, with positions 1–4 referring to locations from left to right on the top row, positions 5–7 referring to locations from left to right on the middle row, and positions 8–0 referring to locations from left to right on the bottom row (see Fig. 
<xref rid="Fig2" ref-type="fig">2</xref>
).
<fig id="Fig2">
<label>Fig. 2</label>
<caption>
<p>Example array for hands, with the target image depicted at the top of the display, and the 10 test images presented below. The target image was always depicted from the good viewpoint, whilst the test images were all depicted from either the good, medium or poor viewpoint. The target was always present amongst the test images but was always a different image. Here, the target is in position 8</p>
</caption>
<graphic xlink:href="426_2014_625_Fig2_HTML" id="MO2"></graphic>
</fig>
</p>
<p>The target image was always presented in the good viewpoint, analogous to the optimal image of a ‘suspect’s hand’ within an investigation. The 10 images in the array all showed stimuli in either good, medium or poor viewpoints, with 10 trials for each viewpoint, blocked according to viewpoint. The order of these blocks and the selection of individual target exemplars presented within each block were counterbalanced across participants to minimise the influences of fatigue and item effects within the study.</p>
<p>The participant’s task was to respond as quickly and as accurately as possible to indicate which of the 10 images in the array depicted the target at the top of the screen. Participants were aware that the image of the target in the array would be different, and thus they were looking for a different image of the same hand (or ear) rather than an identical image. Participants indicated their answer by pressing the numbered key (0–9) on a standard keyboard that corresponded to the position of their selected image in the array, and all images remained visible until this response was made. Self-paced breaks separated the three blocks of trials, and the entire experiment lasted approximately 30–40 min, after which participants were thanked and debriefed.</p>
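The response-key mapping described above (keys 1–9 for positions 1–9, with the ‘0’ key denoting the tenth position on the bottom row) can be sketched as follows. This is an illustrative reconstruction, not the authors' experiment code.

```python
def key_to_position(key: str) -> int:
    """Map a response key ('1'-'9', '0') to an array position (1-10).

    Keys 1-4 index the top row, 5-7 the middle row, and 8, 9, 0 the
    bottom row, with '0' standing in for the tenth position.
    """
    if len(key) != 1 or key not in "1234567890":
        raise ValueError(f"unexpected response key: {key!r}")
    return 10 if key == "0" else int(key)
```

For instance, a press of ‘8’ selects the leftmost image on the bottom row (position 8), while ‘0’ selects the rightmost (position 10).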
</sec>
</sec>
<sec id="Sec12">
<title>Experiment 1: results and discussion</title>
<p>Accuracy on the ‘1 in 10’ task is summarised in Table 
<xref rid="Tab1" ref-type="table">1</xref>
and was explored to determine whether novice performance on the matching task (1) was better than chance, (2) approached the level of the experts, and (3) differed across viewpoint.
<table-wrap id="Tab1">
<label>Table 1</label>
<caption>
<p>Absolute and standardised accuracy of performance (and standard deviation) on the ‘1 in 10’ matching task for experts, novices (experiment 1) and trained participants (experiment 2)</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left"></th>
<th align="left">Good image</th>
<th align="left">Medium image</th>
<th align="left">Poor image</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" colspan="4">Hand recognition accuracy</td>
</tr>
<tr>
<td align="left"> Expert (absolute)</td>
<td align="left">1.00</td>
<td align="left">0.90</td>
<td align="left">0.33</td>
</tr>
<tr>
<td align="left"> Novice (absolute)</td>
<td align="left">0.52 (0.16)</td>
<td align="left">0.45 (0.15)</td>
<td align="left">0.23 (0.08)</td>
</tr>
<tr>
<td align="left"> Trained (absolute)</td>
<td align="left">0.53 (0.16)</td>
<td align="left">0.44 (0.14)</td>
<td align="left">0.21 (0.06)</td>
</tr>
<tr>
<td align="left"> Novice (standardised)</td>
<td align="left">1.00 (0)</td>
<td align="left">0.89 (0.26)</td>
<td align="left">0.50 (0.26)</td>
</tr>
<tr>
<td align="left"> Trained (standardised)</td>
<td align="left">1.00 (0)</td>
<td align="left">0.88 (0.32)</td>
<td align="left">0.43 (0.18)</td>
</tr>
<tr>
<td align="left" colspan="4">Ear recognition accuracy</td>
</tr>
<tr>
<td align="left"> Expert (absolute)</td>
<td align="left">0.87</td>
<td align="left">0.70</td>
<td align="left">0.40</td>
</tr>
<tr>
<td align="left"> Novice (absolute)</td>
<td align="left">0.63 (0.15)</td>
<td align="left">0.27 (0.11)</td>
<td align="left">0.17 (0.06)</td>
</tr>
<tr>
<td align="left"> Trained (absolute)</td>
<td align="left">0.54 (0.18)</td>
<td align="left">0.32 (0.10)</td>
<td align="left">0.19 (0.09)</td>
</tr>
<tr>
<td align="left"> Novice (standardised)</td>
<td align="left">1.00 (0)</td>
<td align="left">0.44 (0.19)</td>
<td align="left">0.29 (0.15)</td>
</tr>
<tr>
<td align="left"> Trained (standardised)</td>
<td align="left">1.00 (0)</td>
<td align="left">0.65 (0.24)</td>
<td align="left">0.39 (0.21)</td>
</tr>
</tbody>
</table>
</table-wrap>
</p>
<sec id="Sec13">
<title>Comparison to chance</title>
<p>To address the first question, a series of one-sample
<italic>t</italic>
tests was conducted comparing accuracy to a chance level of 0.1. These indicated that for both hands and ears, and across every viewpoint, novice participants were significantly better than chance (all
<italic>ts</italic>
<sub>(24)</sub>
 > 5.93,
<italic>p</italic>
 < 0.001). This was important in demonstrating the absence of floor effects within the data despite the very different nature of the hand and ear stimuli.</p>
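The logic of this comparison can be sketched in Python. The accuracy scores below are simulated for illustration only (they are not the study data); the chance level of 0.1 follows from the ten-alternative array.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated per-participant accuracy for one condition (n = 25 novices);
# on a '1 in 10' matching task, guessing yields an accuracy of 1/10.
CHANCE = 0.1
accuracy = rng.normal(loc=0.23, scale=0.08, size=25).clip(0, 1)

# One-sample t-test of the group mean against the chance level (df = 24).
t_stat, p_value = stats.ttest_1samp(accuracy, popmean=CHANCE)
print(f"t(24) = {t_stat:.2f}, p = {p_value:.2g}")
```

The same test is run once per stimulus type and viewpoint, which is why the paper reports a family of t(24) values rather than a single statistic.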
</sec>
<sec id="Sec14">
<title>Comparison to experts</title>
<p>To address the second question, one-sample
<italic>t</italic>
tests were conducted to compare the absolute performance of participants to that of the relevant expert at each viewpoint. As might be anticipated, these revealed that, whilst the novice participants performed at above-chance levels, they performed below the level of the expert in all conditions (all
<italic>t</italic>
s
<sub>(24)</sub>
 > 7.63,
<italic>p</italic>
 < 0.001).</p>
</sec>
<sec id="Sec15">
<title>Impact of viewpoint</title>
<p>To address the final question, a 2 × 3 mixed Analysis of Variance (ANOVA) was conducted to explore accuracy of performance when matching hands and ears across good, medium and poor viewpoints. For this analysis, accuracy levels were standardised by expressing them as a proportion of the performance level attained in the optimal (good) condition (see Table 
<xref rid="Tab1" ref-type="table">1</xref>
). This ensured a focus on the relative impact of a
<italic>change</italic>
in viewpoint, and prevented the findings being affected by variation in absolute levels of performance across the stimuli.</p>
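The standardisation can be illustrated as follows. The participant scores are hypothetical, but the zero variance of the standardised good condition in Table 1 suggests the ratio was computed per participant (each participant's scores divided by their own good-viewpoint score) before averaging, which is the scheme sketched here.

```python
import numpy as np

# Absolute accuracy for three hypothetical participants across viewpoints
# (illustrative values only, not the study data): columns = good, medium, poor.
absolute = np.array([
    [0.60, 0.50, 0.20],
    [0.50, 0.45, 0.25],
    [0.46, 0.40, 0.24],
])

# Standardise each participant's scores by their own 'good' score, so the
# analysis reflects the relative *change* with viewpoint rather than
# absolute ability; the good column becomes 1.0 for everyone (SD = 0).
standardised = absolute / absolute[:, [0]]
print(standardised.round(3))
```

Under this scheme, the standardised group means need not equal simple ratios of the absolute group means, which is consistent with Table 1.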
<p>The ANOVA revealed a main effect of stimulus type (
<italic>F</italic>
<sub>(1, 48)</sub>
 = 41.59,
<italic>p</italic>
 < 0.001, partial η
<sup>2</sup>
 = 0.464), with better overall performance for hands than for ears. In addition, a main effect of viewpoint emerged (
<italic>F</italic>
<sub>(2, 96)</sub>
 = 409.52,
<italic>p</italic>
 < 0.001, partial
<italic>η</italic>
<sup>2</sup>
 = 0.895), with better performance when presented with more optimal viewpoints. These effects were qualified by the expected interaction between stimulus type and viewpoint (
<italic>F</italic>
<sub>(2,96)</sub>
 = 24.79,
<italic>p</italic>
 < 0.001, partial
<italic>η</italic>
<sup>2</sup>
 = 0.34).</p>
<p>Analysis of the simple main effects confirmed a significant effect of viewpoint for both hands (
<italic>F</italic>
<sub>(2,48)</sub>
 = 47.27,
<italic>p</italic>
 < 0.001, partial
<italic>η</italic>
<sup>2</sup>
 = 0.66) and ears (
<italic>F</italic>
<sub>(2,48)</sub>
 = 233.48,
<italic>p</italic>
 < 0.001, partial
<italic>η</italic>
<sup>2</sup>
 = 0.907) suggesting that the performance for both stimulus types suffered as the view became less optimal. However, a series of Bonferroni-corrected comparisons confirmed that performance with hands was not affected by a change from good to medium images (
<italic>t</italic>
<sub>(24)</sub>
 = 2.04,
<italic>p</italic>
 > 0.05) but was only affected by a change from medium to poor images (
<italic>t</italic>
<sub>(24)</sub>
 = 6.72,
<italic>p</italic>
 < 0.001). In contrast, performance with ears was affected as soon as the image moved away from optimal, with significant differences in performance levels between good and medium images (
<italic>t</italic>
<sub>(24)</sub>
 = 14.92,
<italic>p</italic>
 < 0.001) as well as between the medium and poor images (
<italic>t</italic>
<sub>(24)</sub>
 = 4.16,
<italic>p</italic>
 < 0.001).</p>
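The follow-up comparisons above can be sketched as paired t-tests evaluated against a Bonferroni-adjusted alpha. The scores are simulated for illustration, and a two-test family is assumed since the paper does not detail the exact correction family used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 25

# Simulated standardised accuracy per participant at each viewpoint.
good = np.ones(n)                      # standardised to 1.0 by construction
medium = rng.normal(0.89, 0.26, n)
poor = rng.normal(0.50, 0.26, n)

# Bonferroni correction: split the 0.05 alpha across the two planned
# comparisons, so each paired test is judged against 0.025.
alpha = 0.05 / 2
comparisons = {"good vs medium": (good, medium),
               "medium vs poor": (medium, poor)}
for label, (a, b) in comparisons.items():
    t, p = stats.ttest_rel(a, b)       # paired-samples t-test, df = 24
    print(f"{label}: t(24) = {t:.2f}, significant at corrected alpha: {p < alpha}")
```

The correction guards against the inflated false-positive rate that would arise from running the two comparisons at an uncorrected alpha of 0.05 each.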
<p>In accounting for these results, it was possible that ear processing was more affected by a change in viewpoint than hand processing because ear processing was an inherently difficult task. Important in this regard was the demonstration of equivalent absolute levels of performance in the best image case (
<italic>t</italic>
<sub>(48)</sub>
 = 2.48,
<italic>ns</italic>
) despite the differences between hands and ears as stimuli. Consequently, the substantial impact of viewpoint for ears could not easily be attributed to an inherent difficulty when matching ears. However, the possibility remained that the difficulty when matching ears was revealed not in baseline performance levels, but in a greater vulnerability as the image quality was changed. Such an explanation was compatible with the predictions for this study in which the flexibility of the hand was expected to minimise the impact of a sub-optimal viewpoint. Indeed, these two accounts would be difficult to separate out.</p>
<p>Taking all analyses together, the results of Experiment 1 provided support for the predictions. Specifically, the change in viewpoint had a significant effect when matching hands, but had a greater effect, from an equivalent starting point, when matching ears. These results supported the prediction that the inherent flexibility of the hand enabled exposure to a variety of viewpoints, with the consequence that canonicality was less strong for hands than for ears.</p>
<p>In terms of implications for the hand as a biometric, the data here led to the conclusion that when matching hands, performance could survive moderate changes in viewpoint, whereas when matching other more rigid biometrics (such as ears), a change in viewpoint compromised performance quite substantially. As such, these data confirmed a greater reliability of the hand as a biometric cue across optimal and moderately sub-optimal viewing conditions.</p>
<p>Several aspects of the current results were interesting and unanticipated, and as such warrant some consideration. In particular, it was interesting to note the impairment in the performance of the two experts as viewpoint changed. Whilst it was not possible to assess the extent of the impact of viewpoint statistically for each of the experts (there being only one expert for each stimulus type), it was possible to determine whether the experts were affected to the same degree as the novice participants.</p>
<p>To this end, a series of one-sample
<italic>t</italic>
tests was conducted, comparing the decline in performance shown by the expert, to the decline in performance shown by the group of novices. This confirmed that novice performance declined more than expert performance as the viewpoint became less optimal. This was evident when matching ears as the image changed from good to medium (ears:
<italic>t</italic>
<sub>(24)</sub>
 = 6.25,
<italic>p</italic>
 < 0.001; hands:
<italic>t</italic>
<sub>(24)</sub>
 = 1.08,
<italic>ns</italic>
), and when matching both ears and hands as the image changed from medium to poor (ears:
<italic>t</italic>
<sub>(24)</sub>
 = 8.64,
<italic>p</italic>
 < 0.001; hands:
<italic>t</italic>
<sub>(24)</sub>
 = 11.23,
<italic>p</italic>
 < 0.001). Consequently, these results suggested that whilst the experts were affected by a change in viewpoint, they were affected less than novices.</p>
<p>This latter analysis did not sit within the main purpose of this experiment but nevertheless raised questions: for example, could the provision of training be sufficient to improve performance levels from that of the novice towards that of the expert? Relatedly, could the provision of training ameliorate the negative impact of the sub-optimal viewpoint so that trained participants come to show greater resilience than novices when presented with sub-optimal viewpoints?</p>
<p>Whilst representing an important applied issue, such questions relate well to the theoretical consideration of Blanz et al. (
<xref ref-type="bibr" rid="CR4">1999</xref>
) regarding the criteria underpinning a canonical view. Indeed, it may be argued that expertise brings with it a capacity to use a range of cues so that the matching task can still be completed even when a subset of the cues is unavailable through occlusion in a sub-optimal image. Similarly, it may be argued that expertise brings the capacity to show better understanding of function, and greater levels of exposure to non-standard viewpoints through expert study. All factors may lead to the prediction that canonicality is less strong (or the negative impact of a non-canonical image can more easily be overcome) when the viewer brings expertise to their viewing task.</p>
<p>Experiment 2 was conducted to present an examination of these emergent questions. Through the provision of video instruction, the performance of a group of ‘trained’ participants was compared to that of the novices and experts studied in Experiment 1. It was anticipated that training would improve overall levels of performance, and would reduce the impact of a change in viewpoint compared to the novices such that the performance of the trained group would more closely resemble that of the experts.</p>
</sec>
</sec>
<sec id="Sec16">
<title>Experiment 2: method</title>
<sec id="Sec17">
<title>Design</title>
<p>The design was identical to that used in Experiment 1 except that training was provided via a short video prior to completing the ‘1 in 10’ trials. Accuracy on the matching task remained as the measure of performance.</p>
</sec>
<sec id="Sec18">
<title>Participants</title>
<p>A total of 50 trained participants took part in return for a small monetary reward. Participants were randomly assigned to study either hands (
<italic>n</italic>
 = 25, 16 females) or ears (
<italic>n</italic>
 = 25, 14 females), and both the age range (
<italic>t</italic>
<sub>(48)</sub>
 < 1,
<italic>ns</italic>
) and gender split (
<inline-formula id="IEq2">
<alternatives>
<tex-math id="M3">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\chi_{(1)}^{2}$$\end{document}</tex-math>
<mml:math id="M4">
<mml:msubsup>
<mml:mi mathvariant="italic">χ</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:math>
<inline-graphic xlink:href="426_2014_625_Article_IEq2.gif"></inline-graphic>
</alternatives>
</inline-formula>
 < 1,
<italic>ns</italic>
) were matched as before across the two groups.</p>
</sec>
<sec id="Sec19">
<title>Materials</title>
<p>The ‘1 in 10’ materials were identical to those used in Experiment 1. In addition, however, two training videos were prepared. The videos lasted 12 min (hand training) and 11 min (ear training), and provided foundational input on the anatomy of the hand or ear, and the diagnostic features that would be examined by a forensic expert to determine a match between one sample and another for court purposes.</p>
</sec>
<sec id="Sec20">
<title>Procedure</title>
<p>The procedure was identical to that in Experiment 1 with the exception that participants received video training on how to examine either hands or ears depending on the condition to which they had been assigned. The completion of the ‘1 in 10’ trials followed this training, and the entire task lasted up to 45 min, after which participants were thanked and debriefed.</p>
</sec>
</sec>
<sec id="Sec21">
<title>Experiment 2: results and discussion</title>
<p>Analysis within Experiment 2 took the same format as in Experiment 1 and results are summarised in Table 
<xref rid="Tab1" ref-type="table">1</xref>
. Performance in the ‘1 in 10’ task was examined to see whether it (1) was better than chance, (2) approached the level of the experts, and (3) differed across viewpoint.</p>
<sec id="Sec22">
<title>Comparison to chance</title>
<p>In terms of absolute performance levels, a series of one-sample
<italic>t</italic>
tests confirmed that performance for both hand and ear recognition across every viewpoint exceeded the chance level of 0.1 (all
<italic>ts</italic>
<sub>(24)</sub>
 > 5.48,
<italic>p</italic>
 < 0.001). This again demonstrated that there were no floor effects within the data.</p>
</sec>
<sec id="Sec23">
<title>Comparison to experts</title>
<p>It was also evident that, whilst absolute levels of performance showed some improvement from novice levels, one-sample
<italic>t</italic>
tests still confirmed that the trained participants performed at a level below the experts in every condition (all
<italic>t</italic>
s
<sub>(24)</sub>
 > 9.37,
<italic>p</italic>
 < 0.001). This may have reflected a lack of practice in the task itself despite training, as well as those ‘hard-to-articulate’ elements of expertise that the training video could not easily provide.</p>
</sec>
<sec id="Sec023">
<title>Impact of viewpoint</title>
<p>To explore the impact of viewpoint for the trained participants only, a 2 × 3 mixed ANOVA was conducted to examine the impact of stimulus (hand, ear) and viewpoint (good, medium, poor) on accuracy of performance. As in Experiment 1, this analysis was conducted using the standardised accuracy scores so that the relative impact of a
<italic>change</italic>
in viewpoint remained the focus. The results mirrored those from Experiment 1 in all respects. Specifically, a main effect of stimulus type emerged (
<italic>F</italic>
<sub>(1, 48)</sub>
 = 4.83,
<italic>p</italic>
 < 0.05, partial
<italic>η</italic>
<sup>2</sup>
 = 0.091) with performance being better for hands than for ears. In addition, a main effect of viewpoint emerged (
<italic>F</italic>
<sub>(2, 96)</sub>
 = 160.10,
<italic>p</italic>
 < 0.001, partial
<italic>η</italic>
<sup>2</sup>
 = 0.769) with better performance when presented with more optimal viewpoints. Finally, these effects were qualified by a significant interaction between stimulus type and viewpoint (
<italic>F</italic>
<sub>(2, 96)</sub>
 = 7.14,
<italic>p</italic>
 < 0.001, partial
<italic>η</italic>
<sup>2</sup>
 = 0.129).</p>
<p>Analysis of the simple main effects revealed a significant impact of viewpoint both when matching hands (
<italic>F</italic>
<sub>(2, 48)</sub>
 = 69.50,
<italic>p</italic>
 < 0.001, partial η
<sup>2</sup>
 = 0.743) and when matching ears (
<italic>F</italic>
<sub>(2, 48)</sub>
 = 104.19,
<italic>p</italic>
 < 0.001, partial η
<sup>2</sup>
 = 0.813). Moreover, as in Experiment 1, Bonferroni-corrected comparisons confirmed that performance with hands was not affected by a change from good to medium images (
<italic>t</italic>
<sub>(24)</sub>
 = 1.88,
<italic>p</italic>
 > 0.05) but was affected by a change from medium to poor images (
<italic>t</italic>
<sub>(24)</sub>
 = 9.18,
<italic>p</italic>
 < 0.001). In contrast, ear matching was impaired both when the images changed from good to medium (
<italic>t</italic>
<sub>(24)</sub>
 = 7.47,
<italic>p</italic>
 < 0.001) and when the images changed from medium to poor (
<italic>t</italic>
<sub>(24)</sub>
 = 6.98,
<italic>p</italic>
 < 0.001).</p>
</sec>
<sec id="Sec24">
<title>Impact of training</title>
<p>Of most interest within the results was the question of whether training would improve performance in the matching task, and would ameliorate the effects of viewpoint noted in Experiment 1. To address this question, a 2 × 2 × 3 mixed ANOVA was performed on the standardised accuracy scores across Experiments 1 and 2, enabling examination of the effects of training (novice, trained), stimulus type (hands, ears), and viewpoint (good, medium, poor). The presence of the expected three-way interaction between all factors (
<italic>F</italic>
<sub>(2, 192)</sub>
 = 3.17,
<italic>p</italic>
 < 0.05, partial
<italic>η</italic>
<sup>2</sup>
 = 0.032) justified further exploration of the predictions through separate analyses for each stimulus type.</p>
</sec>
<sec id="Sec25">
<title>Performance with hands</title>
<p>A 2 × 3 ANOVA was conducted to explore the effect of training (novice, trained) and viewpoint (good, medium, poor) when matching hands. Given that the expert showed an impairment as viewpoint became poorer, it was anticipated that the moderate effect of viewpoint revealed with novice participants in Experiment 1 may remain despite the training provided in Experiment 2. However, it was hoped that the magnitude of this effect may have reduced with training. In partial support of this expectation, the ANOVA revealed a significant effect of viewpoint (
<italic>F</italic>
<sub>(2, 96)</sub>
 = 114.82,
<italic>p</italic>
 < 0.001, partial
<italic>η</italic>
<sup>2</sup>
 = 0.705). However, there was no significant effect of training (
<italic>F</italic>
<sub>(1, 48)</sub>
 < 1,
<italic>ns</italic>
). Unsurprisingly, therefore, no interaction emerged, confirming that the influence of viewpoint was not reduced by training (
<italic>F</italic>
<sub>(2, 96)</sub>
 < 1,
<italic>ns</italic>
). Indeed, both the novice and trained groups showed the same pattern of performance, with ability remaining stable as the image quality reduced from good to medium (novice:
<italic>t</italic>
<sub>(24)</sub>
 = 2.04,
<italic>ns;</italic>
trained:
<italic>t</italic>
<sub>(24)</sub>
 = 1.88,
<italic>ns</italic>
), but showing a decline as the image quality reduced further from medium to poor (novice:
<italic>t</italic>
<sub>(24)</sub>
 = 6.72,
<italic>p</italic>
 < 0.001, trained:
<italic>t</italic>
<sub>(24)</sub>
 = 9.18,
<italic>p</italic>
 < 0.001).</p>
</sec>
<sec id="Sec26">
<title>Performance with ears</title>
<p>A 2 × 3 ANOVA was conducted as above to explore the effect of training (novice, trained) and viewpoint (good, medium, poor) when matching ears. As above, it was anticipated that the effect of viewpoint noted with novices in Experiment 1 would remain, but that its magnitude may be reduced with training. Again, the ANOVA revealed the expected main effect of viewpoint (
<italic>F</italic>
<sub>(2, 96)</sub>
 = 305.43,
<italic>p</italic>
 < 0.001, partial
<italic>η</italic>
<sup>2</sup>
 = 0.86), confirming increasingly impaired performance as the image became poorer. In addition, and in contrast to the results described above, the main effect of training reached significance (
<italic>F</italic>
<sub>(1, 48)</sub>
 = 9.85,
<italic>p</italic>
 < 0.005, partial
<italic>η</italic>
<sup>2</sup>
 = 0.17) suggesting that participants performed significantly better with training than without. This was gratifying to see as it confirmed the value of the training video for the participants working with the most vulnerable stimulus set. Most importantly, however, the anticipated interaction between training and viewpoint reached significance (
<italic>F</italic>
<sub>(2, 96)</sub>
 = 7.23,
<italic>p</italic>
 < 0.001, partial
<italic>η</italic>
<sup>2</sup>
 = 0.131).</p>
<p>Post hoc contrasts confirmed that performance fell significantly for both novice and trained groups as the image quality fell from good to medium (novice:
<italic>t</italic>
<sub>(24)</sub>
 = 14.92,
<italic>p</italic>
 < 0.001; trained:
<italic>t</italic>
<sub>(24)</sub>
 = 7.47,
<italic>p</italic>
 < 0.001), and as it fell further from medium to poor (novice:
<italic>t</italic>
<sub>(24)</sub>
 = 4.16,
<italic>p</italic>
 < 0.001; trained:
<italic>t</italic>
<sub>(24)</sub>
 = 6.98,
<italic>p</italic>
 < 0.001). However, the performance of the trained group was affected less (35 %) than that of the novice group (56 %) as the image quality reduced from good to medium (
<italic>t</italic>
<sub>(48)</sub>
 = 3.45,
<italic>p</italic>
 < 0.001). Consequently, and in line with predictions, the data confirmed that training significantly minimised the negative impact of the sub-optimal image.</p>
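The 35 % and 56 % figures follow directly from the standardised ear-matching means in Table 1: the decline at the medium viewpoint is one minus the standardised medium score, since the good baseline is standardised to 1.00.

```python
# Standardised ear-matching accuracy at the medium viewpoint (Table 1).
novice_medium = 0.44
trained_medium = 0.65

# Decline from the standardised good-viewpoint baseline of 1.00.
novice_decline = 1.00 - novice_medium    # 0.56 -> 56 %
trained_decline = 1.00 - trained_medium  # 0.35 -> 35 %
print(f"novice decline: {novice_decline:.0%}, trained decline: {trained_decline:.0%}")
```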
<p>Experiment 2 was conducted to determine whether training through simple instruction would increase performance levels from those displayed by the novices in Experiment 1, and would accordingly reduce the impact of a sub-optimal viewpoint. The results in this regard are equivocal. Training only had a significant effect on performance levels when matching ears. As a consequence, these trained participants did indeed show less impact of the sub-optimal viewpoint compared to the novice participants in Experiment 1. In this regard, training achieved its predicted purpose, whilst not raising performance levels up to those of the expert and whilst not removing the viewpoint effect altogether.</p>
<p>In contrast, and somewhat disappointingly, training had no significant effect on performance when matching hands. Consequently, it was unsurprising that the viewpoint effects noted with novices in Experiment 1 remained evident for trained participants in Experiment 2. Notwithstanding this, it is worth noting that when matching hands, both novices and trained participants showed no significant decline in performance as the image quality fell from good to medium, and only showed a significant decline as the image quality fell to an unacceptably poor level.</p>
<p>In reflecting on the lack of effectiveness of the hand training video, we can find no clear and satisfactory explanation. We considered, for example, the possibility that the video training was ineffective because it was unable to capture those heuristic expert strategies that may elude conscious awareness or clear articulation. Such strategies are, by definition, difficult to convey, although it is hard to see how this might apply to the hand training video but not to the ear training video. Hence, this remains unsatisfactory as an explanation of the current pattern of results.</p>
<p>We considered, also, the possibility that the training video for hands merely formalised the approach that the novices intuitively used and thus provided no additional benefit. Indeed, the demonstration of stable performance across novices and trained participants even in the best of viewpoint conditions might lend weight to this as an explanation. Our review of the video training suggests that, whilst possible, this may be unlikely as an explanation. The hand training concentrated on noticing the existence of one hand characteristic
<italic>relative</italic>
to another (i.e., the position of skin features relative to morphological characteristics such as knuckle creases). In comparison, the novice hand participants in Experiment 1 tended to comment on isolated hand features only. Consequently, whilst possible, it seems unlikely that ineffectiveness of the training video was due to it merely formalising the intuitive strategies of the hand novices.</p>
<p>What was clear, however, was that the participants in the hand-matching task performed at an equivalent level to those in the ear-matching task and performed some way below a ceiling level of performance. Consequently, we can reject a simple explanation in terms of a lower
<italic>capacity</italic>
for those in the hand-matching condition to improve with training.</p>
<p>In conclusion, the results of Experiment 2 suggested that the capacity to match hands was not improved by training, with the consequence that small changes in viewpoint were tolerated but larger changes in viewpoint still compromised performance. However, training was effective for participants when matching ears, and as a consequence, the negative impact of moderate viewpoint changes was significantly reduced, though not removed entirely.</p>
</sec>
</sec>
<sec id="Sec27">
<title>General discussion</title>
<p>The purpose of the present paper was to provide an empirical test of the reliability of the hand as a biometric cue when matching a sample to a suspect. The particular question being asked was whether this matching task could still be performed to an adequate level when the viewpoint of the hand changed from optimal, to sub-optimal. Performance here was assessed across a moderate change in viewpoint and across a substantial change in viewpoint. In addition, performance was assessed relative to a control condition in which ears represented the biometric cue. This combination of conditions allowed a test of the prediction that the hand, as an inherently flexible biometric cue, would better survive a change to a sub-optimal viewpoint compared to the ear as a rigid biometric cue.</p>
<p>The results of Experiment 1 confirmed the predictions in all respects. Whilst the matching of both hands and ears was affected by viewpoint changes, hands were affected to a lesser extent. Indeed, no significant decline was observed in hand-matching performance when the viewpoint change was moderate, and performance declined significantly only when the viewpoint change was substantial enough to produce an unacceptably poor image. In contrast, performance significantly declined when matching ears as soon as any deviation from the ideal viewpoint was introduced. The results of Experiment 2 revealed that simple training minimised these effects for ears when moderate viewpoint changes were introduced, but could not remove the negative effects of viewpoint altogether.</p>
<p>Importantly, these results now provide a demonstration of the limits within which the matching of hand images can be considered stable and reliable. Given that the hand is a relatively new biometric, these results are important for the forensic community. Furthermore, they assume particular relevance given the recent concerns over susceptibility to bias amongst forensic scientists in exactly these sorts of matching tasks (see guidance report by the Forensic Science Regulator,
<xref ref-type="bibr" rid="CR13">2014</xref>
; commissioned report by the National Academy of Sciences (NAS),
<xref ref-type="bibr" rid="CR19">2009</xref>
). Both reports note the bias that can arise in decision making when conclusions are based on expert interpretation rather than scientific or metric analysis. Moreover, both reports note that such biases are ‘common features of decision-making and cannot be willed away’ (NAS,
<xref ref-type="bibr" rid="CR19">2009</xref>
, p. 122). For example, when making a decision on whether a latent fingerprint was a match to a suspect, expert interpretation was demonstrably affected by the presentation of fictitious contextual details designed to bias the outcome one way or the other (Dror & Charlton,
<xref ref-type="bibr" rid="CR10">2006</xref>
). Similar evidence exists in the arenas of DNA matching (Dror & Hampikian,
<xref ref-type="bibr" rid="CR11">2011</xref>
), and more recently in connection with forensic anthropology judgements of sex, ancestry and age at death (Nakhaeizadeh, Dror & Morgan,
<xref ref-type="bibr" rid="CR18">2014</xref>
). In light of such concerns over the biasing of forensic science judgements through context or framing effects, research documenting the limits of forensic interpretation takes on particular importance. Here, the demonstration of reliability when matching hands despite changes in viewpoint goes some way towards defining the value of hand matching in a legal setting.</p>
<p>Having said this, it is important to note that whilst levels of performance when matching hands were not significantly affected by moderate changes to viewpoint, the levels of performance demonstrated by both novices and trained participants were not high. In this regard, it is worth reflecting on the performance of the two experts who provided baseline data within Experiment 1.</p>
<p>Both experts were affected by a change in viewpoint, showing a small decline in performance with a moderate change in viewpoint, and a more substantial decline with a substantial change in viewpoint. Interestingly, their confidence dropped sharply when presented with poor images, and in this sense, the experts showed good metacognitive awareness that their performance had been severely compromised. Confidence ratings from the novices and trained participants suggested less awareness of their compromised performance than was shown by the experts.
<xref ref-type="fn" rid="Fn2">2</xref>
Consequently, one important difference between the experts and the participants here is not necessarily in their ability but in their awareness of their ability. This metacognitive monitoring represents an area of emergent interest in the forensic field, not least because of its potential to indicate when someone has sufficient belief in their ability to report their testimony in a formal context (see Brewer,
<xref ref-type="bibr" rid="CR5">2006</xref>
for a useful review). However, to establish forensic value, courts will have to establish the confidence levels that they deem acceptable for the purposes of evidential admissibility.</p>
<p>The current study has been heavily influenced by the applied question of whether the hand remains of value as a biometric cue despite changing viewing conditions. However, the work described here is also grounded in a well-established literature regarding canonicality. In this regard, the work presented here may usefully contribute to discussions regarding the cardinal and defining characteristics of the canonical image. In the early work of Blanz et al. (
<xref ref-type="bibr" rid="CR4">1999</xref>
), the defining attributes of the canonical image were identified as recognisability, frequency of exposure, and display of functionality. Whilst previous work had placed importance on frequency of exposure, the current results challenge this. In fact, the display of functionality may be the most important aspect of a canonical image, in that it may influence both of the other attributes. More specifically, functionality is likely to determine those distinctive aspects of an object that must be portrayed if the object is to be recognisable. Similarly, functionality is likely to influence the view of an object that is most often seen. When presented with a multifunctional or non-rigid object capable of changing shape to fulfil several functions, it is clear that several defining characteristics will make for a recognisable image, and similarly, several viewpoints will make for a good display of (at least one) function and are likely to drive frequency of exposure. The concept of a single canonical image consequently breaks down and, as seen here, performance on a simple perceptual task can remain robust across viewpoints.</p>
<p>This perspective sits well with more recent discussions regarding the importance of function in canonicality. Specifically, Woods, Moore and Newell (
<xref ref-type="bibr" rid="CR25">2008</xref>
) demonstrated the novel concept of haptic canonicality—a preferential view from which an object may be identified by touch. Their participants showed substantial consistency when orienting an object to ‘an optimal position for learning by touch’. Moreover, these canonical haptic views did indeed lead to better haptic recognition. This link between function and canonicality is supported by two observations noted within the literature. First, when imagining a familiar asymmetric object such as a teapot, right-handed participants tended to place the handle on the right as they would when grasping it (Blanz et al.,
<xref ref-type="bibr" rid="CR4">1999</xref>
, Expt 2). Second, and more interestingly, cases of agnosic patients have been documented who showed an inability to recognise an object from the retinal image, but could instantly identify the object when permitted to pick it up and handle it (see Riddoch & Humphreys,
<xref ref-type="bibr" rid="CR23">1987</xref>
).</p>
<p>Together, these findings support our demonstration that canonicality may depend critically on the display of functionality rather than just frequency of exposure. The implication tested here was that multifunctional objects would show a weaker preference for a single canonical view, and greater tolerance of changes in view. Within the forensic arena, where the matching of biometric cues may be of interest, these results hold value in defining the limits within which performance may remain reliable. However, within a more theoretical arena, these results also hold value as we refine our understanding of the canonical view.</p>
</sec>
<sec sec-type="supplementary-material">
<title>Electronic supplementary material</title>
<sec id="Sec28">
<supplementary-material content-type="local-data" id="MOESM1">
<media xlink:href="426_2014_625_MOESM1_ESM.docx">
<caption>
<p>Supplementary material 1 (DOCX 20 kb)</p>
</caption>
</media>
</supplementary-material>
</sec>
</sec>
</body>
<back>
<fn-group>
<fn id="Fn1">
<label>1</label>
<p>It should be noted that Gomez et al. (
<xref ref-type="bibr" rid="CR14">2008</xref>
, Expt 1) failed to show a clear advantage of canonical study images when participants were required to recognise objects. When conditions permitting picture-related processing were excluded, the novelty of the non-canonical images actually triggered more hits and fewer false alarms than the canonical images.</p>
</fn>
<fn id="Fn2">
<label>2</label>
<p>Whilst it was not a main focus of the current paper, confidence ratings were collected as is standard in the ‘1 in 10’ task. For an analysis of the confidence ratings within this study, please refer to supplemental materials.</p>
</fn>
</fn-group>
<ack>
<p>This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) Grant (EP/J004995/1 SID: An Exploration of SuperIdentity) awarded to the primary author. Colleagues on this grant are thanked for helpful contributions to the current work. In addition, the valuable contribution of Dr Helen Meadows and Dr Christopher Rynn is acknowledged and both are thanked for their expertise in hands and ears, respectively.</p>
<sec id="d30e1313">
<title>Ethical standard</title>
<p>All human studies reported here were approved by the local ethics committee and have been performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments. In particular, all individuals gave informed consent prior to taking part in the current studies.</p>
</sec>
</ack>
<ref-list id="Bib1">
<title>References</title>
<ref id="CR3">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Black</surname>
<given-names>SM</given-names>
</name>
<name>
<surname>Mallett</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Rynn</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Duffield</surname>
<given-names>N</given-names>
</name>
</person-group>
<article-title>Forensic hand image comparison as an aid for paedophile investigations</article-title>
<source>Police Professional</source>
<year>2009</year>
<volume>184</volume>
<fpage>21</fpage>
<lpage>24</lpage>
</element-citation>
</ref>
<ref id="CR1">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Black</surname>
<given-names>SM</given-names>
</name>
<name>
<surname>MacDonald-McMillan</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Mallett</surname>
<given-names>X</given-names>
</name>
</person-group>
<article-title>The incidence of scarring on the dorsum of the hand</article-title>
<source>International Journal of Legal Medicine</source>
<year>2013</year>
<volume>128</volume>
<issue>3</issue>
<fpage>545</fpage>
<lpage>553</lpage>
<pub-id pub-id-type="doi">10.1007/s00414-013-0834-7</pub-id>
<pub-id pub-id-type="pmid">23404533</pub-id>
</element-citation>
</ref>
<ref id="CR2">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Black</surname>
<given-names>S</given-names>
</name>
<name>
<surname>MacDonald-McMillan</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Mallett</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Rynn</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Jackson</surname>
<given-names>G</given-names>
</name>
</person-group>
<article-title>The incidence and position of melanocytic nevi for the purposes of forensic image comparison</article-title>
<source>International Journal of Legal Medicine</source>
<year>2013</year>
<volume>128</volume>
<issue>3</issue>
<fpage>535</fpage>
<lpage>543</lpage>
<pub-id pub-id-type="doi">10.1007/s00414-013-0821-z</pub-id>
<pub-id pub-id-type="pmid">23420260</pub-id>
</element-citation>
</ref>
<ref id="CR4">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Blanz</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Tarr</surname>
<given-names>MJ</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>HH</given-names>
</name>
</person-group>
<article-title>What object attributes determine canonical views?</article-title>
<source>Perception</source>
<year>1999</year>
<volume>28</volume>
<issue>5</issue>
<fpage>575</fpage>
<lpage>599</lpage>
<pub-id pub-id-type="doi">10.1068/p2897</pub-id>
<pub-id pub-id-type="pmid">10664755</pub-id>
</element-citation>
</ref>
<ref id="CR5">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brewer</surname>
<given-names>N</given-names>
</name>
</person-group>
<article-title>Uses and abuses of eyewitness identification confidence</article-title>
<source>Legal and Criminological Psychology</source>
<year>2006</year>
<volume>11</volume>
<fpage>3</fpage>
<lpage>23</lpage>
<pub-id pub-id-type="doi">10.1348/135532505X79672</pub-id>
</element-citation>
</ref>
<ref id="CR6">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bruce</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Henderson</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Greenwood</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Hancock</surname>
<given-names>PJ</given-names>
</name>
<name>
<surname>Burton</surname>
<given-names>AM</given-names>
</name>
<name>
<surname>Miller</surname>
<given-names>P</given-names>
</name>
</person-group>
<article-title>Verification of face identities from images captured on video</article-title>
<source>Journal of Experimental Psychology: Applied</source>
<year>1999</year>
<volume>5</volume>
<issue>4</issue>
<fpage>339</fpage>
<lpage>360</lpage>
</element-citation>
</ref>
<ref id="CR7">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bülthoff</surname>
<given-names>HH</given-names>
</name>
<name>
<surname>Edelman</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Tarr</surname>
<given-names>MJ</given-names>
</name>
</person-group>
<article-title>How are three-dimensional objects represented in the brain?</article-title>
<source>Cerebral Cortex</source>
<year>1995</year>
<volume>5</volume>
<issue>3</issue>
<fpage>247</fpage>
<lpage>260</lpage>
<pub-id pub-id-type="doi">10.1093/cercor/5.3.247</pub-id>
<pub-id pub-id-type="pmid">7613080</pub-id>
</element-citation>
</ref>
<ref id="CR8">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cutzu</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Edelman</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Canonical views in object representation and recognition</article-title>
<source>Vision Research</source>
<year>1994</year>
<volume>34</volume>
<issue>22</issue>
<fpage>3037</fpage>
<lpage>3056</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(94)90277-1</pub-id>
<pub-id pub-id-type="pmid">7975339</pub-id>
</element-citation>
</ref>
<ref id="CR9">
<mixed-citation publication-type="other">Delac, K., & Grgic, M. (2004). A survey of biometric recognition methods. In: Electronics in Marine, June 2004. Proceedings Elmar 2004. 46th International Symposium, pp. 184–193, IEEE.</mixed-citation>
</ref>
<ref id="CR10">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dror</surname>
<given-names>IE</given-names>
</name>
<name>
<surname>Charlton</surname>
<given-names>D</given-names>
</name>
</person-group>
<article-title>Why experts make errors</article-title>
<source>Journal of Forensic Identification</source>
<year>2006</year>
<volume>56</volume>
<fpage>600</fpage>
<lpage>616</lpage>
</element-citation>
</ref>
<ref id="CR11">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dror</surname>
<given-names>IE</given-names>
</name>
<name>
<surname>Hampikian</surname>
<given-names>G</given-names>
</name>
</person-group>
<article-title>Subjectivity and bias in forensic DNA mixture interpretation</article-title>
<source>Science and Justice</source>
<year>2011</year>
<volume>51</volume>
<fpage>204</fpage>
<lpage>208</lpage>
<pub-id pub-id-type="doi">10.1016/j.scijus.2011.08.004</pub-id>
<pub-id pub-id-type="pmid">22137054</pub-id>
</element-citation>
</ref>
<ref id="CR12">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Edelman</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>HH</given-names>
</name>
</person-group>
<article-title>Orientation dependence in the recognition of familiar and novel views of three-dimensional objects</article-title>
<source>Vision Research</source>
<year>1992</year>
<volume>32</volume>
<issue>12</issue>
<fpage>2385</fpage>
<lpage>2400</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(92)90102-O</pub-id>
<pub-id pub-id-type="pmid">1288015</pub-id>
</element-citation>
</ref>
<ref id="CR13">
<mixed-citation publication-type="other">Forensic Science Regulator (2014). Draft guidance: cognitive bias effects relevant to forensic science examinations.
<ext-link ext-link-type="uri" xlink:href="https://www.gov.uk/government/consultations/cognitive-bias-effects-relevant-to-forensic-science-examinations-draft-guidance">https://www.gov.uk/government/consultations/cognitive-bias-effects-relevant-to-forensic-science-examinations-draft-guidance</ext-link>
.</mixed-citation>
</ref>
<ref id="CR14">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gomez</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Shutter</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Rouder</surname>
<given-names>JN</given-names>
</name>
</person-group>
<article-title>Memory for objects in canonical and noncanonical viewpoints</article-title>
<source>Psychonomic Bulletin and Review</source>
<year>2008</year>
<volume>1</volume>
<issue>5</issue>
<fpage>940</fpage>
<lpage>944</lpage>
<pub-id pub-id-type="doi">10.3758/PBR.15.5.940</pub-id>
<pub-id pub-id-type="pmid">18926985</pub-id>
</element-citation>
</ref>
<ref id="CR15">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jackson</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Black</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Use of data to inform expert evaluative opinion in the comparison of hand images—the importance of scars</article-title>
<source>International Journal of Legal Medicine</source>
<year>2013</year>
<volume>128</volume>
<issue>3</issue>
<fpage>555</fpage>
<lpage>563</lpage>
<pub-id pub-id-type="doi">10.1007/s00414-013-0828-5</pub-id>
<pub-id pub-id-type="pmid">23381577</pub-id>
</element-citation>
</ref>
<ref id="CR17">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Laeng</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Rouw</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>Canonical views of faces and the cerebral hemispheres</article-title>
<source>Laterality</source>
<year>2001</year>
<volume>6</volume>
<issue>3</issue>
<fpage>193</fpage>
<lpage>224</lpage>
<pub-id pub-id-type="pmid">15513170</pub-id>
</element-citation>
</ref>
<ref id="CR16">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Laeng</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Carlesimo</surname>
<given-names>GA</given-names>
</name>
<name>
<surname>Caltagirone</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Capasso</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Miceli</surname>
<given-names>G</given-names>
</name>
</person-group>
<article-title>Rigid and nonrigid objects in canonical and noncanonical views: hemisphere-specific effects on object identification</article-title>
<source>Cognitive Neuropsychology</source>
<year>2002</year>
<volume>19</volume>
<issue>8</issue>
<fpage>697</fpage>
<lpage>720</lpage>
<pub-id pub-id-type="doi">10.1080/02643290244000121</pub-id>
<pub-id pub-id-type="pmid">20957560</pub-id>
</element-citation>
</ref>
<ref id="CR18">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nakhaeizadeh</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Dror</surname>
<given-names>IE</given-names>
</name>
<name>
<surname>Morgan</surname>
<given-names>RM</given-names>
</name>
</person-group>
<article-title>Cognitive bias in forensic anthropology: visual assessment of skeletal remains is susceptible to confirmation bias</article-title>
<source>Science and Justice</source>
<year>2014</year>
<volume>54</volume>
<fpage>208</fpage>
<lpage>214</lpage>
<pub-id pub-id-type="doi">10.1016/j.scijus.2013.11.003</pub-id>
<pub-id pub-id-type="pmid">24796950</pub-id>
</element-citation>
</ref>
<ref id="CR19">
<mixed-citation publication-type="other">National Academy of Sciences. (2009). Strengthening forensic science in the united states: a path forward.
<ext-link ext-link-type="uri" xlink:href="http://www.nap.edu/catalog/12589.html">http://www.nap.edu/catalog/12589.html</ext-link>
(ISBN: 0-309-13131-6).</mixed-citation>
</ref>
<ref id="CR20">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Newell</surname>
<given-names>FN</given-names>
</name>
<name>
<surname>Brown</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Findlay</surname>
<given-names>JM</given-names>
</name>
</person-group>
<article-title>Is object search mediated by object-based or image-based representations?</article-title>
<source>Spatial Vision</source>
<year>2004</year>
<volume>17</volume>
<issue>4–5</issue>
<fpage>511</fpage>
<lpage>541</lpage>
<pub-id pub-id-type="doi">10.1163/1568568041920140</pub-id>
<pub-id pub-id-type="pmid">15559117</pub-id>
</element-citation>
</ref>
<ref id="CR21">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Palmer</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Rosch</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Chase</surname>
<given-names>P</given-names>
</name>
</person-group>
<article-title>Canonical perspective and the perception of objects</article-title>
<source>Attention and performance IX</source>
<year>1981</year>
<fpage>135</fpage>
<lpage>151</lpage>
</element-citation>
</ref>
<ref id="CR22">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Perrett</surname>
<given-names>DI</given-names>
</name>
<name>
<surname>Harries</surname>
<given-names>MH</given-names>
</name>
</person-group>
<article-title>Characteristic views and the visual inspection of simple faceted and smooth objects: tetrahedra and potatoes</article-title>
<source>Perception</source>
<year>1988</year>
<volume>17</volume>
<issue>6</issue>
<fpage>703</fpage>
<lpage>720</lpage>
<pub-id pub-id-type="doi">10.1068/p170703</pub-id>
<pub-id pub-id-type="pmid">3253674</pub-id>
</element-citation>
</ref>
<ref id="CR23">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Riddoch</surname>
<given-names>MJ</given-names>
</name>
<name>
<surname>Humphreys</surname>
<given-names>GW</given-names>
</name>
</person-group>
<article-title>A case of integrative visual agnosia</article-title>
<source>Brain</source>
<year>1987</year>
<volume>110</volume>
<fpage>1431</fpage>
<lpage>1462</lpage>
<pub-id pub-id-type="doi">10.1093/brain/110.6.1431</pub-id>
<pub-id pub-id-type="pmid">3427396</pub-id>
</element-citation>
</ref>
<ref id="CR24">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Troje</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>HH</given-names>
</name>
</person-group>
<article-title>Face recognition under varying pose: the role of texture and shape</article-title>
<source>Vision Research</source>
<year>1996</year>
<volume>36</volume>
<issue>12</issue>
<fpage>1761</fpage>
<lpage>1771</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(95)00230-8</pub-id>
<pub-id pub-id-type="pmid">8759445</pub-id>
</element-citation>
</ref>
<ref id="CR25">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Woods</surname>
<given-names>AT</given-names>
</name>
<name>
<surname>Moore</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Newell</surname>
<given-names>FN</given-names>
</name>
</person-group>
<article-title>Canonical views in haptic object perception</article-title>
<source>Perception</source>
<year>2008</year>
<volume>37</volume>
<issue>12</issue>
<fpage>1867</fpage>
<lpage>1878</lpage>
<pub-id pub-id-type="doi">10.1068/p6038</pub-id>
<pub-id pub-id-type="pmid">19227377</pub-id>
</element-citation>
</ref>
<ref id="CR26">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yan</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Bowyer</surname>
<given-names>KW</given-names>
</name>
</person-group>
<article-title>Biometric recognition using 3D ear shape</article-title>
<source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
<year>2007</year>
<volume>29</volume>
<issue>8</issue>
<fpage>1297</fpage>
<lpage>1308</lpage>
<pub-id pub-id-type="pmid">17568136</pub-id>
</element-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000878 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 000878 | SxmlIndent | more

To put a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:4624835
   |texte=   Testing the reliability of hands and ears as biometrics: the importance of viewpoint
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:25410711" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024