Serveur d'exploration sur les dispositifs haptiques


Semantic descriptor ranking: a quantitative method for evaluating qualitative verbal reports of visual cognition in the laboratory or the clinic

Internal identifier: 002D90 (Ncbi/Merge); previous: 002D89; next: 002D91


Authors: Matthew Maestri [United States]; Jeffrey Odel [United States]; Jay Hegdé [United States]

Source :

RBID : PMC:3941477

Abstract

For scientific, clinical, and machine learning purposes alike, it is desirable to quantify the verbal reports of high-level visual percepts. Methods to do this simply do not exist at present. Here we propose a novel methodological principle to help fill this gap, and provide empirical evidence designed to serve as the initial “proof” of this principle. In the proposed method, subjects view images of real-world scenes and describe, in their own words, what they saw. The verbal description is independently evaluated by several evaluators. Each evaluator assigns a rank score to the subject’s description of each visual object in each image using a novel ranking principle, which takes advantage of the well-known fact that semantic descriptions of real-life objects and scenes can usually be rank-ordered. Thus, for instance, “animal,” “dog,” and “retriever” can be regarded as increasingly finer-level, and therefore higher ranking, descriptions of a given object. These numeric scores can preserve the richness of the original verbal description, and can be subsequently evaluated using conventional statistical procedures. We describe an exemplar implementation of this method and empirical data that show its feasibility. With appropriate future standardization and validation, this novel method can serve as an important tool to help quantify the subjective experience of the visual world. In addition to being a novel, potentially powerful testing tool, our method also represents, to our knowledge, the only available method for numerically representing verbal accounts of real-world experience. Given its minimal requirements, i.e., a verbal description and the ground truth that elicited the description, our method has a wide variety of potential real-world applications.


Url:
DOI: 10.3389/fpsyg.2014.00160
PubMed: 24624102
PubMed Central: 3941477


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Semantic descriptor ranking: a quantitative method for evaluating qualitative verbal reports of visual cognition in the laboratory or the clinic</title>
<author>
<name sortKey="Maestri, Matthew" sort="Maestri, Matthew" uniqKey="Maestri M" first="Matthew" last="Maestri">Matthew Maestri</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>James and Jean Culver Vision Discovery Institute, Georgia Regents University</institution>
<country>Augusta, GA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Brain and Behavior Discovery Institute, Georgia Regents University</institution>
<country>Augusta, GA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Odel, Jeffrey" sort="Odel, Jeffrey" uniqKey="Odel J" first="Jeffrey" last="Odel">Jeffrey Odel</name>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Department of Ophthalmology, Columbia University College of Physicians and Surgeons</institution>
<country>New York, NY, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Hegde, Jay" sort="Hegde, Jay" uniqKey="Hegde J" first="Jay" last="Hegdé">Jay Hegdé</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>James and Jean Culver Vision Discovery Institute, Georgia Regents University</institution>
<country>Augusta, GA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Brain and Behavior Discovery Institute, Georgia Regents University</institution>
<country>Augusta, GA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<institution>Department of Ophthalmology, Medical College of Georgia, Georgia Regents University</institution>
<country>Augusta, GA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">24624102</idno>
<idno type="pmc">3941477</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3941477</idno>
<idno type="RBID">PMC:3941477</idno>
<idno type="doi">10.3389/fpsyg.2014.00160</idno>
<date when="2014">2014</date>
<idno type="wicri:Area/Pmc/Corpus">001E97</idno>
<idno type="wicri:Area/Pmc/Curation">001E97</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000958</idno>
<idno type="wicri:Area/Ncbi/Merge">002D90</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Semantic descriptor ranking: a quantitative method for evaluating qualitative verbal reports of visual cognition in the laboratory or the clinic</title>
<author>
<name sortKey="Maestri, Matthew" sort="Maestri, Matthew" uniqKey="Maestri M" first="Matthew" last="Maestri">Matthew Maestri</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>James and Jean Culver Vision Discovery Institute, Georgia Regents University</institution>
<country>Augusta, GA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Brain and Behavior Discovery Institute, Georgia Regents University</institution>
<country>Augusta, GA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Odel, Jeffrey" sort="Odel, Jeffrey" uniqKey="Odel J" first="Jeffrey" last="Odel">Jeffrey Odel</name>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Department of Ophthalmology, Columbia University College of Physicians and Surgeons</institution>
<country>New York, NY, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Hegde, Jay" sort="Hegde, Jay" uniqKey="Hegde J" first="Jay" last="Hegdé">Jay Hegdé</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>James and Jean Culver Vision Discovery Institute, Georgia Regents University</institution>
<country>Augusta, GA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Brain and Behavior Discovery Institute, Georgia Regents University</institution>
<country>Augusta, GA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<institution>Department of Ophthalmology, Medical College of Georgia, Georgia Regents University</institution>
<country>Augusta, GA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in Psychology</title>
<idno type="eISSN">1664-1078</idno>
<imprint>
<date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>For scientific, clinical, and machine learning purposes alike, it is desirable to quantify the verbal reports of high-level visual percepts. Methods to do this simply do not exist at present. Here we propose a novel methodological principle to help fill this gap, and provide empirical evidence designed to serve as the initial “proof” of this principle. In the proposed method, subjects view images of real-world scenes and describe, in their own words, what they saw. The verbal description is independently evaluated by several evaluators. Each evaluator assigns a rank score to the subject’s description of each visual object in each image using a novel ranking principle, which takes advantage of the well-known fact that semantic descriptions of real-life objects and scenes can usually be rank-ordered. Thus, for instance, “animal,” “dog,” and “retriever” can be regarded as increasingly finer-level, and therefore higher ranking, descriptions of a given object. These numeric scores can preserve the richness of the original verbal description, and can be subsequently evaluated using conventional statistical procedures. We describe an exemplar implementation of this method and empirical data that show its feasibility. With appropriate future standardization and validation, this novel method can serve as an important tool to help quantify the subjective experience of the visual world. In addition to being a novel, potentially powerful testing tool, our method also represents, to our knowledge, the only available method for numerically representing verbal accounts of real-world experience. Given its minimal requirements, i.e., a verbal description and the ground truth that elicited the description, our method has a wide variety of potential real-world applications.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Atkinson, A P" uniqKey="Atkinson A">A. P. Atkinson</name>
</author>
<author>
<name sortKey="Adolphs, R" uniqKey="Adolphs R">R. Adolphs</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Auerbach, C F" uniqKey="Auerbach C">C. F. Auerbach</name>
</author>
<author>
<name sortKey="Silverstein, L B" uniqKey="Silverstein L">L. B. Silverstein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Barton, J J" uniqKey="Barton J">J. J. Barton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bazeley, P" uniqKey="Bazeley P">P. Bazeley</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Belongie, S" uniqKey="Belongie S">S. Belongie</name>
</author>
<author>
<name sortKey="Malik, J" uniqKey="Malik J">J. Malik</name>
</author>
<author>
<name sortKey="Puzicha, J" uniqKey="Puzicha J">J. Puzicha</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brennan, R L" uniqKey="Brennan R">R. L. Brennan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chiappelli, F" uniqKey="Chiappelli F">F. Chiappelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Denzin, N K" uniqKey="Denzin N">N. K. Denzin</name>
</author>
<author>
<name sortKey="Lincoln, Y S" uniqKey="Lincoln Y">Y. S. Lincoln</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Downing, S M" uniqKey="Downing S">S. M. Downing</name>
</author>
<author>
<name sortKey="Haladyna, T M" uniqKey="Haladyna T">T. M. Haladyna</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fei Fei, L" uniqKey="Fei Fei L">L. Fei-Fei</name>
</author>
<author>
<name sortKey="Iyer, A" uniqKey="Iyer A">A. Iyer</name>
</author>
<author>
<name sortKey="Koch, C" uniqKey="Koch C">C. Koch</name>
</author>
<author>
<name sortKey="Perona, P" uniqKey="Perona P">P. Perona</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gainotti, G" uniqKey="Gainotti G">G. Gainotti</name>
</author>
<author>
<name sortKey="D Rme, P" uniqKey="D Rme P">P D’Erme</name>
</author>
<author>
<name sortKey="De Bonis, C" uniqKey="De Bonis C">C. De Bonis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gainotti, G" uniqKey="Gainotti G">G. Gainotti</name>
</author>
<author>
<name sortKey="D Rme, P" uniqKey="D Rme P">P. D’Erme</name>
</author>
<author>
<name sortKey="Diodato, S" uniqKey="Diodato S">S. Diodato</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gauthier, I" uniqKey="Gauthier I">I. Gauthier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Geissler, H G" uniqKey="Geissler H">H.-G. Geissler</name>
</author>
<author>
<name sortKey="Link, S W" uniqKey="Link S">S. W. Link</name>
</author>
<author>
<name sortKey="Townsend, J T" uniqKey="Townsend J">J. T. Townsend</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gescheider, G A" uniqKey="Gescheider G">G. A. Gescheider</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Glozman, J M" uniqKey="Glozman J">J. M. Glozman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Green, D M" uniqKey="Green D">D. M. Green</name>
</author>
<author>
<name sortKey="Swets, J A" uniqKey="Swets J">J. A. Swets</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gregory, R J" uniqKey="Gregory R">R. J. Gregory</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grill Spector, K" uniqKey="Grill Spector K">K. Grill-Spector</name>
</author>
<author>
<name sortKey="Kanwisher, N" uniqKey="Kanwisher N">N. Kanwisher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gustafson, P" uniqKey="Gustafson P">P. Gustafson</name>
</author>
<author>
<name sortKey="Mccandless, L C" uniqKey="Mccandless L">L. C. McCandless</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Haghi, A K" uniqKey="Haghi A">A. K. Haghi</name>
</author>
<author>
<name sortKey="Rocci, L" uniqKey="Rocci L">L. Rocci</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hanson, C" uniqKey="Hanson C">C. Hanson</name>
</author>
<author>
<name sortKey="Hanson, S J" uniqKey="Hanson S">S. J. Hanson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hartas, D" uniqKey="Hartas D">D. Hartas</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hegde, J" uniqKey="Hegde J">J. Hegdé</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Henson, R N" uniqKey="Henson R">R. N. Henson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Holcomb, P J" uniqKey="Holcomb P">P. J. Holcomb</name>
</author>
<author>
<name sortKey="Grainger, J" uniqKey="Grainger J">J. Grainger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Joy, S" uniqKey="Joy S">S. Joy</name>
</author>
<author>
<name sortKey="Fein, D" uniqKey="Fein D">D. Fein</name>
</author>
<author>
<name sortKey="Kaplan, E" uniqKey="Kaplan E">E. Kaplan</name>
</author>
<author>
<name sortKey="Freedman, M" uniqKey="Freedman M">M. Freedman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kristjansson, A" uniqKey="Kristjansson A">A. Kristjansson</name>
</author>
<author>
<name sortKey="Campana, G" uniqKey="Campana G">G. Campana</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lazebnik, S" uniqKey="Lazebnik S">S. Lazebnik</name>
</author>
<author>
<name sortKey="Schmid, C" uniqKey="Schmid C">C. Schmid</name>
</author>
<author>
<name sortKey="Ponce, J" uniqKey="Ponce J">J. Ponce</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lezak, M D" uniqKey="Lezak M">M. D. Lezak</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liu, J" uniqKey="Liu J">J. Liu</name>
</author>
<author>
<name sortKey="Harris, A" uniqKey="Harris A">A. Harris</name>
</author>
<author>
<name sortKey="Kanwisher, N" uniqKey="Kanwisher N">N. Kanwisher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lowe, D G" uniqKey="Lowe D">D. G. Lowe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mack, M L" uniqKey="Mack M">M. L. Mack</name>
</author>
<author>
<name sortKey="Gauthier, I" uniqKey="Gauthier I">I. Gauthier</name>
</author>
<author>
<name sortKey="Sadr, J" uniqKey="Sadr J">J. Sadr</name>
</author>
<author>
<name sortKey="Palmeri, T J" uniqKey="Palmeri T">T. J. Palmeri</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mack, M L" uniqKey="Mack M">M. L. Mack</name>
</author>
<author>
<name sortKey="Wong, A C" uniqKey="Wong A">A. C. Wong</name>
</author>
<author>
<name sortKey="Gauthier, I" uniqKey="Gauthier I">I. Gauthier</name>
</author>
<author>
<name sortKey="Tanaka, J W" uniqKey="Tanaka J">J. W. Tanaka</name>
</author>
<author>
<name sortKey="Palmeri, T J" uniqKey="Palmeri T">T. J. Palmeri</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Manjunath, B S S" uniqKey="Manjunath B">B. S. S. Manjunath</name>
</author>
<author>
<name sortKey="Salembier, P" uniqKey="Salembier P">P. Salembier</name>
</author>
<author>
<name sortKey="Sikora, T" uniqKey="Sikora T">T. Sikora</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maxfield, L" uniqKey="Maxfield L">L. Maxfield</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mikolajczyk, K" uniqKey="Mikolajczyk K">K. Mikolajczyk</name>
</author>
<author>
<name sortKey="Schmid, C" uniqKey="Schmid C">C. Schmid</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Milberg, W P" uniqKey="Milberg W">W. P. Milberg</name>
</author>
<author>
<name sortKey="Hebben, N" uniqKey="Hebben N">N. Hebben</name>
</author>
<author>
<name sortKey="Kaplan, E" uniqKey="Kaplan E">E. Kaplan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miles, M B" uniqKey="Miles M">M. B. Miles</name>
</author>
<author>
<name sortKey="Huberman, A M" uniqKey="Huberman A">A. M. Huberman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mitchell, K J" uniqKey="Mitchell K">K. J. Mitchell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ochsner, K N" uniqKey="Ochsner K">K. N. Ochsner</name>
</author>
<author>
<name sortKey="Chiu, C Y" uniqKey="Chiu C">C. Y. Chiu</name>
</author>
<author>
<name sortKey="Schacter, D L" uniqKey="Schacter D">D. L. Schacter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ogden Epker, M" uniqKey="Ogden Epker M">M. Ogden-Epker</name>
</author>
<author>
<name sortKey="Cullum, C M" uniqKey="Cullum C">C. M. Cullum</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pachalska, M" uniqKey="Pachalska M">M. Pachalska</name>
</author>
<author>
<name sortKey="Grochmal Bach, B" uniqKey="Grochmal Bach B">B. Grochmal-Bach</name>
</author>
<author>
<name sortKey="Macqueen, B D" uniqKey="Macqueen B">B. D. Macqueen</name>
</author>
<author>
<name sortKey="Wilk, M" uniqKey="Wilk M">M. Wilk</name>
</author>
<author>
<name sortKey="Lipowska, M" uniqKey="Lipowska M">M. Lipowska</name>
</author>
<author>
<name sortKey="Herman Sucharska, I" uniqKey="Herman Sucharska I">I. Herman-Sucharska</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Palmeri, T J" uniqKey="Palmeri T">T. J. Palmeri</name>
</author>
<author>
<name sortKey="Gauthier, I" uniqKey="Gauthier I">I. Gauthier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Phelps, R P" uniqKey="Phelps R">R. P. Phelps</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pollefeys, M" uniqKey="Pollefeys M">M. Pollefeys</name>
</author>
<author>
<name sortKey="Gool, L V" uniqKey="Gool L">L. V. Gool</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Poreh, A M" uniqKey="Poreh A">A. M. Poreh</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Reynolds, C R" uniqKey="Reynolds C">C. R. Reynolds</name>
</author>
<author>
<name sortKey="Willson, V L" uniqKey="Willson V">V. L. Willson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rosch, E" uniqKey="Rosch E">E. Rosch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Salkind, N J" uniqKey="Salkind N">N. J. Salkind</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sauro, J" uniqKey="Sauro J">J. Sauro</name>
</author>
<author>
<name sortKey="Lewis, J R" uniqKey="Lewis J">J. R. Lewis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Snavely, N" uniqKey="Snavely N">N. Snavely</name>
</author>
<author>
<name sortKey="Seitz, S M" uniqKey="Seitz S">S. M. Seitz</name>
</author>
<author>
<name sortKey="Szeliski, R" uniqKey="Szeliski R">R. Szeliski</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sugase, Y" uniqKey="Sugase Y">Y. Sugase</name>
</author>
<author>
<name sortKey="Yamane, S" uniqKey="Yamane S">S. Yamane</name>
</author>
<author>
<name sortKey="Ueno, S" uniqKey="Ueno S">S. Ueno</name>
</author>
<author>
<name sortKey="Kawano, K" uniqKey="Kawano K">K. Kawano</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Torrance, H" uniqKey="Torrance H">H. Torrance</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Udupa, J K" uniqKey="Udupa J">J. K. Udupa</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Willig, C R" uniqKey="Willig C">C. R. Willig</name>
</author>
<author>
<name sortKey="Stainton Rogers, W" uniqKey="Stainton Rogers W">W. Stainton-Rogers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Xu, Y" uniqKey="Xu Y">Y. Xu</name>
</author>
<author>
<name sortKey="Liu, J" uniqKey="Liu J">J. Liu</name>
</author>
<author>
<name sortKey="Kanwisher, N" uniqKey="Kanwisher N">N. Kanwisher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zipf Williams, E M" uniqKey="Zipf Williams E">E. M. Zipf-Williams</name>
</author>
<author>
<name sortKey="Shear, P K" uniqKey="Shear P">P. K. Shear</name>
</author>
<author>
<name sortKey="Strongin, D" uniqKey="Strongin D">D. Strongin</name>
</author>
<author>
<name sortKey="Winegarden, B J" uniqKey="Winegarden B">B. J. Winegarden</name>
</author>
<author>
<name sortKey="Morrell, M J" uniqKey="Morrell M">M. J. Morrell</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Psychol</journal-id>
<journal-id journal-id-type="iso-abbrev">Front Psychol</journal-id>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Psychology</journal-title>
</journal-title-group>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">24624102</article-id>
<article-id pub-id-type="pmc">3941477</article-id>
<article-id pub-id-type="doi">10.3389/fpsyg.2014.00160</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Methods Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Semantic descriptor ranking: a quantitative method for evaluating qualitative verbal reports of visual cognition in the laboratory or the clinic</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Maestri</surname>
<given-names>Matthew</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Odel</surname>
<given-names>Jeffrey</given-names>
</name>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Hegdé</surname>
<given-names>Jay</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="aff" rid="aff4">
<sup>4</sup>
</xref>
<xref ref-type="author-notes" rid="fn001">
<sup>*</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>James and Jean Culver Vision Discovery Institute, Georgia Regents University</institution>
<country>Augusta, GA, USA</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Brain and Behavior Discovery Institute, Georgia Regents University</institution>
<country>Augusta, GA, USA</country>
</aff>
<aff id="aff3">
<sup>3</sup>
<institution>Department of Ophthalmology, Columbia University College of Physicians and Surgeons</institution>
<country>New York, NY, USA</country>
</aff>
<aff id="aff4">
<sup>4</sup>
<institution>Department of Ophthalmology, Medical College of Georgia, Georgia Regents University</institution>
<country>Augusta, GA, USA</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by:
<italic>Holmes Finch, Ball State University, USA</italic>
</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by:
<italic>Holmes Finch, Ball State University, USA; Jocelyn E. Bolin, Ball State University, USA</italic>
</p>
</fn>
<corresp id="fn001">*Correspondence:
<italic>Jay Hegdé, James and Jean Culver Vision Discovery Institute, Georgia Regents University, CL-3033, 1120 15th Street, Augusta, GA 30912, USA e-mail:
<email xlink:type="simple">jay@hegde.us</email>
</italic>
</corresp>
<fn fn-type="other" id="fn002">
<p>This article was submitted to Quantitative Psychology and Measurement, a section of the journal Frontiers in Psychology.</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>04</day>
<month>3</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<volume>5</volume>
<elocation-id>160</elocation-id>
<history>
<date date-type="received">
<day>21</day>
<month>8</month>
<year>2013</year>
</date>
<date date-type="accepted">
<day>09</day>
<month>2</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2014 Maestri, Odel and Hegdé.</copyright-statement>
<copyright-year>2014</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/3.0/">
<license-p> This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>For scientific, clinical, and machine learning purposes alike, it is desirable to quantify the verbal reports of high-level visual percepts. Methods to do this simply do not exist at present. Here we propose a novel methodological principle to help fill this gap, and provide empirical evidence designed to serve as the initial “proof” of this principle. In the proposed method, subjects view images of real-world scenes and describe, in their own words, what they saw. The verbal description is independently evaluated by several evaluators. Each evaluator assigns a rank score to the subject’s description of each visual object in each image using a novel ranking principle, which takes advantage of the well-known fact that semantic descriptions of real-life objects and scenes can usually be rank-ordered. Thus, for instance, “animal,” “dog,” and “retriever” can be regarded as increasingly finer-level, and therefore higher ranking, descriptions of a given object. These numeric scores can preserve the richness of the original verbal description, and can be subsequently evaluated using conventional statistical procedures. We describe an exemplar implementation of this method and empirical data that show its feasibility. With appropriate future standardization and validation, this novel method can serve as an important tool to help quantify the subjective experience of the visual world. In addition to being a novel, potentially powerful testing tool, our method also represents, to our knowledge, the only available method for numerically representing verbal accounts of real-world experience. Given its minimal requirements, i.e., a verbal description and the ground truth that elicited the description, our method has a wide variety of potential real-world applications.</p>
</abstract>
<kwd-group>
<kwd>qualitative research</kwd>
<kwd>natural language processing</kwd>
<kwd>semantic processing</kwd>
<kwd>visual cognition</kwd>
<kwd>neuropsychological tests</kwd>
</kwd-group>
<counts>
<fig-count count="4"></fig-count>
<table-count count="1"></table-count>
<equation-count count="0"></equation-count>
<ref-count count="58"></ref-count>
<page-count count="9"></page-count>
<word-count count="0"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec>
<title>INTRODUCTION</title>
<p>In real-world situations, our perception of visual scenes tends to be complex and nuanced, with rich semantic content. Capturing this complexity is critical not only for the study and treatment of visual dysfunction, but also for the study of normal visual function. For practical reasons, the available quantitative tests of visual perception tend to use relatively simple visual stimuli and tasks that constrain the responses of the test subject (e.g., contrast sensitivity test, line bisection test, star cancellation test), so that the responses can be precisely measured and quantitatively analyzed (
<xref rid="B17" ref-type="bibr">Green and Swets, 1974</xref>
;
<xref rid="B15" ref-type="bibr">Gescheider, 1997</xref>
;
<xref rid="B30" ref-type="bibr">Lezak, 2012</xref>
).</p>
<p>The importance and usefulness of traditional quantitative tests in research and clinical settings are indisputable. But it is also clear that quantitative tests of visual perception have a major drawback, in that they fail to capture the complexity of visual function and dysfunction in real life. That is, the complex, qualitative nature of normal high-level visual perception under real-life conditions is all but impossible to measure using the available quantitative tests. Impairments of high-level visual perception are similarly hard to measure.</p>
<p>At the other end of the visual-testing spectrum, qualitative tests of visual function have a roughly complementary set of strengths and weaknesses: while they are much better at capturing the nuances of high-level vision under real-world conditions, the outcomes of these tests are hard to quantify (
<xref rid="B39" ref-type="bibr">Miles and Huberman, 1994</xref>
;
<xref rid="B47" ref-type="bibr">Poreh, 2000</xref>
;
<xref rid="B42" ref-type="bibr">Ogden-Epker and Cullum, 2001</xref>
). Imagine, for instance, a clinical provider trying to quantify the visual deficit in a patient with agnosia, i.e., an inability to recognize objects. A typical test is to show the patient drawings of everyday objects, such as a pen or a mug, and ask him/her to redraw and name each one. Patients with clear-cut apperceptive agnosia fail both to draw and to name the object, whereas patients with clear-cut associative agnosia are generally able to draw the object but not to name it. Even when the outcome of the test is as clear-cut as this, it is hard to measure the quality and completeness of the drawings and the naming. Moreover, actual clinical outcomes are rarely this clear-cut, with most patients showing symptoms that cannot be neatly pigeonholed into either of the above two extremes (
<xref rid="B1" ref-type="bibr">Atkinson and Adolphs, 2011</xref>
;
<xref rid="B3" ref-type="bibr">Barton, 2011</xref>
;
<xref rid="B40" ref-type="bibr">Mitchell, 2011</xref>
). Furthermore, the outcomes of this test are affected by an array of complexities of agnosia. Thus, while the test outcomes are rich in qualitative information, it is hard to measure this information. This is a well-documented shortcoming of qualitative tests in general (
<xref rid="B12" ref-type="bibr">Gainotti et al., 1985</xref>
,
<xref rid="B11" ref-type="bibr">1989</xref>
;
<xref rid="B38" ref-type="bibr">Milberg et al., 1996</xref>
;
<xref rid="B16" ref-type="bibr">Glozman, 1999</xref>
;
<xref rid="B42" ref-type="bibr">Ogden-Epker and Cullum, 2001</xref>
;
<xref rid="B43" ref-type="bibr">Pachalska et al., 2008</xref>
).</p>
<p>Quantifying qualitative reports would effectively meld the best of both worlds, by combining the ability of qualitative methods to capture the richness of visual experience in the real world with the scientific rigor of quantitative methods. A large number of such methods have been developed, with applications in clinical care, educational testing, machine learning, and scientific research (for reviews, see
<xref rid="B55" ref-type="bibr">Udupa, 1999</xref>
;
<xref rid="B2" ref-type="bibr">Auerbach and Silverstein, 2003</xref>
;
<xref rid="B20" ref-type="bibr">Gustafson and McCandless, 2010</xref>
;
<xref rid="B51" ref-type="bibr">Sauro and Lewis, 2012</xref>
;
<xref rid="B4" ref-type="bibr">Bazeley, 2013</xref>
). While a review of this large and diverse literature is beyond the purview of the present report, two aspects of the quantification process are particularly worth noting. First, the existing methods generally require that the qualitative report be formatted or structured (e.g., as questionnaires), so as to streamline the quantification process. That is, the underlying qualitative reports are generally not open-ended. Second, to our knowledge, no methods exist in the clinical, psychophysical, or machine learning literature for creating a numeric representation of verbal reports. This latter issue is particularly relevant when dealing with real-world visual percepts, which have rich semantic content (
<xref rid="B39" ref-type="bibr">Miles and Huberman, 1994</xref>
;
<xref rid="B16" ref-type="bibr">Glozman, 1999</xref>
;
<xref rid="B47" ref-type="bibr">Poreh, 2000</xref>
;
<xref rid="B58" ref-type="bibr">Zipf-Williams et al., 2000</xref>
;
<xref rid="B27" ref-type="bibr">Joy et al., 2001</xref>
;
<xref rid="B42" ref-type="bibr">Ogden-Epker and Cullum, 2001</xref>
).</p>
<p>In this report, we propose a novel methodological principle that will help address both of the aforementioned shortcomings of the currently available approaches, and is well suited to complement (albeit not replace) the rich array of available methods. Our method, which we will refer to as semantic descriptor ranking (SDR), allows quantification of open-ended, verbal reports of visual scenes. We illustrate its implementation using perceptual reports of complex real-world scenes by healthy subjects. As noted above, the present report only aims to provide a proof of concept of the proposed method, i.e., that the proposed method is feasible. Our implementation will also help highlight issues involved in the future development and refinement of the proposed method, including its standardization and validation (
<xref rid="B16" ref-type="bibr">Glozman, 1999</xref>
;
<xref rid="B42" ref-type="bibr">Ogden-Epker and Cullum, 2001</xref>
;
<xref rid="B7" ref-type="bibr">Chiappelli, 2008</xref>
;
<xref rid="B50" ref-type="bibr">Salkind, 2010</xref>
).</p>
</sec>
<sec sec-type="materials|methods" id="s1">
<title>MATERIALS AND METHODS</title>
<sec>
<title>PARTICIPANTS</title>
<p>Fourteen adult volunteers (six female) participated in one or both of the two experiments that constituted this study. Subjects were 19 to 31 years of age (median, 24 years). In each experiment, some participants served as subjects, who viewed the stimuli and reported their percepts, and others served as evaluators, who scored the subjects’ reported percepts. No one who served as a subject in either experiment also served as an evaluator in either experiment, or
<italic>vice versa</italic>
. All subjects had normal or corrected-to-normal vision, with no known neurological or psychiatric disorders.</p>
<p>Experiment 1 involved six subjects and two evaluators; Experiment 2 involved eight subjects and two evaluators. All participants gave informed consent prior to participating in the study. All protocols were approved in advance by the Human Assurance Committee of Georgia Regents University, where this study was carried out.</p>
</sec>
<sec>
<title>VISUAL STIMULATION</title>
<p>In Experiment 1, 50 different real-world photographs from the Corel Stock Photo Library (Corel Corporation, Ottawa, ON, Canada) were used as visual stimuli (see, e.g.,
<bold>Figures
<xref ref-type="fig" rid="F1">1</xref>
</bold>
and
<bold>
<xref ref-type="fig" rid="F3">3</xref>
</bold>
). Subjects sat comfortably approximately 30 cm in front of a computer monitor in a normally lit room (ambient luminance, 14.6 cd/m
<sup>2</sup>
). Each trial started when the subject indicated readiness by pressing a button. The visual stimulus was presented for 50 ms or 17 ms, depending on the condition, followed by a random dot mask (
<bold>Figure
<xref ref-type="fig" rid="F1">1</xref>
</bold>
). These two stimulus durations correspond to three and one frame durations, respectively, of the computer monitor at a screen refresh rate of 60 Hz (one frame lasts approximately 16.7 ms).</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>
<bold>Workflow of SDR.</bold>
The three main steps of SDR, which involve obtaining, scoring, and analyzing the subject’s reports, respectively, are shown. Note that the subject, the evaluator, and the experimenter play the most prominent roles in Steps 1, 2, and 3, respectively. The two sub-steps of Step 2, in which the evaluator scores the subject’s reported percept, are illustrated here using a hypothetical exemplar image (not shown) in which one of the image elements is a dog. Sub-steps 2a and 2b are repeated for each image element in each image (not shown). See text for details.</p>
</caption>
<graphic xlink:href="fpsyg-05-00160-g001"></graphic>
</fig>
<p>Trials were presented in a pseudo-random order. To minimize the contribution of stimulus repetition to the subject’s reports (
<xref rid="B41" ref-type="bibr">Ochsner et al., 1994</xref>
;
<xref rid="B36" ref-type="bibr">Maxfield, 1997</xref>
;
<xref rid="B13" ref-type="bibr">Gauthier, 2000</xref>
;
<xref rid="B25" ref-type="bibr">Henson, 2003</xref>
;
<xref rid="B26" ref-type="bibr">Holcomb and Grainger, 2007</xref>
;
<xref rid="B28" ref-type="bibr">Kristjansson and Campana, 2010</xref>
), we ensured that the 50 ms viewing of a given stimulus preceded its 17 ms viewing within the pseudo-random sequence of trials.</p>
<p>The stimulus subtended 9° × 6° (for landscape-format pictures; the reverse for portrait-format pictures), had an average luminance of 30.2 cd/m
<sup>2</sup>
, and was presented against a uniform gray screen of the same mean luminance. The mask had the same average luminance and subtended 9° × 9°. Following the mask, the subject had unlimited time to orally describe, in his/her own words and with no prompting or feedback, what he/she saw in the visual stimulus. The description was audio-recorded.</p>
<p>Experiment 2 was identical to Experiment 1 except that it used a different, non-overlapping set of 50 images, and a different, but partially overlapping set of subjects and evaluators.</p>
</sec>
<sec>
<title>RATIONALE BEHIND SDR</title>
<p>Semantic descriptor ranking takes advantage of the fact that our semantic understanding, and therefore the reported percept, of visual objects tends to have a naturally hierarchical structure: A large number of previous studies have shown that our understanding of real-world objects generally (although not always, see Discussion) follows a hierarchical pattern of categories (
<xref rid="B49" ref-type="bibr">Rosch, 1973</xref>
;
<xref rid="B22" ref-type="bibr">Hanson and Hanson, 2005</xref>
;
<xref rid="B24" ref-type="bibr">Hegdé, 2008</xref>
). For instance, a particular pet dog named “Spike” can be thought of, in order of increasingly fine categorization, as an object, an animal, a mammal, a dog, a retriever, a Golden retriever, and finally as a particular dog named Spike. This hierarchical organization lends itself to ranking, so that the above descriptors can be rank-ordered, in increasing order of specificity, as object < animal < mammal < dog < retriever < Golden retriever < Spike. Similarly, “brown dog” can reasonably be considered a more specific, and therefore higher-ranking, description than “dog.” These ranks can be analyzed using established rank-based statistical methods.</p>
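To make the ranking principle concrete, here is a minimal sketch in Python (not part of the original study; the hierarchy, function name, and ranks are hypothetical) that rank-orders descriptors against a hand-coded category hierarchy:

# Minimal sketch of the SDR ranking principle (hypothetical hierarchy).
# In practice, each evaluator supplies his/her own descriptors and relative ranks.
HIERARCHY = ["object", "animal", "mammal", "dog",
             "retriever", "golden retriever", "spike"]

def descriptor_rank(descriptor):
    """Return the rank of a descriptor: larger values are more specific."""
    return HIERARCHY.index(descriptor.lower())

# "retriever" is a finer-level description than "dog", so it ranks higher.
assert descriptor_rank("dog") < descriptor_rank("retriever")
assert descriptor_rank("retriever") < descriptor_rank("Golden retriever")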
<p>Given that ranking semantic “tags,” or descriptors, is central to our method, we refer to it as SDR. We use the term “semantic descriptor” to mean a word or phrase (i.e., a verbal “tag”) that describes a given object, to distinguish it from the term “[image] descriptor” commonly used in machine vision, which generally refers to various lower-level properties of the image, such as color, texture, or local shape (
<xref rid="B32" ref-type="bibr">Lowe, 1999</xref>
;
<xref rid="B37" ref-type="bibr">Mikolajczyk and Schmid, 2001</xref>
;
<xref rid="B5" ref-type="bibr">Belongie et al., 2002</xref>
;
<xref rid="B35" ref-type="bibr">Manjunath et al., 2002</xref>
;
<xref rid="B46" ref-type="bibr">Pollefeys and Gool, 2002</xref>
;
<xref rid="B29" ref-type="bibr">Lazebnik et al., 2005</xref>
;
<xref rid="B52" ref-type="bibr">Snavely et al., 2006</xref>
).</p>
</sec>
<sec>
<title>IMPLEMENTATIONS OF SDR: VARIATIONS OF A THEME</title>
<p>A typical implementation of SDR would consist of the following three steps, in order (see
<bold>Figure
<xref ref-type="fig" rid="F1">1</xref>
</bold>
; also see below): (1) Subjects freely view pictures of real-world scenes and describe in their own words what they see. (2) A set of independent evaluators examine each subject’s reports and rank the descriptors according to how specific the descriptors are. Since each descriptor will be assigned a rank score, the report as a whole will typically consist of multiple rank scores. Collectively, these rank scores are a numeric representation of the verbal report. (3) The experimenters analyze the numeric representations using conventional statistical methods.</p>
<p>Note that a large number of variations on the above theme are possible; one can customize SDR for a given purpose by appropriately varying one or more of the above three steps. Indeed, the only two crucial requirements of SDR are that (a) the reports be verbal (i.e., spoken or written), and (b) the image or scene underlying the report be available for independent evaluation (i.e., the evaluator be able to see what the subject is seeing).</p>
<p>With these minimum requirements met, one can create a numeric representation of a given perceptual report of interest (“query representation”) and appropriately compare it to a reference of some sort. Note that this reference can be arrived at by any of a large number of possible principled methods. For instance, the reference representation can be obtained from the same subject viewing the same image under a different viewing condition (e.g., a different stimulus duration, see below). For a hemineglect patient, for instance, the query and reference representations can be obtained using stimulus presentations in the affected and spared hemifields, respectively. Each of these instances makes for a two-sample, within-subject paired design, where the query and reference representations constitute the two samples. Alternatively, one can use a one-sample design, where the query representation from one subject is compared against an existing reference sample obtained from, say, a large number of other subjects. Note that the query and/or reference representations can, in principle, be obtained using machine vision algorithms rather than human subjects (see Discussion).</p>
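For the two-sample, within-subject paired design just described, the two numeric representations can be compared with any standard rank-based test. The report does not prescribe a specific test; the sketch below (Python with SciPy; the scores are invented for illustration) assumes a Wilcoxon signed-rank test on paired query vs. reference rank scores.

# Hypothetical per-image rank scores for one subject under two viewing
# conditions; the query condition is expected to yield higher scores.
from scipy.stats import wilcoxon

query_scores     = [11, 12, 10, 13, 11, 12, 10, 12]  # e.g., longer viewing
reference_scores = [10, 10,  9, 11, 10, 11,  9, 10]  # e.g., briefer viewing

stat, p_value = wilcoxon(query_scores, reference_scores)
print(f"Wilcoxon signed-rank statistic = {stat}, p = {p_value:.3f}")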
</sec>
</sec>
<sec>
<title>RESULTS</title>
<sec>
<title>AN ILLUSTRATIVE IMPLEMENTATION AND PROOF OF PRINCIPLE OF SDR</title>
<p>We will illustrate the use of SDR with a two-sample, within-subject paired design that compared each subject’s verbal reports on the same set of images at two different stimulus durations. This design exploits the well-established fact that, in general, longer viewing of visual stimuli elicits finer-grained perception than briefer viewing (
<xref rid="B53" ref-type="bibr">Sugase et al., 1999</xref>
;
<xref rid="B31" ref-type="bibr">Liu et al., 2002</xref>
;
<xref rid="B19" ref-type="bibr">Grill-Spector and Kanwisher, 2005</xref>
;
<xref rid="B24" ref-type="bibr">Hegdé, 2008</xref>
; also see Discussion).</p>
<p>We carried out two experiments. Experiment 1 compared the reports elicited by viewing the same set of real-world scenes for long vs. brief durations (50 ms vs. 17 ms, respectively; see Materials and Methods for details). Using SDR, it tested the hypothesis that the responses elicited by the 50 ms viewing would collectively have higher rankings than the responses elicited by the 17 ms viewing.</p>
</sec>
<sec>
<title>STEP 1: OBTAINING QUALITATIVE REPORTS FROM THE SUBJECTS</title>
<p>Subjects viewed natural images, one per trial, presented for either 50 ms or 17 ms, depending on the trial (
<bold>Figure
<xref ref-type="fig" rid="F2">2</xref>
</bold>
; see Methods for details). After a brief mask, the subjects were allowed unlimited time to describe, in their own words and
<italic>ad libitum</italic>
, what they saw in the stimulus. The subjects’ reports were audio-recorded.</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption>
<p>
<bold>Trial paradigm used in our implementation of Step 1.</bold>
Each trial started when the subject indicated readiness. The visual stimulus (a real-world image) was presented for 17 ms or 50 ms, depending on the trial. To minimize the contribution of stimulus repetition to the subject’s reports, each given image was presented for the longer duration first, as described in Materials and Methods. After the 100 ms mask, the subject was allowed unlimited time to describe, in his/her own words, what he/she perceived in the stimulus. The figure is not drawn to scale.</p>
</caption>
<graphic xlink:href="fpsyg-05-00160-g002"></graphic>
</fig>
<p>Each subject viewed each image twice, first for the longer stimulus duration and then for the shorter duration, in blocks of randomly interleaved trials (see Methods). The rationale for always presenting the longer duration first was to minimize the contributions of priming/exposure effects, whereby a previously viewed stimulus tends to elicit better recognition during subsequent viewing (
<xref rid="B41" ref-type="bibr">Ochsner et al., 1994</xref>
;
<xref rid="B36" ref-type="bibr">Maxfield, 1997</xref>
;
<xref rid="B13" ref-type="bibr">Gauthier, 2000</xref>
;
<xref rid="B25" ref-type="bibr">Henson, 2003</xref>
;
<xref rid="B26" ref-type="bibr">Holcomb and Grainger, 2007</xref>
;
<xref rid="B28" ref-type="bibr">Kristjansson and Campana, 2010</xref>
). Note that this meant that the priming/exposure effects would actually tend to counteract, i.e., reduce, the expected increase in rankings upon longer stimulus viewing. Thus, our method would have to find duration-dependent effects, if any, over and above the counteracting effects of priming.</p>
</sec>
<sec>
<title>STEP 2. INDEPENDENT RANKING OF THE SUBJECT’S REPORTS BY EVALUATORS</title>
<p>This step essentially consisted of the ranking, by each evaluator separately, of the descriptors used by the subjects in their oral reports. This is the crucial step of SDR, in which the subjects’ qualitative reports are converted into quantitative measures.</p>
<p>Before the evaluations began, the evaluators received extensive training in the relevant procedures. In addition to the routine scoring procedures (outlined in
<bold>Figures
<xref ref-type="fig" rid="F1">1</xref>
</bold>
and
<bold>
<xref ref-type="fig" rid="F3">3</xref>
</bold>
), we devised a set of somewhat arbitrary, but principled, evaluation rules for handling special cases (some of which are shown in
<bold>Table
<xref ref-type="table" rid="T1">1</xref>
</bold>
) in order to help ensure that these cases were handled as consistently as possible. Note that the evaluation rules can be customized for each given application of SDR. Note also that it is possible, in principle, to write computer programs to automate the evaluation process.</p>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption>
<p>
<bold>Two instances of implementation of SDR steps 1 and 2.</bold>
Panels
<bold>A</bold>
through
<bold>C</bold>
show the scoring of one set of verbal reports, and panels
<bold>D</bold>
through
<bold>F</bold>
show the scoring of a second, independent set.
<bold>(A)</bold>
Stimulus. The subject viewed the stimulus for 50 ms and for 17 ms in randomly interleaved trials, so that the subject’s reports were paired across the experiment. Note that even though each subject viewed the same stimulus twice, the longer viewing duration always preceded the shorter one, so as to counteract priming effects, if any (see text for details).
<bold>(B)</bold>
Percepts of the stimulus in panel
<bold>A</bold>
as reported by a subject for each of the two stimulus durations.
<bold>(C)</bold>
Scoring of the subject’s reports in panel
<bold>B</bold>
by the evaluator. Note that, although the subject’s descriptions of the building were spread over multiple sentences, the evaluator grouped them together into a single descriptor, in accordance with the scoring rules. Columns corresponding to various image elements are highlighted in different colors solely to enhance visibility.
<bold>(D–F)</bold>
Scoring of a different pair of reports.</p>
</caption>
<graphic xlink:href="fpsyg-05-00160-g003"></graphic>
</fig>
<p>Each subject’s reports were scored by multiple evaluators independently of each other and of the subject. The scoring process consisted of two sub-steps (
<bold>Figures
<xref ref-type="fig" rid="F1">1</xref>
</bold>
and
<bold>
<xref ref-type="fig" rid="F3">3C,F</xref>
</bold>
): Sub-step 2a consisted of the evaluator’s own offline analysis of each image prior to evaluating the subject’s reports, in which each evaluator viewed each given image
<italic>ad libitum</italic>
, and wrote down as many semantic descriptors as he/she thought were needed to capture what was in the image. Each descriptor was assigned an arbitrary baseline value of 10. It is important to emphasize that the absolute value of the baseline score (or of the other scores, for that matter; see below) is unimportant; any value that allows sufficient room for deductions and bonuses (i.e., sufficient spread from the baseline) will suffice. That is, what matters in our particular implementation of SDR are the
<italic>relative</italic>
scores, rather than the
<italic>absolute</italic>
scores, since our implementation ultimately uses rank statistics (see Discussion for other implementation options). For the same reason, the absolute hierarchical level of the descriptor (“Dog” vs. “Golden retriever”) that a given evaluator comes up with [which, among other things, depends on the expertise of the evaluator (
<xref rid="B49" ref-type="bibr">Rosch, 1973</xref>
;
<xref rid="B44" ref-type="bibr">Palmeri and Gauthier, 2004</xref>
); also see Discussion] does not matter in the present context either.</p>
<p>In Sub-step 2b, the evaluators listened to the audio recording of the subject’s perceptual report of the same stimulus, and scored the subject’s descriptions of the image
<italic>relative to the evaluator’s image descriptors from Sub-step 2a</italic>
according to a set of pre-specified rules (see
<bold>Table
<xref ref-type="table" rid="T1">1</xref>
</bold>
). If the subject’s descriptor was deemed to be essentially the same as the corresponding descriptor of the evaluator (e.g., “dog”), the subject’s report for the given image descriptor was also assigned the baseline value.</p>
<table-wrap id="T1" position="float">
<label>Table 1</label>
<caption>
<p>Selected special case rules.</p>
</caption>
<table frame="hsides" rules="groups" cellspacing="5" cellpadding="5">
<thead>
<tr>
<th valign="top" align="left" rowspan="1" colspan="1">Rules</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">1. Objects (i.e., nouns, such as “dog”) are primary descriptors, while adjectives/modifiers such as colors (e.g., “black”) are secondary descriptors. Descriptions with correct primary and secondary descriptors should receive higher ranking than descriptions with a correct primary descriptor but without a secondary descriptor.</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">2. If the primary descriptor is correct, but the secondary descriptor is wrong, award the appropriate points for the correct primary descriptor, and simply ignore the incorrect secondary descriptor, but do not deduct points for it.</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">For example, if the stimulus contains a red car, and the subject’s report describes a red car, then award plus a bonus point for the correct secondary identifier. But if the subject reports a blue car, simply take the bonus points away, but do not deduct from the point you were going to award for the correct primary descriptor. The reason for this rule is to ensure that, in the above case for instance, “blue car” does not receive fewer points than simply “car.”</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">3. Miss Rule. If an object is present in the image, but it is not reported, then award a score of 0 for that descriptor.</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">4. False Alarm Rule. If an object that is not present in the image is reported, then assess a penalty of -1. For example, if the subject reports a car when, in fact, there is no car in the picture, then the score should be reduced by 1. Also assess a penalty if an object is reported as something else entirely. For example, the image contains a tree and the subject reports a building instead of a tree then a penalty of -1 should be assessed.</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">5. If there is more than one object of the same kind (e.g., more than one person) award a bonus of +1 for each additional person recognized. However, there is no penalty if the subject does not report all the persons in the image. The following are just two examples and could apply for any type of objects.</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Example 1: An image contains three dogs, and the subject reports three dogs. The score should be 10 + 1 + 1 = 12: the default score of 10 for the first dog recognized, plus 1 point for each additional dog.</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Example 2: An image contains three dogs, but the subject reports only one dog. The report is still awarded the standard 10 points for recognizing a dog, with no penalty for not identifying the rest.</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">6. In those cases where the secondary descriptor is redundant with the primary descriptor (e.g., “blue sky,” “green grass”), do not award extra points for the secondary descriptor. When the secondary descriptor is not redundant (e.g., the stimulus contains brown grass), award bonus points for the correct secondary descriptor (in this case, “brown”).</td>
</tr>
</tbody>
</table>
</table-wrap>
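To make the operation of these rules concrete, the following Python sketch (not part of the original study) encodes the Table 1 special cases, assuming a baseline of 10 points per correctly reported object, the value used in the table's own examples; the dictionary fields and point values are illustrative only.

# Minimal sketch (not the authors' implementation) of the Table 1 special-case
# rules, assuming a baseline of 10 points per correctly reported object.

BASELINE = 10           # default score for a correct primary descriptor
EXTRA_OBJECT_BONUS = 1  # rule 5: +1 per additional object of the same kind
SECONDARY_BONUS = 1     # rule 1: bonus for a correct, non-redundant secondary descriptor
FALSE_ALARM = -1        # rule 4: penalty for reporting an absent object

def score_object(present, reported):
    """Score one object category for a single image.

    present  -- dict describing the ground truth, e.g.
                {"count": 3, "secondary": "brown", "secondary_redundant": False},
                or None if the object is absent from the image.
    reported -- dict describing the subject's report, e.g.
                {"count": 1, "secondary": "brown"}, or None if not reported.
    """
    if present is None:
        # Rule 4 (False Alarm): object reported but not in the image.
        return FALSE_ALARM if reported is not None else 0
    if reported is None:
        # Rule 3 (Miss): object present but not reported.
        return 0
    score = BASELINE
    # Rule 5: bonus for each additional object of the same kind reported.
    extra = max(0, min(reported.get("count", 1), present.get("count", 1)) - 1)
    score += extra * EXTRA_OBJECT_BONUS
    # Rules 1, 2, 6: bonus for a correct, non-redundant secondary descriptor;
    # an incorrect secondary descriptor is simply ignored (no deduction).
    if (reported.get("secondary") == present.get("secondary")
            and not present.get("secondary_redundant", False)):
        score += SECONDARY_BONUS
    return score

# Example from Table 1, rule 5: three dogs present, three reported -> 10 + 1 + 1 = 12.
print(score_object({"count": 3}, {"count": 3}))  # 12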
<p>If the subject’s description was more specific (“Golden retriever”) than that of the evaluator, the subject’s description was assigned a correspondingly higher score. The exact decrement or increment of the score was up to the evaluator, but he/she was required to be consistent about it across subjects. For instance, “Golden retriever” can be reasonably considered one or two ranks higher in terms of the level of categorization than “Dog”, depending on whether the evaluator recognizes an intermediate category of “Retriever.” Similarly, if the subject’s descriptor was less specific (“animal”) than the evaluator’s corresponding descriptor (“dog”), the subject’s report was given a correspondingly lower score.</p>
<p>If the subject failed to report a given object altogether, the given image descriptor was assigned a value of 0 (“Miss Rule” in
<bold>Figure
<xref ref-type="fig" rid="F1">1</xref>
</bold>
; also see
<bold>Table
<xref ref-type="table" rid="T1">1</xref>
</bold>
). If the subject misidentified an image element (e.g., when a Golden retriever was identified as “Border Collie”), the subject was penalized one or more points according to the hierarchical level of the reported identifier (“False Alarm Rule”). That is, the subject was awarded the appropriate score for having recognized that it was a dog, and was then penalized 1 point for misidentifying the breed. Note that while this is a somewhat arbitrary rule, it is also principled, and has considerable precedent (
<xref rid="B17" ref-type="bibr">Green and Swets, 1974</xref>
;
<xref rid="B14" ref-type="bibr">Geissler et al., 1992</xref>
). Note, in any event, that the drawbacks of our implementation, such as they may be, are not the drawbacks of SDR
<italic>per se</italic>
. Investigators can take advantage of the basic SDR principle but nonetheless devise their own set of implementation rules.</p>
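As one illustration of the relative ranking just described, the sketch below scores a subject’s descriptor against the evaluator’s baseline descriptor using a hypothetical four-level hierarchy (animal, dog, retriever, golden retriever); the one-point-per-level increment, the baseline of 10 points, and the one-point misidentification penalty are assumptions of this sketch, not prescriptions of SDR.

# Sketch (under assumed point values) of scoring a subject's descriptor relative
# to the evaluator's baseline descriptor, using a hypothetical category hierarchy.

HIERARCHY = ["animal", "dog", "retriever", "golden retriever"]  # coarse -> fine
BASELINE = 10

def relative_score(evaluator_descriptor, subject_descriptor, misidentified=False):
    """Baseline points plus/minus one point per level of specificity relative to
    the evaluator's descriptor; minus one point for a misidentification
    (e.g., 'Border Collie' reported for a Golden retriever)."""
    diff = HIERARCHY.index(subject_descriptor) - HIERARCHY.index(evaluator_descriptor)
    score = BASELINE + diff
    if misidentified:
        score -= 1  # False Alarm Rule applied to the misidentified level
    return score

print(relative_score("dog", "golden retriever"))              # finer-level report: 12
print(relative_score("dog", "animal"))                        # coarser report: 9
print(relative_score("dog", "retriever", misidentified=True)) # wrong breed named: 10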
<p>An actual image used in Experiment 1 is shown in
<bold>Figure
<xref ref-type="fig" rid="F3">3A</xref>
</bold>
. The reports of one subject after viewing it for 50 ms and 17 ms are shown in Figure 3B. The corresponding scores that a typical evaluator assigned to the two reports using our scoring method are shown in
<bold>Figure
<xref ref-type="fig" rid="F3">3C</xref>
</bold>
). Note that, as expected, the report for the longer duration elicited ratings equal to or better than the baseline scores for all image identifiers, whereas with the shorter duration, the subject missed a few image identifiers. Thus, the scoring method did reveal that longer viewing produced a finer-grained percept of the image.
<bold>Figures
<xref ref-type="fig" rid="F3">3D–F</xref>
</bold>
illustrate another subject’s reports of a different image and the corresponding scores assigned by a different evaluator. The scores were lower for the shorter image duration, because the subject misidentified the snowy background in this case.</p>
<p>Finally, to help account for individual differences across subjects and evaluators, we repeated the first three steps independently across multiple subjects and evaluators (
<xref rid="B48" ref-type="bibr">Reynolds and Willson, 1985</xref>
;
<xref rid="B39" ref-type="bibr">Miles and Huberman, 1994</xref>
;
<xref rid="B38" ref-type="bibr">Milberg et al., 1996</xref>
;
<xref rid="B47" ref-type="bibr">Poreh, 2000</xref>
;
<xref rid="B42" ref-type="bibr">Ogden-Epker and Cullum, 2001</xref>
). Some of the representative results are shown in
<bold>Figure
<xref ref-type="fig" rid="F4">4A</xref>
</bold>
in a color-coded format. In general, subjects’ reports for the longer stimulus duration elicited larger scores than their reports of the same image for the shorter viewing duration, as denoted by the fact that there were a greater number of descriptors and more of the descriptors had higher-than-baseline values (i.e., greener cells) in
<bold>Figure
<xref ref-type="fig" rid="F4">4A</xref>
</bold>
.</p>
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption>
<p>
<bold>Comparison of subjects’ reports for the short (17 ms) and long (50 ms) stimulus durations.</bold>
Subjects viewed images for either stimulus duration, and reported their percepts using the paradigm illustrated in
<bold>Figure
<xref ref-type="fig" rid="F2">2</xref>
</bold>
. Subjects’ reports were scored by an evaluator as illustrated in
<bold>Figures
<xref ref-type="fig" rid="F1">1</xref>
</bold>
and
<bold>
<xref ref-type="fig" rid="F3">3</xref>
</bold>
, and the resulting scores are shown in this figure in color-coded fashion. Panels
<bold>A</bold>
through
<bold>C</bold>
show data from three different, representative data sets, each obtained using the same set of images. Each row in each panel shows the numeric representation of a single verbal report. Rows are matched across panels
<bold>A–C</bold>
, so that, for instance, row 7 in each panel denotes reports of the same image by different subjects and/or during different sessions. Each column denotes a different descriptor. Note that the columns are not necessarily matched across panels
<bold>A–C</bold>
(although they’re exactly matched within each panel), because the subjects did not necessarily describe the same set of image elements even though the underlying images were the same. The order of rows or columns has no particular meaning, so that the only meaningful comparison is between paired cells within each data set. All data are rendered on a black background according to the color scale at top right. White cells denote image elements for which the subject used the same descriptor as the baseline descriptor set by the given evaluator. Green and red hues denote image descriptors that were, respectively, more specific or less specific than the evaluator’s baseline descriptors. Gray cells denote the descriptors the subject used during one viewing, but omitted during the other.
<bold>(A)</bold>
Reports of Subject 00 as scored by Evaluator 02. Since this particular subject reported ≤9 descriptors for any given image, there are nine columns in this data set.
<bold>(B)</bold>
and
<bold>(C)</bold>
denote the reports of Subject 01 in two successive, duplicate sessions 9 days apart, as scored by same evaluator (Evaluator 01). See text for details.</p>
</caption>
<graphic xlink:href="fpsyg-05-00160-g004"></graphic>
</fig>
</sec>
<sec>
<title>STEP 3:
<italic>Post hoc</italic>
STATISTICAL ANALYSES OF THE EVALUATORS’ SCORES</title>
<p>We compared the numerical scores produced by the evaluators across the two stimulus durations using the conventional paired two-sample Mann–Whitney test. As noted above, based on previous studies, we expect
<italic>a</italic>
<italic>priori</italic>
that longer stimulus durations produce finer-grained percepts (
<xref rid="B31" ref-type="bibr">Liu et al., 2002</xref>
;
<xref rid="B19" ref-type="bibr">Grill-Spector and Kanwisher, 2005</xref>
;
<xref rid="B57" ref-type="bibr">Xu et al., 2005</xref>
; but see
<xref rid="B33" ref-type="bibr">Mack et al., 2008</xref>
,
<xref rid="B34" ref-type="bibr">2009</xref>
). For each of the three subjects and either evaluator, 50 ms viewing of the images did elicit significantly finer-level categorization (one-tailed paired Mann–Whitney,
<italic>p</italic>
< 0.05 in all cases).</p>
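For readers wishing to reproduce this kind of comparison, the following sketch runs a one-tailed, paired rank comparison on placeholder scores using SciPy, which exposes the paired rank test as the Wilcoxon signed-rank test; the arrays are stand-ins, not the study’s data.

# Illustrative only: a one-tailed, paired rank comparison of scores for the two
# stimulus durations; the per-image score arrays below are placeholders.
import numpy as np
from scipy import stats

scores_50ms = np.array([12, 11, 10, 13, 10, 12, 11, 10])  # hypothetical per-image scores
scores_17ms = np.array([10,  9, 10, 11,  8, 10, 10,  9])

stat, p = stats.wilcoxon(scores_50ms, scores_17ms, alternative="greater")
print(f"paired rank test: statistic={stat}, one-tailed p={p:.4f}")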
<p>To test the reproducibility of the results, we re-tested one subject after a 9-day delay, so as to minimize priming or other memory-related effects from the first session. The scores of the two sessions are shown in
<bold>Figures
<xref ref-type="fig" rid="F4">4B,C</xref>
</bold>
. The scores were statistically indistinguishable between the two sessions (2-way ANOVA, session × stimulus duration;
<italic>p</italic>
< 0.05 for stimulus duration factor and
<italic>p</italic>
> 0.05 for session and interaction factors).</p>
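A minimal sketch of the session × stimulus-duration ANOVA is given below, using statsmodels on a placeholder data frame with the same two factors; the scores are invented for illustration and do not reproduce the reported result.

# Sketch of the session x stimulus-duration two-way ANOVA using statsmodels.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "score":    [12, 10, 11, 9, 13, 10, 12, 11],
    "session":  ["s1", "s1", "s1", "s1", "s2", "s2", "s2", "s2"],
    "duration": ["50ms", "17ms", "50ms", "17ms", "50ms", "17ms", "50ms", "17ms"],
})

model = ols("score ~ C(session) * C(duration)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # p-values for session, duration, interaction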
<p>The scores were also consistent between the two evaluators across all datasets (Cronbach’s alpha test, α = 0.87; data not shown). Thus, the scores did not significantly depend on the particular evaluator used.</p>
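SciPy provides no ready-made routine for Cronbach’s alpha, so the sketch below computes it directly from its standard definition, treating each evaluator’s score vector as one “item”; the two score vectors are placeholders, not the study’s data.

# Sketch of inter-evaluator consistency via Cronbach's alpha.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = items (here, evaluators), columns = observations."""
    items = np.asarray(items, dtype=float)
    k = items.shape[0]
    item_vars = items.var(axis=1, ddof=1).sum()
    total_var = items.sum(axis=0).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

evaluator_1 = [12, 10, 11, 9, 13, 10]
evaluator_2 = [11, 10, 12, 9, 12, 11]
print(f"alpha = {cronbach_alpha([evaluator_1, evaluator_2]):.2f}")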
<p>A principled validation method for the scoring algorithm is to test whether the scores can predict the corresponding stimulus condition. The underlying rationale is that if the numerical scores of the evaluators reliably reflect the reports, and the reports in turn are a reliable reflection of the stimulus duration, then it should be possible to predict the stimulus duration based on the corresponding scores. We found this to be true for all six data sets (Spearman rank correlation,
<italic>r</italic>
≥ 0.67,
<italic>df</italic>
= 49 for all six data sets; data not shown), indicating that the scores reliably reflect the underlying stimulus conditions.</p>
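The prediction check can be sketched as a Spearman rank correlation between the scores and the (numerically coded) stimulus durations; the values below are placeholders for illustration only.

# Sketch of the "predict the stimulus condition from the scores" check.
from scipy import stats

scores    = [12, 10, 11, 9, 13, 8, 12, 10]
durations = [50, 17, 50, 17, 50, 17, 50, 17]  # ms, coded per trial

rho, p = stats.spearmanr(scores, durations)
print(f"Spearman r = {rho:.2f}, p = {p:.4f}")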
<p>We obtained qualitatively similar results in Experiment 2, which used a different set of images, subjects, and evaluators than those used in Experiment 1 (data not shown). Together, the results of the two experiments indicate that our results are not idiosyncratic to the particular stimuli, subjects, and evaluators used. Our results also indicate that SDR is a sensitive technique that can detect relatively subtle differences in visual perception, given that the difference in stimulus durations was relatively small (17 ms vs. 50 ms).</p>
</sec>
</sec>
<sec>
<title>DISCUSSION</title>
<sec>
<title>STRENGTHS AND POTENTIAL APPLICATIONS OF SDR</title>
<p>The main novelty of SDR is that it is a method for numerically representing verbal descriptions of the ground truth (in the present case, the visual images). To our knowledge, methods to do this simply do not exist at present. Note that, reduced to its essentials, SDR requires only that the ground truth that the verbal account describes be available for independent evaluation. Given the simplicity of its requirements, SDR is potentially applicable to a wide variety of real-world applications in which qualitative, verbal descriptions of real-world experiences need to be quantified (see below).</p>
<p>Our experimental results demonstrate that SDR is useful for quantifying qualitative reports of visual scenes. Although we illustrate the method by varying stimulus duration, we expect the method to be applicable to any case in which subjective experiences, visual or otherwise, are verbally reported by normal subjects or patients, as long as the ground truth that elicited the experience can be independently evaluated. It also stands to reason that second-hand reports of percepts, such as a clinical provider’s verbal observations of the patient’s behavior, can be similarly quantified using the same underlying principles.</p>
<p>Three main strengths of SDR are particularly worth noting. First, it places very few constraints on the patients (or subjects), in that it allows patients to view the stimuli freely and naturally, and describe their percepts in their own words. This allows the researcher, clinician, or machine learning algorithm to evaluate the subject/patient in a setting that is natural and minimally stressful. In this sense, our method is different from other methods of quantifying qualitative data, which generally require streamlining or formatting of the qualitative data, e.g., using questionnaires or forms (for reviews, see
<xref rid="B55" ref-type="bibr">Udupa, 1999</xref>
;
<xref rid="B2" ref-type="bibr">Auerbach and Silverstein, 2003</xref>
;
<xref rid="B20" ref-type="bibr">Gustafson and McCandless, 2010</xref>
;
<xref rid="B51" ref-type="bibr">Sauro and Lewis, 2012</xref>
;
<xref rid="B4" ref-type="bibr">Bazeley, 2013</xref>
). Second, SDR can, in principle, preserve much of the richness of the verbal reports, depending on the rules and algorithms used for evaluating the reports. Note also that the scores need not necessarily be integer rank scores; it should be possible, in principle, to develop algorithms for assigning fractional scores that treat the underlying descriptors as values of a continuous variable, rather than of a discrete or categorical variable. Third, as noted above, this method is likely to be flexible and versatile, with a broad array of potential applications, given that its requirements are ultimately minimal,
<italic>viz</italic>
., a verbal description and the ground truth that elicited the description. For this reason, SDR should be applicable to a wide variety of stimuli (including drawings, photographs, or videos, and non-visual stimuli such as sounds and haptic objects), and to whichever aspect of the stimulus is perceived (such as some affective aspect of the stimulus, the texture of an object, the origin of a sound, etc.). Thus, a pollster using focus groups to evaluate the impact of a political or commercial advertisement can use the same set of SDR principles as an ophthalmologist or neurologist evaluating a patient’s deficits in one or more of the senses, an educator testing students, or a recruiter testing applicants’ aptitude to comprehend complex real-world situations.</p>
<p>It is worth noting that, as alluded to in the Results section, machine learning methods can be devised to carry out the aforementioned steps 2 (independent evaluation of the subjects’ reports) and 3 (
<italic>post hoc</italic>
statistical analyses of the evaluators’ scores) of SDR. This would make the given implementation of SDR more objective by removing the contribution of the evaluators’ subjectivity from the process. In addition, our method has potential applications to machine learning itself, because it allows machines to process language using a numerical representation thereof. To our knowledge, methods to do this do not currently exist either.</p>
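As one hedged illustration of how a machine evaluator might be built, the sketch below uses the depth of a descriptor in the WordNet hypernym hierarchy (via NLTK) as a rough proxy for its specificity rank. This is purely our illustration of the idea, not part of the published method, and it assumes the WordNet corpus has been downloaded locally.

# Illustration only (not part of the published method): approximate the
# specificity rank of a descriptor by its depth in WordNet.
# Requires: nltk.download("wordnet")
from nltk.corpus import wordnet as wn

def specificity(word):
    """Depth of the word's first noun sense in the WordNet hierarchy
    (larger = more specific)."""
    synsets = wn.synsets(word.replace(" ", "_"), pos=wn.NOUN)
    return synsets[0].min_depth() if synsets else None

for term in ["animal", "dog", "retriever", "golden retriever"]:
    print(term, specificity(term))
# Depth increases from "animal" to "golden retriever", mirroring the SDR ranking.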
</sec>
<sec>
<title>SOME IMPORTANT CAVEATS AND POTENTIAL FUTURE IMPROVEMENTS</title>
<p>There are four caveats that are particularly important to note. First, as noted earlier, our results only provide a “proof of concept,” and do not, by themselves, fully validate this method. In order to validate SDR, one needs to show that SDR independently produces essentially the same results as those obtained by a different, established method (for reviews, see
<xref rid="B56" ref-type="bibr">Willig and Stainton-Rogers, 2008</xref>
;
<xref rid="B8" ref-type="bibr">Denzin and Lincoln, 2011</xref>
;
<xref rid="B30" ref-type="bibr">Lezak, 2012</xref>
). SDR also needs to be standardized for each intended purpose. For instance, the conditions under which it yields the most reliable results for a given purpose (e.g., evaluating hemianopsia patients) remain to be delineated. The scoring rules also need to be further developed and standardized. Standardizing and cross-validating SDR will also help further delineate its strengths, weaknesses, potential applications, and limitations. Note that the fact that SDR needs to be developed and refined further before it can be used in real-world applications does not by itself undermine the value of the underlying concept. After all, test development is necessarily an iterative process; any testing method has to undergo the aforementioned development process (
<xref rid="B6" ref-type="bibr">Brennan et al., 2006</xref>
;
<xref rid="B9" ref-type="bibr">Downing and Haladyna, 2006</xref>
;
<xref rid="B45" ref-type="bibr">Phelps, 2007</xref>
;
<xref rid="B18" ref-type="bibr">Gregory, 2010</xref>
).</p>
<p>Second, SDR is meant not to supplant, but rather to supplement, the existing qualitative and quantitative methods. This caveat is particularly important in view of the fact that this method is yet to be tested extensively, and its strengths and weaknesses empirically documented. Specifically, it should be noted that SDR is by no means a universally applicable method for quantifying qualitative reports, especially in cases where the underlying descriptors may not be reliably rank-ordered, e.g., in educational research (
<xref rid="B23" ref-type="bibr">Hartas, 2010</xref>
;
<xref rid="B54" ref-type="bibr">Torrance, 2010</xref>
;
<xref rid="B21" ref-type="bibr">Haghi and Rocci, 2013</xref>
). Moreover, as alluded to in the Results section, a verbal report, however indirect, is a prerequisite of SDR.</p>
<p>Third, the numerical scores of the evaluators are meant to be used in statistical tests that compare the relative values, not the absolute values, of the scores, such as rank-order or rank-sum tests. This is because our tests do not correct for the criterion level of the individual evaluator, e.g., whether a given evaluator may tend to score the reports “generously.” Using the relative values of the scores tends to correct for this, although only to the extent that a given evaluator’s criterion remains unchanged across the relevant dataset. To correct for these criterion effects, and to obviate the need for rank-based statistics, one can average over a large number of randomly chosen evaluators. For instance, one can create a large database of reports and scores for each given set of stimuli that can be used as a reference distribution to correct for any deviations from the norm. Note, incidentally, that having such a database also obviates the need to carry out paired statistics or even two-sample statistics, because the researcher can always compare a given single sample, e.g., a given subject’s reports for a stimulus duration of 17 ms, against a standard reference distribution of reports for that duration.</p>
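The reference-distribution idea can be sketched as follows: a single subject’s score is located within a database of scores for the same stimulus duration, using a percentile rather than a paired or two-sample test. The reference distribution below is randomly simulated for illustration; no such database exists in the present study.

# Sketch of comparing one subject's score against a (simulated) reference
# distribution of scores for the same stimulus duration.
import numpy as np
from scipy import stats

reference_17ms = np.random.default_rng(0).normal(loc=9.5, scale=1.5, size=500)  # stand-in database
subject_score = 12.0

pct = stats.percentileofscore(reference_17ms, subject_score)
print(f"Subject's score falls at the {pct:.1f}th percentile of the reference distribution")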
<p>Finally, SDR is based on the hierarchical nature of object percepts, and therefore is not currently suited to evaluate percepts that are not hierarchical. This is especially true of affective percepts. However, by using a reference distribution as outlined above, one can extend our method to the assessment of non-hierarchical percepts.</p>
</sec>
<sec>
<title>RELATION TO PREVIOUS WORK</title>
<p>What is most novel about our approach is that it exploits the hierarchical organization of natural objects to generate an arbitrarily rich numeric representation of the reported visual percept. To the best of our knowledge, methods to do this simply do not exist at present. But other aspects of our method, including the use of independent evaluators, have been previously used in other studies of visual dysfunction as well as normal visual function (
<xref rid="B39" ref-type="bibr">Miles and Huberman, 1994</xref>
;
<xref rid="B16" ref-type="bibr">Glozman, 1999</xref>
;
<xref rid="B47" ref-type="bibr">Poreh, 2000</xref>
;
<xref rid="B58" ref-type="bibr">Zipf-Williams et al., 2000</xref>
;
<xref rid="B27" ref-type="bibr">Joy et al., 2001</xref>
;
<xref rid="B42" ref-type="bibr">Ogden-Epker and Cullum, 2001</xref>
;
<xref rid="B10" ref-type="bibr">Fei-Fei et al., 2007</xref>
). Having multiple evaluators independently score the subjects’ reports is effective, because it tends to average out random variance among evaluators while leaving intact non-random variance – that is, using independent evaluators helps achieve a measure of objectivity by way of shared subjectivity (
<xref rid="B24" ref-type="bibr">Hegdé, 2008</xref>
).</p>
<p>In the final analysis, the value of SDR is that it provides a novel approach to grappling with the breathtaking complexity and richness of our subjective visual experience. In this regard, it is of great potential utility in research, clinical, and machine vision contexts alike.</p>
</sec>
</sec>
<sec>
<title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<ack>
<p>This work was supported by a pilot grant from the James and Jean Culver Vision Discovery Institute of the Georgia Regents University to Jay Hegdé and Jeffrey Odel, and by the US Army Research Laboratory and the US Army Research Office grant W911NF-11-1-0105 to Jay Hegdé. We are grateful to our many colleagues, most especially Dr. Evgeniy Bart and Dr. Christie Palladino, who made helpful comments on various versions of this manuscript.</p>
</ack>
<ref-list>
<title>REFERENCES</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Atkinson</surname>
<given-names>A. P.</given-names>
</name>
<name>
<surname>Adolphs</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>The neuropsychology of face perception: beyond simple dissociations and functional selectivity.</article-title>
<source>
<italic>Philos. Trans. R. Soc. Lond. B Biol. Sci.</italic>
</source>
<volume>366</volume>
<fpage>1726</fpage>
<lpage>1738</lpage>
<pub-id pub-id-type="doi">10.1098/rstb.2010.0349</pub-id>
<pub-id pub-id-type="pmid">21536556</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Auerbach</surname>
<given-names>C. F.</given-names>
</name>
<name>
<surname>Silverstein</surname>
<given-names>L. B.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>
<italic>Qualitative Data : An Introduction to Coding and Analysis</italic>
.</article-title>
<publisher-loc>New York</publisher-loc>
:
<publisher-name>New York University Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Barton</surname>
<given-names>J. J.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Disorders of higher visual processing.</article-title>
<source>
<italic>Handb. Clin. Neurol.</italic>
</source>
<volume>102</volume>
<fpage>223</fpage>
<lpage>261</lpage>
<pub-id pub-id-type="doi">10.1016/B978-0-444-52903-9.00015-7</pub-id>
<pub-id pub-id-type="pmid">21601069</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Bazeley</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>
<italic>Qualitative Data Analysis : Practical Strategies</italic>
.</article-title>
<publisher-loc>Thousand Oaks</publisher-loc>
:
<publisher-name>Sage Publications</publisher-name>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Belongie</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Malik</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Puzicha</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Shape matching and object recognition using shape contexts.</article-title>
<source>
<italic>IEEE Trans. Pattern Anal. Mach. Intell.</italic>
</source>
<volume>24</volume>
<fpage>509</fpage>
<lpage>522</lpage>
<pub-id pub-id-type="doi">10.1109/34.993558</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Brennan</surname>
<given-names>R. L.</given-names>
</name>
</person-group>
<article-title>National Council on Measurement in Education American Council on Education.</article-title>
(
<year>2006</year>
).
<source>
<italic>Educational Measurement</italic>
</source>
.
<publisher-loc>Westport</publisher-loc>
:
<publisher-name>Praeger Publishers</publisher-name>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Chiappelli</surname>
<given-names>F.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>
<italic>Manual of Evidence-Based Research for the Health Sciences : Implication for Clinical Dentistry</italic>
.</article-title>
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Nova Science Publishers</publisher-name>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Denzin</surname>
<given-names>N. K.</given-names>
</name>
<name>
<surname>Lincoln</surname>
<given-names>Y. S.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>
<italic>The Sage Handbook of Qualitative Research</italic>
.</article-title>
<publisher-loc>Thousand Oaks</publisher-loc>
:
<publisher-name>Sage</publisher-name>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Downing</surname>
<given-names>S. M.</given-names>
</name>
<name>
<surname>Haladyna</surname>
<given-names>T. M.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>
<italic>Handbook of Test Development</italic>
.</article-title>
<publisher-loc>Mahwah</publisher-loc>
:
<publisher-name>L. Erlbaum</publisher-name>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fei-Fei</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Iyer</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Koch</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Perona</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>What do we perceive in a glance of a real-world scene?</article-title>
<source>
<italic>J. Vis.</italic>
</source>
<volume>7</volume>
<issue>10</issue>
<pub-id pub-id-type="doi">10.1167/7.1.10</pub-id>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gainotti</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>D’Erme</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>De Bonis</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>1989</year>
).
<article-title>[Clinical aspects and mechanisms of visual-spatial neglect].</article-title>
<source>
<italic>Rev. Neurol. (Paris)</italic>
</source>
<volume>145</volume>
<fpage>626</fpage>
<lpage>634</lpage>
<pub-id pub-id-type="pmid">2682937</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gainotti</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>D’Erme</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Diodato</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>1985</year>
).
<article-title>Are drawing errors different in right-sided and left-sided constructional apraxics?</article-title>
<source>
<italic>Ital. J. Neurol. Sci.</italic>
</source>
<volume>6</volume>
<fpage>495</fpage>
<lpage>501</lpage>
<pub-id pub-id-type="doi">10.1007/BF02331044</pub-id>
<pub-id pub-id-type="pmid">4086270</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gauthier</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Visual priming: the ups and downs of familiarity.</article-title>
<source>
<italic>Curr. Biol.</italic>
</source>
<volume>10</volume>
<fpage>R753</fpage>
<lpage>R756</lpage>
<pub-id pub-id-type="doi">10.1016/S0960-9822(00)00738-7</pub-id>
<pub-id pub-id-type="pmid">11069101</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Geissler</surname>
<given-names>H.-G.</given-names>
</name>
<name>
<surname>Link</surname>
<given-names>S. W.</given-names>
</name>
<name>
<surname>Townsend</surname>
<given-names>J. T.</given-names>
</name>
</person-group>
(
<year>1992</year>
).
<article-title>
<italic>Cognition, Information Processing, and Psychophysics : Basic Issues</italic>
.</article-title>
<publisher-loc>Hillsdale</publisher-loc>
:
<publisher-name>L. Erlbaum Associates</publisher-name>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Gescheider</surname>
<given-names>G. A.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>
<italic>Psychophysics : The Fundamentals</italic>
.</article-title>
<publisher-loc>Mahwah</publisher-loc>
:
<publisher-name>L. Erlbaum Associates</publisher-name>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Glozman</surname>
<given-names>J. M.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Quantitative and qualitative integration of Lurian procedures.</article-title>
<source>
<italic>Neuropsychol. Rev.</italic>
</source>
<volume>9</volume>
<fpage>23</fpage>
<lpage>32</lpage>
<pub-id pub-id-type="doi">10.1023/A:1025638903874</pub-id>
<pub-id pub-id-type="pmid">10468374</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Green</surname>
<given-names>D. M.</given-names>
</name>
<name>
<surname>Swets</surname>
<given-names>J. A.</given-names>
</name>
</person-group>
(
<year>1974</year>
).
<article-title>Signal detection theory and psychophysics.</article-title>
<publisher-loc>Huntington</publisher-loc>
:
<publisher-name>R. E. Krieger Pub. Co</publisher-name>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Gregory</surname>
<given-names>R. J.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>
<italic>Psychological Testing : History, Principles, and Applications</italic>
.</article-title>
<publisher-loc>New Jersey</publisher-loc>
:
<publisher-name>Pearson</publisher-name>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grill-Spector</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Kanwisher</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Visual recognition: as soon as you know it is there, you know what it is.</article-title>
<source>
<italic>Psychol. Sci.</italic>
</source>
<volume>16</volume>
<fpage>152</fpage>
<lpage>160</lpage>
<pub-id pub-id-type="doi">10.1111/j.0956-7976.2005.00796.x</pub-id>
<pub-id pub-id-type="pmid">15686582</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gustafson</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>McCandless</surname>
<given-names>L. C.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Probabilistic approaches to better quantifying the results of epidemiologic studies.</article-title>
<source>
<italic>Int. J. Environ. Res. Public Health</italic>
</source>
<volume>7</volume>
<fpage>1520</fpage>
<lpage>1539</lpage>
<pub-id pub-id-type="doi">10.3390/ijerph7041520</pub-id>
<pub-id pub-id-type="pmid">20617044</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Haghi</surname>
<given-names>A. K.</given-names>
</name>
<name>
<surname>Rocci</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>
<italic>Education for A Digital World : Present Realities and Future Possibilities</italic>
.</article-title>
<publisher-loc>Toronto</publisher-loc>
:
<publisher-name>Apple Academic Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Hanson</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Hanson</surname>
<given-names>S. J.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>“Categorization in neuroscience: brain response to objects and events,” in</article-title>
<source>
<italic>Handbook of Categorization in Cognitive Science</italic>
</source>
<role>eds</role>
<person-group person-group-type="editor">
<name>
<surname>Cohen</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Lefebvre</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<publisher-loc>San Diego</publisher-loc>
:
<publisher-name>Elsevier</publisher-name>
)
<fpage>119</fpage>
<lpage>140</lpage>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Hartas</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>
<italic>Educational Research and Inquiry : Qualitative and Quantitative Approaches</italic>
.</article-title>
<publisher-loc>London</publisher-loc>
:
<publisher-name>Continuum</publisher-name>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hegdé</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Time course of visual perception: coarse-to-fine processing and beyond.</article-title>
<source>
<italic>Prog. Neurobiol.</italic>
</source>
<volume>84</volume>
<fpage>405</fpage>
<lpage>439</lpage>
<pub-id pub-id-type="doi">10.1016/j.pneurobio.2007.09.001</pub-id>
<pub-id pub-id-type="pmid">17976895</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Henson</surname>
<given-names>R. N.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Neuroimaging studies of priming.</article-title>
<source>
<italic>Prog. Neurobiol.</italic>
</source>
<volume>70</volume>
<fpage>53</fpage>
<lpage>81</lpage>
<pub-id pub-id-type="doi">10.1016/S0301-0082(03)00086-8</pub-id>
<pub-id pub-id-type="pmid">12927334</pub-id>
</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Holcomb</surname>
<given-names>P. J.</given-names>
</name>
<name>
<surname>Grainger</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Exploring the temporal dynamics of visual word recognition in the masked repetition priming paradigm using event-related potentials.</article-title>
<source>
<italic>Brain Res.</italic>
</source>
<volume>1180</volume>
<fpage>39</fpage>
<lpage>58</lpage>
<pub-id pub-id-type="doi">10.1016/j.brainres.2007.06.110</pub-id>
<pub-id pub-id-type="pmid">17950262</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Joy</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Fein</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Kaplan</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Freedman</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Quantifying qualitative features of Block Design performance among healthy older adults.</article-title>
<source>
<italic>Arch. Clin. Neuropsychol.</italic>
</source>
<volume>16</volume>
<fpage>157</fpage>
<lpage>170</lpage>
<pub-id pub-id-type="doi">10.1093/arclin/16.2.157</pub-id>
<pub-id pub-id-type="pmid">14590184</pub-id>
</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kristjansson</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Campana</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Where perception meets memory: a review of repetition priming in visual search tasks.</article-title>
<source>
<italic>Atten. Percept. Psychophys.</italic>
</source>
<volume>72</volume>
<fpage>5</fpage>
<lpage>18</lpage>
<pub-id pub-id-type="doi">10.3758/APP.72.1.5</pub-id>
<pub-id pub-id-type="pmid">20045875</pub-id>
</mixed-citation>
</ref>
<ref id="B29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lazebnik</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Schmid</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Ponce</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>A sparse texture representation using local affine regions.</article-title>
<source>
<italic>IEEE Trans. Pattern Anal. Mach. Intell.</italic>
</source>
<volume>27</volume>
<fpage>1265</fpage>
<lpage>1278</lpage>
<pub-id pub-id-type="doi">10.1109/TPAMI.2005.151</pub-id>
<pub-id pub-id-type="pmid">16119265</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Lezak</surname>
<given-names>M. D.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>
<italic>Neuropsychological Assessment</italic>
.</article-title>
<publisher-loc>Oxford</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B31">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Harris</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Kanwisher</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Stages of processing in face perception: an MEG study.</article-title>
<source>
<italic>Nat. Neurosci.</italic>
</source>
<volume>5</volume>
<fpage>910</fpage>
<lpage>916</lpage>
<pub-id pub-id-type="doi">10.1038/nn909</pub-id>
<pub-id pub-id-type="pmid">12195430</pub-id>
</mixed-citation>
</ref>
<ref id="B32">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Lowe</surname>
<given-names>D. G.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>“Object recognition from local scale-invariant features,” in</article-title>
<source>
<italic>Proceedings IEEE International Conference on Computer Vision 2</italic>
</source>
(
<publisher-loc>Washington</publisher-loc>
:
<publisher-name>IEEE Computer Society</publisher-name>
)
<fpage>1150</fpage>
<lpage>1157</lpage>
<pub-id pub-id-type="doi">10.1109/ICCV.1999.790410</pub-id>
</mixed-citation>
</ref>
<ref id="B33">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mack</surname>
<given-names>M. L.</given-names>
</name>
<name>
<surname>Gauthier</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Sadr</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Palmeri</surname>
<given-names>T. J.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Object detection and basic-level categorization: sometimes you know it is there before you know what it is.</article-title>
<source>
<italic>Psychon. Bull. Rev.</italic>
</source>
<volume>15</volume>
<fpage>28</fpage>
<lpage>35</lpage>
<pub-id pub-id-type="doi">10.3758/PBR.15.1.28</pub-id>
<pub-id pub-id-type="pmid">18605476</pub-id>
</mixed-citation>
</ref>
<ref id="B34">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mack</surname>
<given-names>M. L.</given-names>
</name>
<name>
<surname>Wong</surname>
<given-names>A. C.</given-names>
</name>
<name>
<surname>Gauthier</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Tanaka</surname>
<given-names>J. W.</given-names>
</name>
<name>
<surname>Palmeri</surname>
<given-names>T. J.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Time course of visual object categorization: fastest does not necessarily mean first.</article-title>
<source>
<italic>Vis. Res.</italic>
</source>
<volume>49</volume>
<fpage>1961</fpage>
<lpage>1968</lpage>
<pub-id pub-id-type="doi">10.1016/j.visres.2009.05.005</pub-id>
<pub-id pub-id-type="pmid">19460401</pub-id>
</mixed-citation>
</ref>
<ref id="B35">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Manjunath</surname>
<given-names>B. S. S.</given-names>
</name>
<name>
<surname>Salembier</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Sikora</surname>
<given-names>T.</given-names>
</name>
</person-group>
<comment>ed.</comment>
(
<year>2002</year>
).
<article-title>
<italic>Introduction to MPEG-7: Multimedia Content Description Interface</italic>
.</article-title>
<publisher-loc>Hoboken</publisher-loc>
:
<publisher-name> Wiley and Sons</publisher-name>
</mixed-citation>
</ref>
<ref id="B36">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Maxfield</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>Attention and semantic priming: a review of prime task effects.</article-title>
<source>
<italic>Conscious. Cogn.</italic>
</source>
<volume>6</volume>
<fpage>204</fpage>
<lpage>218</lpage>
<pub-id pub-id-type="doi">10.1006/ccog.1997.0311</pub-id>
</mixed-citation>
</ref>
<ref id="B37">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Mikolajczyk</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Schmid</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>“Indexing based on scale invariant interest points,” in</article-title>
<source>
<italic>Proceedings IEEE International Conference on Computer Vision 1</italic>
</source>
(
<publisher-loc>Los Alamitos</publisher-loc>
:
<publisher-name>IEEE Computer Society</publisher-name>
),
<fpage>525</fpage>
<lpage>531</lpage>
</mixed-citation>
</ref>
<ref id="B38">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Milberg</surname>
<given-names>W. P.</given-names>
</name>
<name>
<surname>Hebben</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Kaplan</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>1996</year>
).
<article-title>“The Boston process approach to neuropsychological assessment,” in</article-title>
<source>
<italic>Neuropsychological Assessment of Neuropsychiatric Disorders</italic>
</source>
<role>eds</role>
<person-group person-group-type="editor">
<name>
<surname>Grant</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Adams</surname>
<given-names>K. M.</given-names>
</name>
</person-group>
(
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
).</mixed-citation>
</ref>
<ref id="B39">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Miles</surname>
<given-names>M. B.</given-names>
</name>
<name>
<surname>Huberman</surname>
<given-names>A. M.</given-names>
</name>
</person-group>
(
<year>1994</year>
).
<article-title>
<italic>Qualitative Data Analysis : An Expanded Sourcebook</italic>
.</article-title>
<publisher-loc>Thousand Oaks</publisher-loc>
:
<publisher-name>Sage Publications</publisher-name>
</mixed-citation>
</ref>
<ref id="B40">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mitchell</surname>
<given-names>K. J.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Curiouser and curiouser: genetic disorders of cortical specialization.</article-title>
<source>
<italic>Curr. Opin. Genet. Dev.</italic>
</source>
<volume>21</volume>
<fpage>271</fpage>
<lpage>277</lpage>
<pub-id pub-id-type="doi">10.1016/j.gde.2010.12.003</pub-id>
<pub-id pub-id-type="pmid">21296568</pub-id>
</mixed-citation>
</ref>
<ref id="B41">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ochsner</surname>
<given-names>K. N.</given-names>
</name>
<name>
<surname>Chiu</surname>
<given-names>C. Y.</given-names>
</name>
<name>
<surname>Schacter</surname>
<given-names>D. L.</given-names>
</name>
</person-group>
(
<year>1994</year>
).
<article-title>Varieties of priming.</article-title>
<source>
<italic>Curr. Opin. Neurobiol.</italic>
</source>
<volume>4</volume>
<fpage>189</fpage>
<lpage>194</lpage>
<pub-id pub-id-type="doi">10.1016/0959-4388(94)90071-X</pub-id>
<pub-id pub-id-type="pmid">8038575</pub-id>
</mixed-citation>
</ref>
<ref id="B42">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ogden-Epker</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Cullum</surname>
<given-names>C. M.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Quantitative and qualitative interpretation of neuropsychological data in the assessment of temporal lobectomy candidates.</article-title>
<source>
<italic>Clin. Neuropsychol.</italic>
</source>
<volume>15</volume>
<fpage>183</fpage>
<lpage>195</lpage>
<pub-id pub-id-type="doi">10.1076/clin.15.2.183.1900</pub-id>
<pub-id pub-id-type="pmid">11528540</pub-id>
</mixed-citation>
</ref>
<ref id="B43">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pachalska</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Grochmal-Bach</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Macqueen</surname>
<given-names>B. D.</given-names>
</name>
<name>
<surname>Wilk</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Lipowska</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Herman-Sucharska</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Neuropsychological diagnosis and treatment after closed-head injury in a patient with a psychiatric history of schizophrenia.</article-title>
<source>
<italic>Med. Sci. Monit.</italic>
</source>
<volume>14</volume>
<fpage>CS76</fpage>
<lpage>CS85</lpage>
<pub-id pub-id-type="pmid">18668003</pub-id>
</mixed-citation>
</ref>
<ref id="B44">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Palmeri</surname>
<given-names>T. J.</given-names>
</name>
<name>
<surname>Gauthier</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Visual object understanding.</article-title>
<source>
<italic>Nat. Rev. Neurosci.</italic>
</source>
<volume>5</volume>
<fpage>291</fpage>
<lpage>303</lpage>
<pub-id pub-id-type="doi">10.1038/nrn1364</pub-id>
<pub-id pub-id-type="pmid">15034554</pub-id>
</mixed-citation>
</ref>
<ref id="B45">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Phelps</surname>
<given-names>R. P.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>
<italic>Standardized Testing Primer</italic>
.</article-title>
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Peter Lang</publisher-name>
</mixed-citation>
</ref>
<ref id="B46">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pollefeys</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Gool</surname>
<given-names>L. V.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>From images to 3D models.</article-title>
<source>
<italic>Commun. ACM</italic>
</source>
<volume>45</volume>
<fpage>50</fpage>
<lpage>55</lpage>
<pub-id pub-id-type="doi">10.1145/514236.514263</pub-id>
</mixed-citation>
</ref>
<ref id="B47">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Poreh</surname>
<given-names>A. M.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>The quantified process approach: an emerging methodology to neuropsychological assessment.</article-title>
<source>
<italic>Clin. Neuropsychol.</italic>
</source>
<volume>14</volume>
<fpage>212</fpage>
<lpage>222</lpage>
<pub-id pub-id-type="doi">10.1076/1385-4046(200005)14:2;1-Z;FT212</pub-id>
<pub-id pub-id-type="pmid">10916196</pub-id>
</mixed-citation>
</ref>
<ref id="B48">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Reynolds</surname>
<given-names>C. R.</given-names>
</name>
<name>
<surname>Willson</surname>
<given-names>V. L.</given-names>
</name>
</person-group>
(
<year>1985</year>
).
<article-title>
<italic>Methodological and Statistical Advances in the Study of Individual Differences</italic>
.</article-title>
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Plenum Press</publisher-name>
<pub-id pub-id-type="doi">10.1007/978-1-4684-4940-2</pub-id>
</mixed-citation>
</ref>
<ref id="B49">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rosch</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>1973</year>
).
<article-title>Natural categories.</article-title>
<source>
<italic>Cogn. Psychol.</italic>
</source>
<volume>4</volume>
<fpage>328</fpage>
<lpage>350</lpage>
<pub-id pub-id-type="doi">10.1016/0010-0285(73)90017-0</pub-id>
</mixed-citation>
</ref>
<ref id="B50">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Salkind</surname>
<given-names>N. J.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>
<italic>Encyclopedia of Research Design</italic>
.</article-title>
<publisher-loc>Thousand Oaks</publisher-loc>
:
<publisher-name>SAGE Publications</publisher-name>
</mixed-citation>
</ref>
<ref id="B51">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Sauro</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Lewis</surname>
<given-names>J. R.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>
<italic>Quantifying the User Experience : Practical Statistics for User Research</italic>
.</article-title>
<publisher-loc>Amsterdam</publisher-loc>
:
<publisher-name>Elsevier</publisher-name>
</mixed-citation>
</ref>
<ref id="B52">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Snavely</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Seitz</surname>
<given-names>S. M.</given-names>
</name>
<name>
<surname>Szeliski</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>“Photo tourism: exploring photo collections in 3D,” in</article-title>
<source>
<italic>Proceeding ACM Transactions on Graphics, SIGGRAPH ‘06 ACM SIGGRAPH 2006 </italic>
</source>
<volume>Vol. 25</volume>
(
<publisher-loc>New York</publisher-loc>
:
<publisher-name>ACM</publisher-name>
)
<fpage>835</fpage>
<lpage>846</lpage>
</mixed-citation>
</ref>
<ref id="B53">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sugase</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Yamane</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Ueno</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Kawano</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Global and fine information coded by single neurons in the temporal visual cortex.</article-title>
<source>
<italic>Nature</italic>
</source>
<volume>400</volume>
<fpage>869</fpage>
<lpage>873</lpage>
<pub-id pub-id-type="doi">10.1038/23703</pub-id>
<pub-id pub-id-type="pmid">10476965</pub-id>
</mixed-citation>
</ref>
<ref id="B54">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Torrance</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>
<italic>Qualitative Research Methods in Education</italic>
.</article-title>
<publisher-loc>London</publisher-loc>
:
<publisher-name>SAGE</publisher-name>
</mixed-citation>
</ref>
<ref id="B55">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Udupa</surname>
<given-names>J. K.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Three-dimensional visualization and analysis methodologies: a current perspective.</article-title>
<source>
<italic>Radiographics</italic>
</source>
<volume>19</volume>
<fpage>783</fpage>
<lpage>806</lpage>
<pub-id pub-id-type="doi">10.1148/radiographics.19.3.g99ma13783</pub-id>
<pub-id pub-id-type="pmid">10336203</pub-id>
</mixed-citation>
</ref>
<ref id="B56">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Willig</surname>
<given-names>C. R.</given-names>
</name>
<name>
<surname>Stainton-Rogers</surname>
<given-names>W.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>
<italic>The SAGE Handbook of Qualitative Research in Psychology</italic>
.</article-title>
<publisher-loc>London</publisher-loc>
:
<publisher-name>SAGE Publications</publisher-name>
</mixed-citation>
</ref>
<ref id="B57">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Xu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Kanwisher</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>The M170 is selective for faces, not for expertise.</article-title>
<source>
<italic>Neuropsychologia</italic>
</source>
<volume>43</volume>
<fpage>588</fpage>
<lpage>597</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2004.07.016</pub-id>
<pub-id pub-id-type="pmid">15716149</pub-id>
</mixed-citation>
</ref>
<ref id="B58">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zipf-Williams</surname>
<given-names>E. M.</given-names>
</name>
<name>
<surname>Shear</surname>
<given-names>P. K.</given-names>
</name>
<name>
<surname>Strongin</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Winegarden</surname>
<given-names>B. J.</given-names>
</name>
<name>
<surname>Morrell</surname>
<given-names>M. J.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Qualitative block design performance in epilepsy patients.</article-title>
<source>
<italic>Arch. Clin. Neuropsychol.</italic>
</source>
<volume>15</volume>
<fpage>149</fpage>
<lpage>157</lpage>
<pub-id pub-id-type="doi">10.1093/arclin/15.2.149</pub-id>
<pub-id pub-id-type="pmid">14590558</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>États-Unis</li>
</country>
</list>
<tree>
<country name="États-Unis">
<noRegion>
<name sortKey="Maestri, Matthew" sort="Maestri, Matthew" uniqKey="Maestri M" first="Matthew" last="Maestri">Matthew Maestri</name>
</noRegion>
<name sortKey="Hegde, Jay" sort="Hegde, Jay" uniqKey="Hegde J" first="Jay" last="Hegdé">Jay Hegdé</name>
<name sortKey="Hegde, Jay" sort="Hegde, Jay" uniqKey="Hegde J" first="Jay" last="Hegdé">Jay Hegdé</name>
<name sortKey="Hegde, Jay" sort="Hegde, Jay" uniqKey="Hegde J" first="Jay" last="Hegdé">Jay Hegdé</name>
<name sortKey="Maestri, Matthew" sort="Maestri, Matthew" uniqKey="Maestri M" first="Matthew" last="Maestri">Matthew Maestri</name>
<name sortKey="Odel, Jeffrey" sort="Odel, Jeffrey" uniqKey="Odel J" first="Jeffrey" last="Odel">Jeffrey Odel</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002D90 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 002D90 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     PMC:3941477
   |texte=   Semantic descriptor ranking: a quantitative method for evaluating qualitative verbal reports of visual cognition in the laboratory or the clinic
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:24624102" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024