Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Smaller = Denser, and the Brain Knows It: Natural Statistics of Object Density Shape Weight Expectations

Internal identifier: 000273 (Pmc/Curation); previous: 000272; next: 000274


Authors: Megan A. K. Peters [United States]; Jonathan Balzer [United States]; Ladan Shams [United States]

Source:

RBID: PMC:4358826

Abstract

If one nondescript object’s volume is twice that of another, is it necessarily twice as heavy? As larger objects are typically heavier than smaller ones, one might assume humans use such heuristics in preparing to lift novel objects if other informative cues (e.g., material, previous lifts) are unavailable. However, it is also known that humans are sensitive to statistical properties of our environments, and that such sensitivity can bias perception. Here we asked whether statistical regularities in properties of liftable, everyday objects would bias human observers’ predictions about objects’ weight relationships. We developed state-of-the-art computer vision techniques to precisely measure the volume of everyday objects, and also measured their weight. We discovered that for liftable man-made objects, “twice as large” doesn’t mean “twice as heavy”: Smaller objects are typically denser, following a power function of volume. Interestingly, this “smaller is denser” relationship does not hold for natural or unliftable objects, suggesting some ideal density range for objects designed to be lifted. We then asked human observers to predict weight relationships between novel objects without lifting them; crucially, these weight predictions quantitatively match typical weight relationships shown by similarly-sized objects in everyday environments. These results indicate that the human brain represents the statistics of everyday objects and that this representation can be quantitatively abstracted and applied to novel objects. Finally, that the brain possesses and can use precise knowledge of the nonlinear association between size and weight carries important implications for implementation of forward models of motor control in artificial systems.
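The abstract's central quantitative claim is that density follows a power function of volume for liftable man-made objects (with smaller objects denser, i.e., a negative exponent). As an illustrative sketch only — the values below are made-up placeholders, not the study's data, and this is not the authors' code — such a relationship, density = a · volume^b, is typically estimated by least squares on log-transformed measurements:

```python
import numpy as np

# Hypothetical (volume in cm^3, weight in g) pairs -- placeholders, not the study's data.
volumes = np.array([50.0, 120.0, 400.0, 1500.0, 4000.0])
weights = np.array([90.0, 170.0, 420.0, 1100.0, 2300.0])

densities = weights / volumes  # g/cm^3

# Fit log(density) = log(a) + b * log(volume).
# A negative slope b corresponds to the "smaller is denser" pattern.
b, log_a = np.polyfit(np.log(volumes), np.log(densities), 1)
a = np.exp(log_a)
print(f"density ~ {a:.3f} * volume^{b:.3f}")
```

With these example values the fitted exponent comes out negative, matching the qualitative pattern the paper reports; the actual exponent and fit quality would of course depend on the measured object set.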


Url:
DOI: 10.1371/journal.pone.0119794
PubMed: 25768977
PubMed Central: 4358826

Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:4358826

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Smaller = Denser, and the Brain Knows It: Natural Statistics of Object Density Shape Weight Expectations</title>
<author>
<name sortKey="Peters, Megan A K" sort="Peters, Megan A K" uniqKey="Peters M" first="Megan A. K." last="Peters">Megan A. K. Peters</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Department of Psychology, University of California Los Angeles, 1285 Franz Hall, Box 951563, Los Angeles, California, 90095–1563, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Psychology, University of California Los Angeles, 1285 Franz Hall, Box 951563, Los Angeles, California, 90095–1563</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Balzer, Jonathan" sort="Balzer, Jonathan" uniqKey="Balzer J" first="Jonathan" last="Balzer">Jonathan Balzer</name>
<affiliation wicri:level="1">
<nlm:aff id="aff002">
<addr-line>Department of Computer Science, University of California Los Angeles, 4732 Boelter Hall, Los Angeles, California, 90095, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Computer Science, University of California Los Angeles, 4732 Boelter Hall, Los Angeles, California, 90095</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Shams, Ladan" sort="Shams, Ladan" uniqKey="Shams L" first="Ladan" last="Shams">Ladan Shams</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Department of Psychology, University of California Los Angeles, 1285 Franz Hall, Box 951563, Los Angeles, California, 90095–1563, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Psychology, University of California Los Angeles, 1285 Franz Hall, Box 951563, Los Angeles, California, 90095–1563</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff003">
<addr-line>Department of Bioengineering, University of California Los Angeles, 420 Westwood Plaza, 5121 Engineering V, Los Angeles, California, 90095–1600, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Bioengineering, University of California Los Angeles, 420 Westwood Plaza, 5121 Engineering V, Los Angeles, California, 90095–1600</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">25768977</idno>
<idno type="pmc">4358826</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4358826</idno>
<idno type="RBID">PMC:4358826</idno>
<idno type="doi">10.1371/journal.pone.0119794</idno>
<date when="2015">2015</date>
<idno type="wicri:Area/Pmc/Corpus">000273</idno>
<idno type="wicri:Area/Pmc/Curation">000273</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Smaller = Denser, and the Brain Knows It: Natural Statistics of Object Density Shape Weight Expectations</title>
<author>
<name sortKey="Peters, Megan A K" sort="Peters, Megan A K" uniqKey="Peters M" first="Megan A. K." last="Peters">Megan A. K. Peters</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Department of Psychology, University of California Los Angeles, 1285 Franz Hall, Box 951563, Los Angeles, California, 90095–1563, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Psychology, University of California Los Angeles, 1285 Franz Hall, Box 951563, Los Angeles, California, 90095–1563</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Balzer, Jonathan" sort="Balzer, Jonathan" uniqKey="Balzer J" first="Jonathan" last="Balzer">Jonathan Balzer</name>
<affiliation wicri:level="1">
<nlm:aff id="aff002">
<addr-line>Department of Computer Science, University of California Los Angeles, 4732 Boelter Hall, Los Angeles, California, 90095, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Computer Science, University of California Los Angeles, 4732 Boelter Hall, Los Angeles, California, 90095</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Shams, Ladan" sort="Shams, Ladan" uniqKey="Shams L" first="Ladan" last="Shams">Ladan Shams</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Department of Psychology, University of California Los Angeles, 1285 Franz Hall, Box 951563, Los Angeles, California, 90095–1563, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Psychology, University of California Los Angeles, 1285 Franz Hall, Box 951563, Los Angeles, California, 90095–1563</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff003">
<addr-line>Department of Bioengineering, University of California Los Angeles, 420 Westwood Plaza, 5121 Engineering V, Los Angeles, California, 90095–1600, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Bioengineering, University of California Los Angeles, 420 Westwood Plaza, 5121 Engineering V, Los Angeles, California, 90095–1600</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2015">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>If one nondescript object’s volume is twice that of another, is it necessarily twice as heavy? As larger objects are typically heavier than smaller ones, one might assume humans use such heuristics in preparing to lift novel objects if other informative cues (e.g., material, previous lifts) are unavailable. However, it is also known that humans are sensitive to statistical properties of our environments, and that such sensitivity can bias perception. Here we asked whether statistical regularities in properties of liftable, everyday objects would bias human observers’ predictions about objects’ weight relationships. We developed state-of-the-art computer vision techniques to precisely measure the volume of everyday objects, and also measured their weight. We discovered that for liftable man-made objects, “twice as large” doesn’t mean “twice as heavy”: Smaller objects are typically denser, following a power function of volume. Interestingly, this “smaller is denser” relationship does not hold for natural or unliftable objects, suggesting some ideal density range for objects designed to be lifted. We then asked human observers to predict weight relationships between novel objects without lifting them; crucially, these weight predictions
<italic>quantitatively</italic>
match typical weight relationships shown by similarly-sized objects in everyday environments. These results indicate that the human brain represents the statistics of everyday objects and that this representation can be quantitatively abstracted and applied to novel objects. Finally, that the brain possesses and can use precise knowledge of the nonlinear association between size and weight carries important implications for implementation of forward models of motor control in artificial systems.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Kawato, M" uniqKey="Kawato M">M Kawato</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miall, Rc" uniqKey="Miall R">RC Miall</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Scott, Sh" uniqKey="Scott S">SH Scott</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Buckingham, G" uniqKey="Buckingham G">G Buckingham</name>
</author>
<author>
<name sortKey="Cant, Js" uniqKey="Cant J">JS Cant</name>
</author>
<author>
<name sortKey="Goodale, Ma" uniqKey="Goodale M">MA Goodale</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Buckingham, G" uniqKey="Buckingham G">G Buckingham</name>
</author>
<author>
<name sortKey="Goodale, Ma" uniqKey="Goodale M">MA Goodale</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Flanagan, Jr" uniqKey="Flanagan J">JR Flanagan</name>
</author>
<author>
<name sortKey="Bittner, J" uniqKey="Bittner J">J Bittner</name>
</author>
<author>
<name sortKey="Johansson, Rs" uniqKey="Johansson R">RS Johansson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Flanagan, Jr" uniqKey="Flanagan J">JR Flanagan</name>
</author>
<author>
<name sortKey="King, S" uniqKey="King S">S King</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
<author>
<name sortKey="Johansson, Rs" uniqKey="Johansson R">RS Johansson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gordon, Am" uniqKey="Gordon A">AM Gordon</name>
</author>
<author>
<name sortKey="Forssberg, H" uniqKey="Forssberg H">H Forssberg</name>
</author>
<author>
<name sortKey="Johansson, Rs" uniqKey="Johansson R">RS Johansson</name>
</author>
<author>
<name sortKey="Westling, G" uniqKey="Westling G">G Westling</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gordon, Am" uniqKey="Gordon A">AM Gordon</name>
</author>
<author>
<name sortKey="Forssberg, H" uniqKey="Forssberg H">H Forssberg</name>
</author>
<author>
<name sortKey="Johansson, Rs" uniqKey="Johansson R">RS Johansson</name>
</author>
<author>
<name sortKey="Westling, G" uniqKey="Westling G">G Westling</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mon Williams, M" uniqKey="Mon Williams M">M Mon-Williams</name>
</author>
<author>
<name sortKey="Murray, Ah" uniqKey="Murray A">AH Murray</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Adams, W" uniqKey="Adams W">W Adams</name>
</author>
<author>
<name sortKey="Graf, E" uniqKey="Graf E">E Graf</name>
</author>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hedges, Jh" uniqKey="Hedges J">JH Hedges</name>
</author>
<author>
<name sortKey="Stocker, Aa" uniqKey="Stocker A">AA Stocker</name>
</author>
<author>
<name sortKey="Simoncelli, Ep" uniqKey="Simoncelli E">EP Simoncelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Weiss, Y" uniqKey="Weiss Y">Y Weiss</name>
</author>
<author>
<name sortKey="Simoncelli, E" uniqKey="Simoncelli E">E Simoncelli</name>
</author>
<author>
<name sortKey="Adelson, Eh" uniqKey="Adelson E">EH Adelson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Girshick, Ar" uniqKey="Girshick A">AR Girshick</name>
</author>
<author>
<name sortKey="Landy, Ms" uniqKey="Landy M">MS Landy</name>
</author>
<author>
<name sortKey="Simoncelli, Ep" uniqKey="Simoncelli E">EP Simoncelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Goldreich, D" uniqKey="Goldreich D">D Goldreich</name>
</author>
<author>
<name sortKey="Peterson, Ma" uniqKey="Peterson M">MA Peterson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Aslin, Rn" uniqKey="Aslin R">RN Aslin</name>
</author>
<author>
<name sortKey="Newport, E" uniqKey="Newport E">E Newport</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fiser, J" uniqKey="Fiser J">J Fiser</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hunt, Rrh" uniqKey="Hunt R">RRH Hunt</name>
</author>
<author>
<name sortKey="Aslin, Rn" uniqKey="Aslin R">RN Aslin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mitchel, Ad" uniqKey="Mitchel A">AD Mitchel</name>
</author>
<author>
<name sortKey="Weiss, Dj" uniqKey="Weiss D">DJ Weiss</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Seitz, Ar" uniqKey="Seitz A">AR Seitz</name>
</author>
<author>
<name sortKey="Kim, R" uniqKey="Kim R">R Kim</name>
</author>
<author>
<name sortKey="Van Wassenhove, V" uniqKey="Van Wassenhove V">V van Wassenhove</name>
</author>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L Shams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gekas, N" uniqKey="Gekas N">N Gekas</name>
</author>
<author>
<name sortKey="Chalk, M" uniqKey="Chalk M">M Chalk</name>
</author>
<author>
<name sortKey="Seitz, Ar" uniqKey="Seitz A">AR Seitz</name>
</author>
<author>
<name sortKey="Series, P" uniqKey="Series P">P Seriès</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Series, P" uniqKey="Series P">P Seriès</name>
</author>
<author>
<name sortKey="Seitz, A" uniqKey="Seitz A">A Seitz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Balzer, J" uniqKey="Balzer J">J Balzer</name>
</author>
<author>
<name sortKey="Peters, Mak" uniqKey="Peters M">MAK Peters</name>
</author>
<author>
<name sortKey="Soatto, S" uniqKey="Soatto S">S Soatto</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Soatto, S" uniqKey="Soatto S">S Soatto</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ross, J" uniqKey="Ross J">J Ross</name>
</author>
<author>
<name sortKey="Di Lollo, V" uniqKey="Di Lollo V">V di Lollo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ross, He" uniqKey="Ross H">HE Ross</name>
</author>
<author>
<name sortKey="Reschke" uniqKey="Reschke">Reschke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sanborn, An" uniqKey="Sanborn A">AN Sanborn</name>
</author>
<author>
<name sortKey="Mansinghka, Vk" uniqKey="Mansinghka V">VK Mansinghka</name>
</author>
<author>
<name sortKey="Griffiths, Tl" uniqKey="Griffiths T">TL Griffiths</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lilliefors, Hw" uniqKey="Lilliefors H">HW Lilliefors</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Frayman, B" uniqKey="Frayman B">B Frayman</name>
</author>
<author>
<name sortKey="Dawson, W" uniqKey="Dawson W">W Dawson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Benjamini, Y" uniqKey="Benjamini Y">Y Benjamini</name>
</author>
<author>
<name sortKey="Hochberg, Y" uniqKey="Hochberg Y">Y Hochberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Benjamini, Y" uniqKey="Benjamini Y">Y Benjamini</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brayanov, J" uniqKey="Brayanov J">J Brayanov</name>
</author>
<author>
<name sortKey="Smith, Ma" uniqKey="Smith M">MA Smith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Flanagan, Jr" uniqKey="Flanagan J">JR Flanagan</name>
</author>
<author>
<name sortKey="Beltzner, Ma" uniqKey="Beltzner M">MA Beltzner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grandy, M" uniqKey="Grandy M">M Grandy</name>
</author>
<author>
<name sortKey="Westwood, Da" uniqKey="Westwood D">DA Westwood</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Geisler, Ws" uniqKey="Geisler W">WS Geisler</name>
</author>
<author>
<name sortKey="Perry, Js" uniqKey="Perry J">JS Perry</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schwartz, O" uniqKey="Schwartz O">O Schwartz</name>
</author>
<author>
<name sortKey="Hsu, A" uniqKey="Hsu A">A Hsu</name>
</author>
<author>
<name sortKey="Dayan, P" uniqKey="Dayan P">P Dayan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Buckingham, G" uniqKey="Buckingham G">G Buckingham</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Buckingham, G" uniqKey="Buckingham G">G Buckingham</name>
</author>
<author>
<name sortKey="Goodale, Ma" uniqKey="Goodale M">MA Goodale</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hadjiosif, Am" uniqKey="Hadjiosif A">AM Hadjiosif</name>
</author>
<author>
<name sortKey="Smith, Ma" uniqKey="Smith M">MA Smith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Craje, C" uniqKey="Craje C">C Crajé</name>
</author>
<author>
<name sortKey="Santello, M" uniqKey="Santello M">M Santello</name>
</author>
<author>
<name sortKey="Gordon, Am" uniqKey="Gordon A">AM Gordon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Johansson, Rs" uniqKey="Johansson R">RS Johansson</name>
</author>
<author>
<name sortKey="Westling, G" uniqKey="Westling G">G Westling</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gordon, Am" uniqKey="Gordon A">AM Gordon</name>
</author>
<author>
<name sortKey="Forssberg, H" uniqKey="Forssberg H">H Forssberg</name>
</author>
<author>
<name sortKey="Johansson, Rs" uniqKey="Johansson R">RS Johansson</name>
</author>
<author>
<name sortKey="Eliasson, Ac" uniqKey="Eliasson A">AC Eliasson</name>
</author>
<author>
<name sortKey="Westling, G" uniqKey="Westling G">G Westling</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Charpentier, A" uniqKey="Charpentier A">A Charpentier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Baugh, La" uniqKey="Baugh L">LA Baugh</name>
</author>
<author>
<name sortKey="Kao, M" uniqKey="Kao M">M Kao</name>
</author>
<author>
<name sortKey="Johansson, Rs" uniqKey="Johansson R">RS Johansson</name>
</author>
<author>
<name sortKey="Flanagan, Jr" uniqKey="Flanagan J">JR Flanagan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Anderson, N" uniqKey="Anderson N">N Anderson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cross, Dv" uniqKey="Cross D">DV Cross</name>
</author>
<author>
<name sortKey="Rotkin, L" uniqKey="Rotkin L">L Rotkin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Huang, I" uniqKey="Huang I">I Huang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stevens, J" uniqKey="Stevens J">J Stevens</name>
</author>
<author>
<name sortKey="Rubin, Ll" uniqKey="Rubin L">LL Rubin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Goodale, Ma" uniqKey="Goodale M">MA Goodale</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="iso-abbrev">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, CA USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">25768977</article-id>
<article-id pub-id-type="pmc">4358826</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0119794</article-id>
<article-id pub-id-type="publisher-id">PONE-D-14-41239</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Smaller = Denser, and the Brain Knows It: Natural Statistics of Object Density Shape Weight Expectations</article-title>
<alt-title alt-title-type="running-head">Density Natural Statistics Drive Weight Expectations</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Peters</surname>
<given-names>Megan A. K.</given-names>
</name>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
<xref rid="cor001" ref-type="corresp">*</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Balzer</surname>
<given-names>Jonathan</given-names>
</name>
<xref ref-type="aff" rid="aff002">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Shams</surname>
<given-names>Ladan</given-names>
</name>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff003">
<sup>3</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff001">
<label>1</label>
<addr-line>Department of Psychology, University of California Los Angeles, 1285 Franz Hall, Box 951563, Los Angeles, California, 90095–1563, United States of America</addr-line>
</aff>
<aff id="aff002">
<label>2</label>
<addr-line>Department of Computer Science, University of California Los Angeles, 4732 Boelter Hall, Los Angeles, California, 90095, United States of America</addr-line>
</aff>
<aff id="aff003">
<label>3</label>
<addr-line>Department of Bioengineering, University of California Los Angeles, 420 Westwood Plaza, 5121 Engineering V, Los Angeles, California, 90095–1600, United States of America</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Malo</surname>
<given-names>Jesus</given-names>
</name>
<role>Academic Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">
<addr-line>Universitat de Valencia, SPAIN</addr-line>
</aff>
<author-notes>
<fn fn-type="conflict" id="coi001">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="con" id="contrib001">
<p>Conceived and designed the experiments: MAKP LS. Performed the experiments: MAKP. Analyzed the data: MAKP. Contributed reagents/materials/analysis tools: JB. Wrote the paper: MAKP LS.</p>
</fn>
<corresp id="cor001">* E-mail:
<email>meganakpeters@ucla.edu</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>13</day>
<month>3</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="collection">
<year>2015</year>
</pub-date>
<volume>10</volume>
<issue>3</issue>
<elocation-id>e0119794</elocation-id>
<history>
<date date-type="received">
<day>12</day>
<month>9</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>30</day>
<month>1</month>
<year>2015</year>
</date>
</history>
<permissions>
<copyright-year>2015</copyright-year>
<copyright-holder>Peters et al</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open access article distributed under the terms of the
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License</ext-link>
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:type="simple" xlink:href="pone.0119794.pdf"></self-uri>
<abstract>
<p>If one nondescript object’s volume is twice that of another, is it necessarily twice as heavy? As larger objects are typically heavier than smaller ones, one might assume humans use such heuristics in preparing to lift novel objects if other informative cues (e.g., material, previous lifts) are unavailable. However, it is also known that humans are sensitive to statistical properties of our environments, and that such sensitivity can bias perception. Here we asked whether statistical regularities in properties of liftable, everyday objects would bias human observers’ predictions about objects’ weight relationships. We developed state-of-the-art computer vision techniques to precisely measure the volume of everyday objects, and also measured their weight. We discovered that for liftable man-made objects, “twice as large” doesn’t mean “twice as heavy”: Smaller objects are typically denser, following a power function of volume. Interestingly, this “smaller is denser” relationship does not hold for natural or unliftable objects, suggesting some ideal density range for objects designed to be lifted. We then asked human observers to predict weight relationships between novel objects without lifting them; crucially, these weight predictions
<italic>quantitatively</italic>
match typical weight relationships shown by similarly-sized objects in everyday environments. These results indicate that the human brain represents the statistics of everyday objects and that this representation can be quantitatively abstracted and applied to novel objects. Finally, that the brain possesses and can use precise knowledge of the nonlinear association between size and weight carries important implications for implementation of forward models of motor control in artificial systems.</p>
</abstract>
<funding-group>
<funding-statement>This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. M. A. K. P. was supported by the National Science Foundation Graduate Research Fellowship Program. L. S. was supported by National Science Foundation grant BCS-1057625. This research was also supported in part by ongoing Office of Naval Research (ONR) N00014-13-1-0563 and Air Force Research Laboratory (AFRL) FA8650-11-1-7156:P00004. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</funding-statement>
</funding-group>
<counts>
<fig-count count="5"></fig-count>
<table-count count="3"></table-count>
<page-count count="15"></page-count>
</counts>
<custom-meta-group>
<custom-meta id="data-availability">
<meta-name>Data Availability</meta-name>
<meta-value>All relevant data are within the paper and its Supporting Information files.</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
<notes>
<title>Data Availability</title>
<p>All relevant data are within the paper and its Supporting Information files.</p>
</notes>
</front>
<body>
<sec sec-type="intro" id="sec001">
<title>Introduction</title>
<p>Even advanced artificial systems cannot grasp and lift objects with ‘human-like’ ease and dexterity. Theories of sensorimotor processing recognize that our ability rests not just on fast or precise sensory responses to errors, but also on our accurate and precise
<italic>predictions of</italic>
the sensory consequences of motor commands [
<xref rid="pone.0119794.ref001" ref-type="bibr">1</xref>
<xref rid="pone.0119794.ref003" ref-type="bibr">3</xref>
]. Yet what is the basis for predictions about an object’s weight? Often, we use visual information about size, shape, and material (density), as well as memory of previous lifts [
<xref rid="pone.0119794.ref004" ref-type="bibr">4</xref>
<xref rid="pone.0119794.ref009" ref-type="bibr">9</xref>
]. Yet if an object’s material is uninformative and it has never been lifted before, is a visual estimate of size and shape enough to predict weight correctly? While it is known that human observers expect larger objects to be heavier than smaller ones [
<xref rid="pone.0119794.ref005" ref-type="bibr">5</xref>
,
<xref rid="pone.0119794.ref008" ref-type="bibr">8</xref>
<xref rid="pone.0119794.ref010" ref-type="bibr">10</xref>
], the quantitative precision of this estimation is unclear. Do we use a simple heuristic—e.g., that an object with twice the volume should be twice as heavy—or is a more complex calculation involved? We aimed to explore whether knowledge of environmental statistics may play a role in weight prediction.</p>
<p>A wealth of data demonstrates that humans are sensitive to statistical environmental properties: Light generally comes from above, leading to strong perceptions of convexity in shaded 2D objects [
<xref rid="pone.0119794.ref011" ref-type="bibr">11</xref>
]; motion in the world is typically slow and smooth, biasing humans’ visual estimates of speed under uncertain conditions [
<xref rid="pone.0119794.ref012" ref-type="bibr">12</xref>
,
<xref rid="pone.0119794.ref013" ref-type="bibr">13</xref>
]; environmental distributions of contour orientation cluster around cardinal directions, biasing perception [
<xref rid="pone.0119794.ref014" ref-type="bibr">14</xref>
]; and human observers are also biased to perceive that objects are convex and background colors are homogenous [
<xref rid="pone.0119794.ref015" ref-type="bibr">15</xref>
] due to regular patterns in these environmental properties. Thus, it is clear that the human brain can maintain representations of environmental regularities.</p>
<p>How are such representations obtained? Many studies have demonstrated that mere exposure to statistical regularities (e.g., regular pairing of sensory stimuli) in an experimental environment can lead to learning [
<xref rid="pone.0119794.ref016" ref-type="bibr">16</xref>
<xref rid="pone.0119794.ref018" ref-type="bibr">18</xref>
]. Recent studies have also begun to explore how humans can learn simultaneous, independent statistics of multimodal inputs [
<xref rid="pone.0119794.ref019" ref-type="bibr">19</xref>
] as well as cross-modal associations between audio-visual cues [
<xref rid="pone.0119794.ref020" ref-type="bibr">20</xref>
]. However, although participants can learn joint statistical properties of two simultaneous distributions within the visual modality [
<xref rid="pone.0119794.ref021" ref-type="bibr">21</xref>
], it remains unclear how the brain may represent a distribution of the
<italic>co-occurrence</italic>
of
<italic>crossmodal</italic>
environmental properties [
<xref rid="pone.0119794.ref022" ref-type="bibr">22</xref>
]. And although it has been suggested that humans can learn qualitative statistics of object weights and sizes in an artificial setting [
<xref rid="pone.0119794.ref006" ref-type="bibr">6</xref>
], the degree to which the brain can extract an abstract representation of the typical link between object size and weight through heterogeneous everyday experience is not known, nor is it known whether any such representation is quantitative. Given the brain’s statistical learning abilities in other contexts, we hypothesized that a similar mechanism extracts the relationship between volume and weight for the objects humans regularly lift and manipulate, and makes it available to the perceptual system even before an object is lifted.</p>
</sec>
<sec sec-type="materials|methods" id="sec002">
<title>Materials and Methods</title>
<sec id="sec003">
<title>Environmental data collection</title>
<p>To identify the true relationship between volume and weight in everyday environments, three datasets of artificial, liftable objects were collected. For Dataset 1 in
<xref rid="pone.0119794.s002" ref-type="supplementary-material">S1 Dataset</xref>
, using a ruler and basic geometry, we estimated the volumes of 43 objects selected randomly from everyday home and office environments. Examples include computer mice, smartphones, shoes, coffee mugs, staplers, cooking utensils, packaged food items, and personal care items such as soap and shampoo. In the interest of efficiency, we next supplemented this sample with a coarser measure, collecting basic product dimensions (length, width, height, and weight) from online retailers such as Amazon.com. Such coarse information was collected for 124 household objects and made up Dataset 2 in
<xref rid="pone.0119794.s003" ref-type="supplementary-material">S2 Dataset</xref>
. Although this method of measurement is coarse, it allowed us to sample a much broader set of objects than would have been possible had we been restricted to our own homes and offices, and without necessitating purchasing, shipping or transporting, and storing the items.</p>
<p>Finally, to gain a more precise estimate of volume than what is provided by tape measurements or online surveys, we developed a custom software package. Video and point-depth estimates captured by a Carmine Primesense 1.09 depth sensor were fed into a depth-estimation algorithm and used to produce a mesh grid virtual representation of 28 man-made household objects [
<xref rid="pone.0119794.ref023" ref-type="bibr">23</xref>
]. These objects’ volumes were calculated from the mesh grid virtual representations through our custom software written in Qt Creator and Matlab, and used to generate Dataset 3 in
<xref rid="pone.0119794.s004" ref-type="supplementary-material">S3 Dataset</xref>
[
<xref rid="pone.0119794.ref023" ref-type="bibr">23</xref>
]. We applied the same method to generate Dataset 4 in
<xref rid="pone.0119794.s004" ref-type="supplementary-material">S3 Dataset</xref>
, which consisted of 28 natural objects, such as fruits, vegetables, and objects found in the outdoors (e.g., pinecones).</p>
<p>The method developed enables precise measurements of the volume of everyday objects in a user-friendly, inexpensive manner. Note that such objects often exhibit complex geometry, topology, and photometry, thus precluding the use of off-the-shelf laser scanners (due to specular reflections); volume displacement techniques, e.g., submerging objects in water, cannot be easily employed as many objects either float (e.g., apples), absorb water (e.g., cardboard packaging for foodstuffs, stuffed animals), or are permanently damaged by water (e.g., hand-held consumer electronics). Further, we wished to measure volume in a manner as analogous as possible to the way in which humans do so without access to haptic information, i.e., on the basis of visual information alone [
<xref rid="pone.0119794.ref024" ref-type="bibr">24</xref>
]. For example, visual inspection prior to lifting would provide no information about internal cavities (as in hollow or porous objects). Thus, we applied these state-of-the-art computer vision algorithms to produce 3-D models of everyday man-made (Dataset 3 in
<xref rid="pone.0119794.s004" ref-type="supplementary-material">S3 Dataset</xref>
) and naturally-occurring (Dataset 4 in
<xref rid="pone.0119794.s004" ref-type="supplementary-material">S3 Dataset</xref>
) objects [
<xref rid="pone.0119794.ref023" ref-type="bibr">23</xref>
] and calculated their volumes and densities. Our method has been tested and validated, and produces an average relative volume error of -0.34%, making it both accurate and precise [
<xref rid="pone.0119794.ref023" ref-type="bibr">23</xref>
]. The software is freely available for download at
<ext-link ext-link-type="uri" xlink:href="https://bitbucket.org/jbalzer/yas/wiki/Home">https://bitbucket.org/jbalzer/yas/wiki/Home</ext-link>
.</p>
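The core of such a volume measurement, once a closed triangle mesh has been reconstructed from depth data, is a standard divergence-theorem computation. The authors' actual implementation is written in Qt Creator and Matlab [23]; the following is only a minimal Python sketch of the underlying calculation, with an illustrative test shape.

```python
# Minimal sketch (not the published tool): volume of a closed,
# consistently outward-oriented triangle mesh, computed as the sum of
# signed tetrahedron volumes between each face and the origin.

def mesh_volume(vertices, faces):
    """Return the enclosed volume of a closed triangle mesh."""
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    total = 0.0
    for i, j, k in faces:
        va, vb, vc = vertices[i], vertices[j], vertices[k]
        total += dot(va, cross(vb, vc))  # 6x the signed tetrahedron volume
    return abs(total) / 6.0

# Illustrative check: a right tetrahedron with unit legs has volume 1/6.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]  # outward-facing
vol = mesh_volume(verts, tris)  # ≈ 0.1667
```

Because the method integrates over the visible surface only, it naturally ignores internal cavities, matching the visual-estimation analogy described above.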
<p>For all objects in Dataset 1 in
<xref rid="pone.0119794.s002" ref-type="supplementary-material">S1 Dataset</xref>
and Datasets 3 and 4 in
<xref rid="pone.0119794.s004" ref-type="supplementary-material">S3 Dataset</xref>
, weight was measured to 0.1 g precision using an electronic scale (American Weigh). A final dataset was constructed via online survey, as for Dataset 2 in
<xref rid="pone.0119794.s003" ref-type="supplementary-material">S2 Dataset</xref>
(i.e., gathering length, width, height, and weight as reported on product pages) for 28 artificial but
<italic>unliftable</italic>
objects, such as large furniture, large household appliances, and vehicles (Dataset 5 in
<xref rid="pone.0119794.s003" ref-type="supplementary-material">S2 Dataset</xref>
).</p>
</sec>
<sec id="sec004">
<title>Perceptual Experiment</title>
<p>
<bold>Human subjects.</bold>
Twenty individuals (mean age: 19.9 years, range: 18–25 years, 3 men, 16 right-handed) gave written informed consent to participate in the perceptual portion of this study. All participants had normal or corrected-to-normal vision and normal hearing.</p>
<p>
<bold>Ethics statement.</bold>
This experiment was conducted in accordance with the Declaration of Helsinki and approved by the UCLA Institutional Review Board.</p>
<p>
<bold>Materials.</bold>
Experimental stimuli consisted of twelve objects: three sets of four objects possessing identical volume ratios. From small to large, these objects will be referred to as objects A, B (3.375 times A’s volume), C (8 times A’s volume), and D (27 times A’s volume). The Blob set consisted of four identically-shaped blobs spray-painted blue, with volumes of 111.63, 376.75, 893.03, and 3013.98 cm
<sup>3</sup>
, respectively. The Greeble set consisted of four identically-shaped greebles spray-painted green, with volumes of 65.72, 221.80, 525.75, and 1774.41 cm
<sup>3</sup>
, respectively. The Blob and Greeble sets were constructed via 3-D printing out of a plaster-like substance. The Cube set consisted of four cubic objects constructed out of tagboard and covered in balsa wood, with volumes of 131.10, 442.45, 1048.77, and 3539.61 cm
<sup>3</sup>
, respectively. All objects were hollow. Objects not in use on a given trial remained hidden behind a black curtain; the experimenter also remained hidden from view.</p>
<p>
<bold>Perceptual task procedure.</bold>
Subjects were randomly assigned to one of two groups: The Expected Weight (EW) group was given instructions to report their expectation about weight, while the Perceived Volume (PV) group was given instructions to report their perception of volume. Groups were comparable in terms of demographic composition. On each trial, objects were presented two at a time, placed side by side in front of the participant on a black cloth (so as to dampen any sounds associated with their placement that might be used as cues to density). The object to the participant’s left was given a reference value [
<xref rid="pone.0119794.ref025" ref-type="bibr">25</xref>
,
<xref rid="pone.0119794.ref026" ref-type="bibr">26</xref>
] of 10 units (units of weight for the EW group, and units of volume for the PV group), and the subject was instructed to verbally report his expectation regarding the object on the right, in the form of a ratio referencing the left object’s value of 10 units. For example, if a small object was presented on the left, and a larger one on the right, a subject in the EW group might say “20” if he believed the larger object should weigh twice as much as the smaller; a subject in the PV group might say “30” if he believed the larger object possessed three times the volume of the smaller; and so on. Likewise, if the right object was smaller than the left, a subject might say “5” to indicate his belief that the right object possessed half the volume or weight as the left one. Subjects were instructed to provide this report without touching, lifting, or moving the objects in any way.</p>
<p>Objects were presented in a full factorial design, including all six possible combinations of the four sizes for each object set. Thus, the possible pairings within each object set were: A:B, A:C, A:D, B:C, B:D, C:D (small/left—large/right, S-L); and B:A, C:A, D:A, C:B, D:B, D:C (large/left—small/right, L-S). Subjects completed 10 practice trials, followed by 144 test trials (10 trials of each S-L pairing, 10 trials of each L-S pairing) in pseudorandomized order. No feedback was given. While the experimenter was placing or removing the objects, subjects in both groups were required to close their eyes so as to avoid any cueing effects regarding the possible weight of the objects. The experimenter monitored compliance with all instructions through a small slit in the black curtain.</p>
<p>For analysis, we collapsed across S-L and L-S orderings within an object type; for example, data from the A:C and C:A conditions were pooled for each subject to create a single dataset representing this pair of objects, regardless of presentation placement.</p>
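This pooling step can be sketched as follows. Since verbal reports are ratios against the left object's reference value of 10, an S-L trial yields a large/small ratio and an L-S trial yields a small/large ratio; taking logs and sign-flipping the L-S responses expresses every trial on a common scale before averaging. The responses shown are hypothetical, not actual data from the study.

```python
# Sketch of collapsing S-L and L-S orderings into one per-pair estimate.
# Reports are ratios relative to the left object's reference value of 10.
import math

def pooled_log_ratio(sl_reports, ls_reports, reference=10.0):
    """Mean log large/small ratio, pooling both presentation orders."""
    logs = [math.log(r / reference) for r in sl_reports]    # large/small
    logs += [-math.log(r / reference) for r in ls_reports]  # invert L-S
    return sum(logs) / len(logs)

# Hypothetical A:C data: "30" (right object 3x the left) on an S-L trial
# and "5" (right object half the left) on an L-S trial.
mean_log = pooled_log_ratio([30], [5])
ratio = math.exp(mean_log)  # geometric mean of 3 and 2, i.e. sqrt(6)
```

Averaging in log space makes the pooled estimate a geometric mean, which is the appropriate central tendency for ratio-scale responses.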
</sec>
<sec id="sec005">
<title>Statistical analyses</title>
<p>All analyses of both the environmental and perceptual data were carried out in Matlab (Version 7.10.0) with the Statistics Toolbox and in SPSS Statistics (Version 20.0.0). Means and standard deviations were calculated after taking the natural log transform of each data point to restore linearity, as responses were made as ratios, which are distributed nonlinearly. For some plots and tables, data are transformed back into ratio form for ease of interpretation. A sample size of n = 10 per group was determined to be sufficient given the identification of a medium effect size (Cohen’s
<italic>d</italic>
) in a pilot experiment; after reaching n = 10 in each group, data collection was terminated. All data are available for download as Supplemental Material.</p>
</sec>
</sec>
<sec sec-type="results" id="sec006">
<title>Results</title>
<sec id="sec007">
<title>Environmental object data</title>
<p>True volume and weight data were collected for 195 liftable, man-made, everyday objects and used to calculate their density:
<italic>d = w/V</italic>
where
<italic>d</italic>
is density,
<italic>w</italic>
is weight, and
<italic>V</italic>
is volume. Technically,
<italic>d = m/V</italic>
, where
<italic>m</italic>
is mass; however, because
<italic>w = m × a</italic>
, where
<italic>a</italic>
is acceleration (in this case, acceleration is due to gravity, which is constant), and because weight and mass are used interchangeably in everyday discourse, weight is used as a functional equivalent to mass in this experiment. We used the property of density because it is defined as the very relationship we were interested in (that between volume and weight) and density estimation is often mentioned as a crucial factor in preparation for lifting objects [
<xref rid="pone.0119794.ref005" ref-type="bibr">5</xref>
]. In contrast to predictions of independence between volume and density (
<xref rid="pone.0119794.g001" ref-type="fig">Fig. 1a</xref>
), a power function relationship between volume and density was observed for the man-made object datasets (Dataset 1 in
<xref rid="pone.0119794.s002" ref-type="supplementary-material">S1 Dataset</xref>
and Dataset 2 in
<xref rid="pone.0119794.s003" ref-type="supplementary-material">S2 Dataset</xref>
and Dataset 3 in
<xref rid="pone.0119794.s004" ref-type="supplementary-material">S3 Dataset</xref>
) (
<xref rid="pone.0119794.g001" ref-type="fig">Fig. 1b</xref>
), so a log transform was computed to reveal the nature of the inverse correlation between volume and density for each of the three man-made object datasets (R
<sub>1</sub>
= -.4673, p = .002; R
<sub>2</sub>
= -.6290, p << .001; R
<sub>3</sub>
= -.7917, p << .001), as well as the pooled man-made object data (R = -.5721, p << .001) (
<xref rid="pone.0119794.g001" ref-type="fig">Fig. 1c</xref>
). To compare directly between artificial and natural objects, the same calculation was also performed for a dataset of natural, liftable objects (Dataset 4 in
<xref rid="pone.0119794.s004" ref-type="supplementary-material">S3 Dataset</xref>
) (R
<sub>4</sub>
= -.0048, p = .981), but revealed no significant relationship between volume and density (
<xref rid="pone.0119794.g002" ref-type="fig">Fig. 2a and 2b</xref>
). A final comparison between a randomly-selected subset (n = 28) of objects in Dataset 2 in
<xref rid="pone.0119794.s003" ref-type="supplementary-material">S2 Dataset</xref>
(liftable artificial object statistics garnered from online retailers) and a set of
<italic>unliftable</italic>
artificial objects with dimensions and weight data collected in the same manner (Dataset 5 in
<xref rid="pone.0119794.s003" ref-type="supplementary-material">S2 Dataset</xref>
) also revealed the persistence of the inverse correlation for the subset of liftable artificial objects (R
<sub>2,subset</sub>
= -.8390, p << .001) but not the unliftable ones (R
<sub>5</sub>
= .0396, p = .8416) (
<xref rid="pone.0119794.g002" ref-type="fig">Fig. 2c and 2d</xref>
). Thus, these data revealed that, for liftable man-made objects, density is distributed not uniformly but as a power function of volume: Smaller liftable artificial objects are denser than larger ones, and increasingly so the smaller they are. This relationship does not hold for natural objects or unliftable man-made objects. (See also
<xref rid="pone.0119794.s001" ref-type="supplementary-material">S1 Fig</xref>
. for non-log-transformed data.)</p>
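A power law d = c·V^k is linear in log-log space, so the exponent falls directly out of an ordinary least-squares fit of ln(d) on ln(V), which is the form of analysis described above. The sketch below uses synthetic, noiseless data; the exponent k = -0.387 mirrors the density-ratio slope later fitted to the pooled environmental data, and the scale constant is arbitrary.

```python
# Sketch of the log-log density analysis on synthetic power-law data.
import math

def linfit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

k, c = -0.387, 1.2                         # assumed exponent and scale
volumes = [10, 50, 200, 1000, 3000]        # cm^3, illustrative
densities = [c * v ** k for v in volumes]  # g/cm^3, noiseless

log_v = [math.log(v) for v in volumes]
log_d = [math.log(d) for d in densities]
slope, intercept = linfit(log_v, log_d)    # recovers k and ln(c)
```

With real, noisy data the same fit yields the slope plus a Pearson correlation between log volume and log density, as reported for Datasets 1–3.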
<fig id="pone.0119794.g001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0119794.g001</object-id>
<label>Fig 1</label>
<caption>
<title>Environmental data.</title>
<p>‘Uniform distribution’ predictions (a) differ markedly from the observed power function relationship between volume and density (b). For ease of viewing, (c) displays the natural log-transformed scatterplot of the power function relationship between volume and density for man-made objects, showing a significant inverse correlation between log volume and log density.</p>
</caption>
<graphic xlink:href="pone.0119794.g001"></graphic>
</fig>
<fig id="pone.0119794.g002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0119794.g002</object-id>
<label>Fig 2</label>
<caption>
<title>Comparison of density distribution for natural, artificial, liftable, and unliftable objects.</title>
<p>(a) 3-D scanned liftable artificial objects (Dataset 3 in
<xref rid="pone.0119794.s004" ref-type="supplementary-material">S3 Dataset</xref>
, n = 28) show a significant inverse correlation between log volume and density, while (b) 3-D scanned natural objects (Dataset 4 in
<xref rid="pone.0119794.s004" ref-type="supplementary-material">S3 Dataset</xref>
, n = 28) show no such relationship. Likewise, (c) a subset of randomly-selected objects from the liftable artificial objects collected via online survey (random subset of Dataset 2 in
<xref rid="pone.0119794.s003" ref-type="supplementary-material">S2 Dataset</xref>
, n = 28) also demonstrate this significant inverse correlation, but (d) artificial but
<italic>unliftable</italic>
objects collected via online survey (Dataset 5 in
<xref rid="pone.0119794.s003" ref-type="supplementary-material">S2 Dataset</xref>
, n = 28) show no correlation.</p>
</caption>
<graphic xlink:href="pone.0119794.g002"></graphic>
</fig>
</sec>
<sec id="sec008">
<title>Perceptual Experiment</title>
<p>Participants were shown pairs of similarly-shaped but differently-sized objects, and asked to judge their weight ratio (Expected Weight group, EW) or volume ratio (Perceived Volume group, PV) (See
<xref rid="sec002" ref-type="sec">Materials and Methods</xref>
). If the brain had no knowledge or representation of the environmental statistic linking objects’ weights and densities to their size, answers from participants in the two groups should be identical, on average: Without density information, two objects’ weight ratio should simply be their volume ratio. A difference between group answers would indicate that observers are relying on additional information about objects’ densities to form their weight expectation judgments.</p>
<p>Because the dependent measure was a ratio for both the Expected Weight (EW) and Perceived Volume (PV) groups (and in keeping with studies on relative mass in intuitive physics [
<xref rid="pone.0119794.ref027" ref-type="bibr">27</xref>
], the natural log transform of each data point was computed, as was the mean log ratio for each subject for each object pair. Normality of each of these resultant datasets was then assessed through the Lilliefors test [
<xref rid="pone.0119794.ref028" ref-type="bibr">28</xref>
]—an adaptation of the Kolmogorov-Smirnov one-sample test that allows for testing the null hypothesis that data come from a normally distributed population without the need to specify the expected value and variance of the null hypothesis test distribution. No distributions failed these normality tests.</p>
<p>Consistent with previous studies [
<xref rid="pone.0119794.ref029" ref-type="bibr">29</xref>
], PV ratios did not approach true volume ratios, indicating consistent underestimation of volume (two-tailed t-tests against 0: t
<sub>Blobs</sub>
= 4.766, p << .001; t
<sub>Greebles</sub>
= 5.4994, p << .001; t
<sub>Cubes</sub>
= 5.1265, p << .001). We next conducted a 2 (condition: EW vs. PV) x 3 (object type: Blobs, Greebles, Cubes) x 6 (pair: A:B, A:C, A:D, B:C, B:D, C:D) mixed design ANOVA. This analysis revealed a main effect of condition (F(1,18) = 7.542, p = .013) and pair (F(5,90) = 334.179, p < .001), and an interaction between condition and pair (F(5,90) = 3.334, p = .008), but no other significant effects (p > 0.05). The main effect of condition indicates that participants in the EW group consistently reported larger ratios than did participants in the PV group; the direction of this effect indicates that observers believed the smaller objects to be denser than the larger objects—
<italic>over and above the typical underestimations of volume</italic>
—which qualitatively matches the statistics of the environment. The main effect of pair indicates that participants reported different EW and PV ratios for the pairs of objects, and the interaction indicates that the degree to which EW ratios were larger than PV ratios varies by pair (Figs.
<xref rid="pone.0119794.g003" ref-type="fig">3</xref>
and
<xref rid="pone.0119794.g004" ref-type="fig">4</xref>
).</p>
<fig id="pone.0119794.g003" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0119794.g003</object-id>
<label>Fig 3</label>
<caption>
<title>Human observers’ data, by condition and object type.</title>
<p>Participants’ reported PV ratios are consistently smaller than EW ratios, indicating that subjects believe smaller objects are denser than larger ones, over and above any mis-estimation of volume. Consistent with previous studies, PV consistently underestimates true volume, leading to PV responses larger than the true volume ratio between the objects (gray vertical line).</p>
</caption>
<graphic xlink:href="pone.0119794.g003"></graphic>
</fig>
<fig id="pone.0119794.g004" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0119794.g004</object-id>
<label>Fig 4</label>
<caption>
<title>Human observers’ data, summary.</title>
<p>(a) As before, EW and PV responses for each object type by pair show the “smaller is denser” belief, with EW responses consistently larger than PV responses. (b) Error in estimates (PV—true volume and EW—true volume) collapsed across all pairs demonstrates the effect of condition: EW ratios are larger than PV ratios, and thus display more error in comparison to true volume ratios. Error bars represent standard deviation of responses. The x-axis represents error in estimation of volume/weight.</p>
</caption>
<graphic xlink:href="pone.0119794.g004"></graphic>
</fig>
<p>To further explore the interaction between pair and condition, we conducted six additional post-hoc 2 (condition: EW vs. PV) x 3 (object type: Blobs, Greebles, Cubes) mixed design ANOVAs, one for each object pair, to assess the degree to which the belief that smaller items are denser than larger ones persists for all pairs. Correction for multiple comparisons was accomplished through the False Discovery Rate method [
<xref rid="pone.0119794.ref030" ref-type="bibr">30</xref>
,
<xref rid="pone.0119794.ref031" ref-type="bibr">31</xref>
], which indicated that the expected percentage of false discoveries would be less than 0.2% for each of these six tests (
<xref rid="pone.0119794.t001" ref-type="table">Table 1</xref>
). This result indicates that the belief that smaller objects are denser than larger objects exists for every pair individually, and that the overall main effect is not driven by one or two pairs. No other significant effects were detected with these post-hoc tests. We also measured effect size (Cohen’s
<italic>d</italic>
) for each pair, collapsing across object set. This analysis revealed that effect size grew roughly with increasing dissimilarity between the two object volumes:
<italic>d</italic>
<sub>
<italic>A</italic>
:
<italic>B</italic>
</sub>
= 1.0480,
<italic>d</italic>
<sub>
<italic>A</italic>
:
<italic>C</italic>
</sub>
= 0.9780,
<italic>d</italic>
<sub>
<italic>A</italic>
:
<italic>D</italic>
</sub>
= 1.2134,
<italic>d</italic>
<sub>
<italic>B</italic>
:
<italic>C</italic>
</sub>
= 0.9816,
<italic>d</italic>
<sub>
<italic>B</italic>
:
<italic>D</italic>
</sub>
= 1.0037,
<italic>d</italic>
<sub>
<italic>C</italic>
:
<italic>D</italic>
</sub>
= 1.1835. All of these effect sizes are considered large. We also measured the effect size for each object set collapsing across pair, which revealed the Greebles object set effect size (
<italic>d</italic>
<sub>
<italic>Greebles</italic>
</sub>
= .4059) to be smaller than the other two object sets (
<italic>d</italic>
<sub>
<italic>Blobs</italic>
</sub>
= .5210,
<italic>d</italic>
<sub>
<italic>Cubes</italic>
</sub>
= .5819).</p>
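The False Discovery Rate correction used above can be illustrated with a generic Benjamini-Hochberg sketch applied to the six condition p-values from Table 1. This is a standard formulation of the procedure, not the authors' own script, and the q threshold shown is illustrative.

```python
# Sketch of Benjamini-Hochberg FDR control over the six post-hoc
# condition p-values from Table 1 (A:B through C:D).
def benjamini_hochberg(pvals, q=0.05):
    """Return a parallel list of booleans: True where H0 is rejected."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:  # step-up threshold i/m * q
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

table1_p = [0.025, 0.032, 0.009, 0.042, 0.031, 0.012]  # A:B .. C:D
decisions = benjamini_hochberg(table1_p)  # all six survive q = .05
```

All six tests survive the correction, consistent with the report that the condition effect holds for every pair individually.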
<table-wrap id="pone.0119794.t001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0119794.t001</object-id>
<label>Table 1</label>
<caption>
<title>Results of six post-hoc mixed design ANOVAs.</title>
</caption>
<alternatives>
<graphic id="pone.0119794.t001g" xlink:href="pone.0119794.t001"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">Pair</th>
<th align="left" rowspan="1" colspan="1">F
<sub>Condition</sub>
</th>
<th align="left" rowspan="1" colspan="1">p</th>
<th align="left" rowspan="1" colspan="1">FDR</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>A:B</bold>
</td>
<td align="char" char="." rowspan="1" colspan="1">5.945</td>
<td align="char" char="." rowspan="1" colspan="1">0.025</td>
<td align="char" char="." rowspan="1" colspan="1">0.0017</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>A:C</bold>
</td>
<td align="char" char="." rowspan="1" colspan="1">5.389</td>
<td align="char" char="." rowspan="1" colspan="1">0.032</td>
<td align="char" char="." rowspan="1" colspan="1">0.0013</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>A:D</bold>
</td>
<td align="char" char="." rowspan="1" colspan="1">8.441</td>
<td align="char" char="." rowspan="1" colspan="1">0.009</td>
<td align="char" char="." rowspan="1" colspan="1">0.0019</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>B:C</bold>
</td>
<td align="char" char="." rowspan="1" colspan="1">4.796</td>
<td align="char" char="." rowspan="1" colspan="1">0.042</td>
<td align="char" char="." rowspan="1" colspan="1">0.0015</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>B:D</bold>
</td>
<td align="char" char="." rowspan="1" colspan="1">5.441</td>
<td align="char" char="." rowspan="1" colspan="1">0.031</td>
<td align="char" char="." rowspan="1" colspan="1">0.0016</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>C:D</bold>
</td>
<td align="char" char="." rowspan="1" colspan="1">7.854</td>
<td align="char" char="." rowspan="1" colspan="1">0.012</td>
<td align="char" char="." rowspan="1" colspan="1">0.0013</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<p>Results demonstrate that the belief that smaller items are denser than larger ones exists for all pairs of objects in our experiment. Correction for multiple comparisons via the False Discovery Rate method indicates that false discovery is highly improbable, at less than 0.2% for each of the six tests.</p>
</sec>
<sec id="sec009">
<title>Comparison of environmental and perceptual data</title>
<p>Finally, we sought to assess the degree of
<italic>quantitative</italic>
agreement between the environmental and perceptual (EW) data in order to determine the nature of the representation our participants were using. Data were pooled from all liftable artificial object environmental datasets (Datasets 1–3 in
<xref rid="pone.0119794.s002" ref-type="supplementary-material">S1</xref>
<xref rid="pone.0119794.s004" ref-type="supplementary-material">S3 Datasets</xref>
), and a full factorial combination set of all volumes of all artificial objects was created. We then selected the half of the full factorial combination set for which
<italic>V</italic>
<sub>
<italic>object1</italic>
</sub>
<italic>< V</italic>
<sub>
<italic>object2</italic>
</sub>
, e.g. cases where
<italic>object 1</italic>
:
<italic>"9V battery"</italic>
,
<italic>object 2</italic>
:
<italic>"orange"</italic>
and not
<italic>object 1</italic>
:
<italic>"orange"</italic>
,
<italic>object 2</italic>
:
<italic>"9V battery"</italic>
.</p>
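This pairing construction amounts to enumerating each unordered pair of objects exactly once and ordering each pair by volume. A minimal sketch, with hypothetical volumes (the object names echo the example above; the numbers are illustrative only):

```python
# Sketch of building the V_object1 < V_object2 half of the factorial set.
from itertools import combinations

volumes = {"9V battery": 20.0, "orange": 280.0, "mug": 350.0}  # cm^3, illustrative

# combinations() yields each unordered pair once; sorting by volume
# enforces (smaller object, larger object) ordering.
pairs = [tuple(sorted(p, key=volumes.get))
         for p in combinations(volumes, 2)]
```

For n pooled objects this yields n(n-1)/2 small-large pairs, each contributing one weight ratio and one density ratio to the fits below.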
<p>Next we computed the true
<italic>small/large</italic>
ratio for weight (
<italic>w</italic>
<sub>
<italic>S</italic>
</sub>
<italic>/w</italic>
<sub>
<italic>L</italic>
</sub>
) and density (
<italic>d</italic>
<sub>
<italic>S</italic>
</sub>
<italic>/d</italic>
<sub>
<italic>L</italic>
</sub>
) for each of these small-large object pairs and, given the identified power function relationships, computed their natural log transforms. Linear trends were then fitted to the log environmental weight ratio (WR) and density ratio (DR) data as a function of log volume ratio (VR) (WR = .613VR + .114, DR = -.387VR + .114) (
<xref rid="pone.0119794.g005" ref-type="fig">Fig. 5</xref>
). These linear trends were subsequently used to calculate the average log weight and density ratios for each of the volume ratios used in the perceptual experiment (Tables
<xref rid="pone.0119794.t002" ref-type="table">2</xref>
and
<xref rid="pone.0119794.t003" ref-type="table">3</xref>
).</p>
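Reading predictions off these fitted lines is a direct calculation. Because w = d·V, the two fits are mutually consistent (WR = DR + VR in log space), which the sketch below checks; for the A:B volume ratio of 1/3.375, the predicted weight ratio comes out near the Table 2 value of 0.5314.

```python
# Sketch of deriving predicted environmental ratios from the fitted lines:
# in log space, WR = .613*VR + .114 and DR = -.387*VR + .114,
# where VR = ln(V_S / V_L).
import math

def predicted_ratios(v_small, v_large):
    """Predicted small/large weight and density ratios for a volume pair."""
    vr = math.log(v_small / v_large)
    wr = 0.613 * vr + 0.114   # log weight ratio
    dr = -0.387 * vr + 0.114  # log density ratio; note wr == dr + vr
    return math.exp(wr), math.exp(dr)

# A:B pair: object B has 3.375 times object A's volume.
w_ratio, d_ratio = predicted_ratios(1.0, 3.375)  # w_ratio ≈ 0.532
```

The predicted density ratio exceeds 1, i.e., the smaller member of the pair is expected to be denser, matching the environmental "smaller is denser" regularity.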
<fig id="pone.0119794.g005" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0119794.g005</object-id>
<label>Fig 5</label>
<caption>
<title>Comparison between environmental object data and observers’ data.</title>
<p>Overlay of natural log-transformed environmental and observers’ expected (a) weight (EW) ratios and (b) density ratios as a function of volume ratios for the three object types shows agreement between environmental data and participants’ predictions of objects’ weight (and thus density) relationships. Error bars denote standard deviation across participants’ responses.</p>
</caption>
<graphic xlink:href="pone.0119794.g005"></graphic>
</fig>
<table-wrap id="pone.0119794.t002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0119794.t002</object-id>
<label>Table 2</label>
<caption>
<title>Predicted weight ratios derived from line of best fit to environmental object data for the six volume ratios presented experimentally.</title>
</caption>
<alternatives>
<graphic id="pone.0119794.t002g" xlink:href="pone.0119794.t002"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">Pair</th>
<th align="left" rowspan="1" colspan="1">Predicted environmental
<italic>w</italic>
<sub>
<italic>S</italic>
</sub>
<italic>/w</italic>
<sub>
<italic>L</italic>
</sub>
</th>
<th align="left" rowspan="1" colspan="1">EW ratios (Cubes)</th>
<th align="left" rowspan="1" colspan="1">EW ratios (Blobs)</th>
<th align="left" rowspan="1" colspan="1">EW ratios (Greebles)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>A:B</bold>
</td>
<td align="char" char="." rowspan="1" colspan="1">0.5314</td>
<td align="char" char="." rowspan="1" colspan="1">0.5325</td>
<td align="char" char="." rowspan="1" colspan="1">0.4980</td>
<td align="char" char="." rowspan="1" colspan="1">0.4654</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>A:C</bold>
</td>
<td align="char" char="." rowspan="1" colspan="1">0.3129</td>
<td align="char" char="." rowspan="1" colspan="1">0.2867</td>
<td align="char" char="." rowspan="1" colspan="1">0.2621</td>
<td align="char" char="." rowspan="1" colspan="1">0.2520</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>A:D</bold>
</td>
<td align="char" char="." rowspan="1" colspan="1">0.1484</td>
<td align="char" char="." rowspan="1" colspan="1">0.1282</td>
<td align="char" char="." rowspan="1" colspan="1">0.1337</td>
<td align="char" char="." rowspan="1" colspan="1">0.1302</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>B:C</bold>
</td>
<td align="char" char="." rowspan="1" colspan="1">0.6600</td>
<td align="char" char="." rowspan="1" colspan="1">0.6027</td>
<td align="char" char="." rowspan="1" colspan="1">0.5962</td>
<td align="char" char="." rowspan="1" colspan="1">0.5769</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>B:D</bold>
</td>
<td align="char" char="." rowspan="1" colspan="1">0.3129</td>
<td align="char" char="." rowspan="1" colspan="1">0.2895</td>
<td align="char" char="." rowspan="1" colspan="1">0.2564</td>
<td align="char" char="." rowspan="1" colspan="1">0.2402</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>C:D</bold>
</td>
<td align="char" char="." rowspan="1" colspan="1">0.5314</td>
<td align="char" char="." rowspan="1" colspan="1">0.4816</td>
<td align="char" char="." rowspan="1" colspan="1">0.4478</td>
<td align="char" char="." rowspan="1" colspan="1">0.4992</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<table-wrap id="pone.0119794.t003" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0119794.t003</object-id>
<label>Table 3</label>
<caption>
<title>Predicted density ratios derived as in
<xref rid="pone.0119794.t002" ref-type="table">Table 2</xref>
.</title>
</caption>
<alternatives>
<graphic id="pone.0119794.t003g" xlink:href="pone.0119794.t003"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">Pair</th>
<th align="left" rowspan="1" colspan="1">Predicted environmental
<italic>d</italic>
<sub>
<italic>S</italic>
</sub>
<italic>/d</italic>
<sub>
<italic>L</italic>
</sub>
</th>
<th align="left" rowspan="1" colspan="1">
<italic>d</italic>
<sub>
<italic>S</italic>
</sub>
<italic>/d</italic>
<sub>
<italic>L</italic>
</sub>
(Cubes)</th>
<th align="left" rowspan="1" colspan="1">
<italic>d</italic>
<sub>
<italic>S</italic>
</sub>
<italic>/d</italic>
<sub>
<italic>L</italic>
</sub>
(Blobs)</th>
<th align="left" rowspan="1" colspan="1">
<italic>d</italic>
<sub>
<italic>S</italic>
</sub>
<italic>/d</italic>
<sub>
<italic>L</italic>
</sub>
(Greebles)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>A:B</bold>
</td>
<td align="char" char="." rowspan="1" colspan="1">1.7933</td>
<td align="char" char="." rowspan="1" colspan="1">1.7972</td>
<td align="char" char="." rowspan="1" colspan="1">1.6809</td>
<td align="char" char="." rowspan="1" colspan="1">1.5706</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>A:C</bold>
</td>
<td align="char" char="." rowspan="1" colspan="1">2.5036</td>
<td align="char" char="." rowspan="1" colspan="1">2.2938</td>
<td align="char" char="." rowspan="1" colspan="1">2.0967</td>
<td align="char" char="." rowspan="1" colspan="1">2.0157</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>A:D</bold>
</td>
<td align="char" char="." rowspan="1" colspan="1">4.0068</td>
<td align="char" char="." rowspan="1" colspan="1">3.4620</td>
<td align="char" char="." rowspan="1" colspan="1">3.6092</td>
<td align="char" char="." rowspan="1" colspan="1">3.5146</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>B:C</bold>
</td>
<td align="char" char="." rowspan="1" colspan="1">1.5643</td>
<td align="char" char="." rowspan="1" colspan="1">1.4286</td>
<td align="char" char="." rowspan="1" colspan="1">1.4133</td>
<td align="char" char="." rowspan="1" colspan="1">1.3675</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>B:D</bold>
</td>
<td align="char" char="." rowspan="1" colspan="1">2.5036</td>
<td align="char" char="." rowspan="1" colspan="1">2.3163</td>
<td align="char" char="." rowspan="1" colspan="1">2.0514</td>
<td align="char" char="." rowspan="1" colspan="1">1.9218</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>C:D</bold>
</td>
<td align="char" char="." rowspan="1" colspan="1">1.7933</td>
<td align="char" char="." rowspan="1" colspan="1">1.6254</td>
<td align="char" char="." rowspan="1" colspan="1">1.5113</td>
<td align="char" char="." rowspan="1" colspan="1">1.6849</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<p>Finally, to compare these ratios with perceptual data, we calculated the expected density ratio for each EW data-point by again using the relationship
<italic>d = w/V</italic>
:
<disp-formula id="pone.0119794.e001">
<alternatives>
<graphic xlink:href="pone.0119794.e001.jpg" id="pone.0119794.e001g" position="anchor" mimetype="image" orientation="portrait"></graphic>
<mml:math id="M1">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mi>S</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mi>L</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
<mml:mo>*</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mfrac bevelled="true">
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mi>S</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mrow>
<mml:mfrac bevelled="true">
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mi>L</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mfrac>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mi>S</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mi>L</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
<mml:mo>*</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mi>L</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>V</mml:mi>
<mml:mi>S</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>d</mml:mi>
<mml:mi>S</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>d</mml:mi>
<mml:mi>L</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</alternatives>
<label>(1)</label>
</disp-formula>
</p>
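The conversion in Eq. 1 can be sketched in a few lines of Python. This is an illustrative sketch only (the volumes and weight ratio below are placeholder values, not measurements from the study), showing how an expected weight ratio and the two true volumes combine into an implied density ratio via d = w/V.

```python
# Sketch of Eq. 1: given an expected weight ratio w_S/w_L and the true
# volumes V_S and V_L, the implied density ratio is
#   d_S/d_L = (w_S/w_L) * (V_L/V_S).
# The numeric inputs below are illustrative placeholders.

def density_ratio(weight_ratio, v_small, v_large):
    """Implied density ratio (smaller/larger) from a weight ratio and volumes."""
    return weight_ratio * (v_large / v_small)

# Example: a 1:2 volume ratio with an expected weight ratio of 0.53
# implies the smaller object is expected to be ~1.06 times as dense.
dr = density_ratio(0.53, v_small=1.0, v_large=2.0)
```

A density ratio above 1 here means the smaller object is expected to be denser, which is the pattern reported in Tables 2 and 3.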
<p>As the volume measurements for the environmental objects database are true volumes, true volume (as opposed to perceived volume) was used for these calculations as well. To compare to the environmental data, we computed the natural log transform of the resulting weight and density ratios. Results of this set of analyses are shown in Tables
<xref rid="pone.0119794.t002" ref-type="table">2</xref>
and
<xref rid="pone.0119794.t003" ref-type="table">3</xref>
, transformed back into ratio space for ease of interpretation. Surprisingly, predicted weight ratios closely mirror true weight ratios in the environment (
<xref rid="pone.0119794.g005" ref-type="fig">Fig. 5a</xref>
), indicating that the amount by which observers expected a smaller man-made object to be denser than a larger one closely mirrored the average true density asymmetry for a similarly-sized pair of man-made objects in the environment (
<xref rid="pone.0119794.g005" ref-type="fig">Fig. 5b</xref>
). To confirm visual analysis, we computed the linear trends for the weight ratios (WR) and density ratios (DR) predicted from the perceptual experiment. This led to WR
<sub>Blobs</sub>
= .622VR − .007, WR
<sub>Greebles</sub>
= .638VR + .002, WR
<sub>Cubes</sub>
= .646VR + .089, DR
<sub>Blobs</sub>
= -.378VR − .007, DR
<sub>Greebles</sub>
= -.362VR + .002, and DR
<sub>Cubes</sub>
= -.354VR + .089, all of which closely match the calculated lines of best fit for the environmental object data. These findings suggest that the human nervous system possesses, and can use, the power function relationship between size and density to generate accurate estimates of novel, man-made objects’ weight relationships on the basis of visual size alone, even when other visual cues (such as differential material) and memory are unavailable.</p>
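The linear-trend computation described above (fitting log weight ratios as a linear function of log volume ratios) can be sketched as follows. The data points are hypothetical, not the study's measurements; only the fitting procedure is illustrated.

```python
# Sketch of the ratio-regression step: fit ln(weight ratio) as a linear
# function of ln(volume ratio) by ordinary least squares.
# The example ratios below are illustrative placeholders.
import math

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Log-transformed volume and weight ratios for a hypothetical object set
vr = [math.log(r) for r in (0.125, 0.25, 0.5)]
wr = [math.log(r) for r in (0.26, 0.41, 0.64)]
slope, intercept = fit_line(vr, wr)
# A slope below 1 means weight grows more slowly than volume in log-log
# space, i.e., smaller objects are denser (the density slope is slope - 1).
```

With these placeholder ratios the fitted slope falls well below 1, the qualitative signature of the "smaller is denser" regularity.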
</sec>
</sec>
<sec sec-type="conclusions" id="sec010">
<title>Discussion</title>
<p>In this study, we report a new environmental regularity: The density of liftable artificial objects follows a power function of volume, i.e., weight does not grow linearly with volume for objects that are designed to be liftable and manipulable. Furthermore, this statistical regularity does not appear to exist for natural objects that are liftable; a survey of larger artificial objects, such as furniture and vehicles, that are not designed to be liftable also did not show this relationship. These findings suggest that physiological constraints on humans’ lifting abilities (a maximum comfortably liftable size and a maximum comfortably liftable weight exist, and as size increases the maximum liftable weight decreases) have resulted in a set of everyday man-made objects that follows this unique power function relating volume and density.</p>
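The power-function regularity described above can be made concrete with a short sketch. The exponent and scale constant here are illustrative placeholders, not the values fitted to the environmental database; the sketch only shows why a sublinear weight-volume power law implies that smaller objects are denser.

```python
# Sketch of the reported regularity: if weight follows a power function of
# volume, w = k * V**a with a < 1, then density d = w/V = k * V**(a - 1)
# falls as volume grows, so smaller objects are denser.
# k and a are illustrative placeholders, not fitted values from the study.

def expected_weight(volume, k=1.0, a=0.65):
    return k * volume ** a

def expected_density(volume, k=1.0, a=0.65):
    return expected_weight(volume, k, a) / volume

# Doubling volume less than doubles expected weight, so the smaller of
# two objects is expected to be the denser one.
```

By contrast, if weight grew linearly with volume (a = 1), expected density would be constant across sizes, which is what the natural and unliftable object sets resembled.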
<p>This environmental regularity appears to be encoded in the human sensorimotor system and used by the nervous system to predict novel objects’ weight relationships at a perceptually-available level. When shown pairs of novel objects and given no informative cues to their weight relationship other than visual appearance, and no previous experience lifting any similar objects in the experimental setting, participants consistently and systematically provided weight estimates indicating that they believed smaller objects to be denser than larger objects, over and above any mis-estimations of volume. This effect was strong for the Blob and Cube object sets, but somewhat weaker for the Greeble object set. This difference likely occurred because either (a) the Greebles were smaller in volume, or (b) the Greebles possessed unique geometry (e.g., more cylindrical, protruding elements) in comparison to the other sets. It is also possible that the Greebles in part induced use of a prior for natural objects (which possess no regular size-density relationship), given that Greebles are designed to look somewhat animate. Indeed, several participants reported that the Greebles were “cute.” Despite these possibilities, the fact remains that three of the six Greeble pairs induced a significant density bias, and the effect in the remaining three was borderline significant.</p>
<p>Most strikingly, participants’ conscious estimates of a given two experimental objects’ weight relationships
<italic>quantitatively</italic>
match the average weight relationship held by two objects of similar volume relationship sampled from everyday environments: If a pair of objects in the environment displayed a density ratio of 2.5 on average, observers’ reports of expected weight ratio also drew upon an expected density ratio of about the same magnitude, rather than a simple qualitative relationship such as, “The smaller item should be somewhat denser than the larger.” These findings suggest that the human brain has learned quantitative aspects of the nonlinear relationship between size and weight for everyday objects, and can abstract that relationship in the absence of informative cues (e.g. to material) to a set of nondescript, novel objects that are to a certain extent consciously available. Although it has previously been demonstrated that the motor system possesses more quantitative information about a novel object’s weight, it has also been repeatedly found that the motor system and perceptual system are dissociable when it comes to lifting and manipulating objects: Even in the size-weight illusion, motor forces scale very quickly to correctly anticipate the weight of novel objects, and yet reports of weight expectations and weight perception do not [
<xref rid="pone.0119794.ref032" ref-type="bibr">32</xref>
<xref rid="pone.0119794.ref034" ref-type="bibr">34</xref>
]. Interestingly, when we informally yet explicitly asked participants whether they believed the two objects to possess the same density, many appeared confused by the question: Some said the two objects were equally likely to have equal or unequal density (i.e., 33% likely to have equal density, 33% likely the smaller was denser, 33% likely the larger was denser), while others offered rationalizations such as, “They appear to be made out of the same material, so they probably have the same density.” These comments indicate that although this quantitative “smaller is denser” information is to some extent consciously accessible, it nevertheless remains partly implicit.</p>
<p>These results first demonstrate that humans’ sensitivity to and use of environmental statistics can be extended to include joint distributions of properties, such as that linking size and weight. However, unlike many of the previously-reported environmental statistic sensitivities, our results additionally demonstrate that the sensorimotor system’s knowledge of the size-weight distribution (i.e., the distribution of density as a function of size) is represented
<italic>quantitatively</italic>
as well as qualitatively. Previous studies have demonstrated qualitative statistical sensitivities, or rules such as “slower and smoother motion is more likely” [
<xref rid="pone.0119794.ref012" ref-type="bibr">12</xref>
,
<xref rid="pone.0119794.ref013" ref-type="bibr">13</xref>
] or “more connected in space and time is more likely” [
<xref rid="pone.0119794.ref035" ref-type="bibr">35</xref>
,
<xref rid="pone.0119794.ref036" ref-type="bibr">36</xref>
]. In fact, humans’ qualitative acquisition of an experimentally-manipulated inversion of the relationship between size and weight has been demonstrated in a statistical learning study [
<xref rid="pone.0119794.ref006" ref-type="bibr">6</xref>
]. The authors presented observers with geometric stimuli uniform in material and color but varying in size, which had been constructed such that smaller objects were heavier than larger objects, in opposition to the typical direct relationship between size and weight. Results demonstrated that, with training, the motor forces subjects produced came to reflect knowledge of this relationship: Eventually, subjects applied more grip and load force to smaller objects than to larger ones, indicating that they expected the smaller objects to be heavier.</p>
<p>It is important to note, however, that perceptual expectations of weight were not directly collected in this study, instead being inferred from reports of heaviness perception in the size-weight illusion. Because it is not yet settled how exactly heaviness perception depends on perceptual expectations of weight [
<xref rid="pone.0119794.ref005" ref-type="bibr">5</xref>
,
<xref rid="pone.0119794.ref032" ref-type="bibr">32</xref>
,
<xref rid="pone.0119794.ref037" ref-type="bibr">37</xref>
,
<xref rid="pone.0119794.ref038" ref-type="bibr">38</xref>
], it is difficult to draw specific conclusions about how exactly these perceptual expectations changed as a result of training with small-heavy and large-light objects. Further, although this study suggested that the qualitative relationship between size and weight could be learned through experience, the learning was based on a set of objects that was uniform in shape (within a set), color, and material, varying only in size and weight; it therefore remained unclear whether humans might be capable of such statistical learning in natural environments, which present an extreme diversity of stimulus types (e.g., real-world objects). Further, and critically, the motoric force metric used in that study cannot speak to whether subjects learned only the qualitative inverted relationship between size and weight, or whether they learned a more quantitative representation: Recent evidence suggests that grip force, load force, and their first derivatives may reflect not only expectation of heaviness but also uncertainty (i.e., lack of confidence) about one’s expectation [
<xref rid="pone.0119794.ref039" ref-type="bibr">39</xref>
]. Additionally, while it has been shown that such forces scale directly with anticipated weight (including the integration of visual size cues in the anticipation of object weight) [
<xref rid="pone.0119794.ref004" ref-type="bibr">4</xref>
,
<xref rid="pone.0119794.ref006" ref-type="bibr">6</xref>
,
<xref rid="pone.0119794.ref009" ref-type="bibr">9</xref>
,
<xref rid="pone.0119794.ref040" ref-type="bibr">40</xref>
<xref rid="pone.0119794.ref042" ref-type="bibr">42</xref>
], the precise quantitative relationship between applied force and weight expectation (i.e., how many Newtons or Newtons/second reflect an expectation of how many grams) remains unclear. Verbal report thus serves as a purer measure of quantitative, perceptual expectations of weight relationships, and so was selected as the response measure for this study.</p>
<p>Thus, in contrast to these previous reports of qualitative learning, the current findings show that rather than simply relying on a heuristic-like rule that “smaller objects are typically denser” in the environment, or “objects in this setting have been manipulated such that smaller objects are heavier than larger objects” [
<xref rid="pone.0119794.ref006" ref-type="bibr">6</xref>
], the nervous system appears to encode the
<italic>precise shape</italic>
of the nonlinear function relating an object’s size to its typical weight, i.e., that objects become denser more quickly the smaller they are, following a power function of volume, and these expectations are available to the perceptual system as well as the motor system. This suggests an impressive degree of statistical learning capacity: the nervous system has had to extract the non-linear relationship between size and weight from a large set of environmental objects that vary in nearly every conceivable dimension, including shape, color, material (homogeneous and heterogeneous), size, weight, and density, recovering the statistical relationship buried in this enormously noisy and variable data set to a remarkable degree of quantitative precision. To our knowledge, this is the first demonstration of quantitative encoding and usage of any joint environmental statistic. The current findings thus inform the field of visuohaptic and visuomotor integration: The predictive step in forward models of motor control is crucial to adaptive and precise motor behavior [
<xref rid="pone.0119794.ref001" ref-type="bibr">1</xref>
<xref rid="pone.0119794.ref003" ref-type="bibr">3</xref>
].</p>
<p>These findings also have interesting implications for studies of heaviness perception and in particular the size-weight illusion (SWI), in which the smaller of two equally-weighted and similar-looking objects feels heavier than the larger [
<xref rid="pone.0119794.ref043" ref-type="bibr">43</xref>
] despite no asymmetry in motor force production [
<xref rid="pone.0119794.ref033" ref-type="bibr">33</xref>
,
<xref rid="pone.0119794.ref034" ref-type="bibr">34</xref>
]. Evidence suggests that visual and haptic information is combined with prior expectations when lifting novel objects to produce the sensation of heaviness [
<xref rid="pone.0119794.ref007" ref-type="bibr">7</xref>
<xref rid="pone.0119794.ref009" ref-type="bibr">9</xref>
,
<xref rid="pone.0119794.ref044" ref-type="bibr">44</xref>
]. To date, studies of the SWI assume, either implicitly or explicitly, that observers expect that differently-sized objects appearing to be made out of the same material will possess the same density [
<xref rid="pone.0119794.ref005" ref-type="bibr">5</xref>
,
<xref rid="pone.0119794.ref007" ref-type="bibr">7</xref>
,
<xref rid="pone.0119794.ref033" ref-type="bibr">33</xref>
]. Our findings demonstrate that this assumption is flawed, since density is not independent of volume for liftable, man-made objects and the nervous system is sensitive to this statistical regularity. It should be noted, however, that even if observers believe smaller objects are denser, they still expect larger ones to be heavier, albeit not by enough to match the size discrepancy; thus, the source of the SWI remains elusive, but it is evident that more investigation is required (a recent review sums up current theories of the SWI and other weight illusions [
<xref rid="pone.0119794.ref037" ref-type="bibr">37</xref>
]).</p>
<p>Of course, the contribution of density variation itself to heaviness perception has been studied extensively. Researchers have consistently noted that denser objects are perceived as heavier, and that perceived heaviness is a function of an object’s size, shape, and density [
<xref rid="pone.0119794.ref025" ref-type="bibr">25</xref>
,
<xref rid="pone.0119794.ref045" ref-type="bibr">45</xref>
<xref rid="pone.0119794.ref048" ref-type="bibr">48</xref>
]. Given the importance of physical density in heaviness perception, it is therefore surprising that prediction of weight based on
<italic>predicted</italic>
density given an object’s
<italic>size</italic>
(rather than material) has been largely neglected in studies of heaviness perception.</p>
<p>Our results show for the first time that (a) for man-made, liftable objects, density and volume are not independent in the everyday environment; and (b) the human nervous system can represent this complex relationship and abstract from it to generate accurate quantitative expectations about novel objects’ weight relationships. Similarly incorporating quantitative prior knowledge may improve estimates of object weight in artificial systems as well, providing an environmentally-based foundation for the predictive step in forward internal models of motor control [
<xref rid="pone.0119794.ref001" ref-type="bibr">1</xref>
<xref rid="pone.0119794.ref003" ref-type="bibr">3</xref>
]. Finally, knowledge of these statistics is available to the perceptual system, yet was likely acquired through experience lifting and manipulating objects. Thus, not only can perception influence action [
<xref rid="pone.0119794.ref049" ref-type="bibr">49</xref>
], but past actions may influence perception as well.</p>
</sec>
<sec sec-type="supplementary-material" id="sec011">
<title>Supporting Information</title>
<supplementary-material content-type="local-data" id="pone.0119794.s001">
<label>S1 Fig</label>
<caption>
<title>Non-log-transformed data.</title>
<p>(a) 3-D scanned liftable artificial objects, (b) 3-D scanned natural objects, (c) liftable man-made objects collected by online survey, and (d) artificial but
<italic>unliftable</italic>
objects collected via online survey.</p>
<p>(TIF)</p>
</caption>
<media xlink:href="pone.0119794.s001.tif">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0119794.s002">
<label>S1 Dataset</label>
<caption>
<title>Tape measure database (Dataset 1 in
<xref rid="pone.0119794.s002" ref-type="supplementary-material">S1 Dataset</xref>
).</title>
<p>(XLSX)</p>
</caption>
<media xlink:href="pone.0119794.s002.xlsx">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0119794.s003">
<label>S2 Dataset</label>
<caption>
<title>Online object database (Datasets 2 and 5 in
<xref rid="pone.0119794.s003" ref-type="supplementary-material">S2 Dataset</xref>
).</title>
<p>(XLSX)</p>
</caption>
<media xlink:href="pone.0119794.s003.xlsx">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0119794.s004">
<label>S3 Dataset</label>
<caption>
<title>3-D scanned object database (Datasets 3 and 4 in
<xref rid="pone.0119794.s004" ref-type="supplementary-material">S3 Dataset</xref>
).</title>
<p>(XLSX)</p>
</caption>
<media xlink:href="pone.0119794.s004.xlsx">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0119794.s005">
<label>S4 Dataset</label>
<caption>
<title>Perceptual experiment data.</title>
<p>(SAV)</p>
</caption>
<media xlink:href="pone.0119794.s005.sav">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
</sec>
</body>
<back>
<ack>
<p>We would like to thank Dean Buonomano, Hakwan Lau, Hongjing Lu, Aaron Seitz, David Rosenbaum, Stefano Soatto, and Angela Yu for helpful discussions.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="pone.0119794.ref001">
<label>1</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kawato</surname>
<given-names>M</given-names>
</name>
.
<article-title>Internal models for motor control and trajectory planning</article-title>
.
<source>Curr Opin Neurobiol</source>
<year>1999</year>
;
<volume>9</volume>
:
<fpage>718</fpage>
<lpage>727</lpage>
.
<pub-id pub-id-type="pmid">10607637</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref002">
<label>2</label>
<mixed-citation publication-type="journal">
<name>
<surname>Miall</surname>
<given-names>RC</given-names>
</name>
,
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
.
<article-title>Forward Models for Physiological Motor Control</article-title>
.
<source>Neural Networks</source>
<year>1996</year>
;
<volume>9</volume>
:
<fpage>1265</fpage>
<lpage>1279</lpage>
.
<pub-id pub-id-type="pmid">12662535</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref003">
<label>3</label>
<mixed-citation publication-type="journal">
<name>
<surname>Scott</surname>
<given-names>SH</given-names>
</name>
.
<article-title>Optimal feedback control and the neural basis of volitional motor control</article-title>
.
<source>Nat Rev Neurosci</source>
<year>2004</year>
;
<volume>5</volume>
:
<fpage>532</fpage>
<lpage>546</lpage>
.
<pub-id pub-id-type="pmid">15208695</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref004">
<label>4</label>
<mixed-citation publication-type="journal">
<name>
<surname>Buckingham</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Cant</surname>
<given-names>JS</given-names>
</name>
,
<name>
<surname>Goodale</surname>
<given-names>MA</given-names>
</name>
.
<article-title>Living in a material world: how visual cues to material properties affect the way that we lift objects and perceive their weight</article-title>
.
<source>J Neurophysiol</source>
<year>2009</year>
;
<volume>102</volume>
:
<fpage>3111</fpage>
<lpage>3118</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1152/jn.00515.2009">10.1152/jn.00515.2009</ext-link>
</comment>
<pub-id pub-id-type="pmid">19793879</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref005">
<label>5</label>
<mixed-citation publication-type="journal">
<name>
<surname>Buckingham</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Goodale</surname>
<given-names>MA</given-names>
</name>
.
<article-title>Size Matters: A Single Representation Underlies Our Perceptions of Heaviness in the Size-Weight Illusion</article-title>
.
<source>PLoS One</source>
<year>2013</year>
;
<volume>8</volume>
:
<fpage>e54709</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1371/journal.pone.0054709">10.1371/journal.pone.0054709</ext-link>
</comment>
<pub-id pub-id-type="pmid">23372759</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref006">
<label>6</label>
<mixed-citation publication-type="journal">
<name>
<surname>Flanagan</surname>
<given-names>JR</given-names>
</name>
,
<name>
<surname>Bittner</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Johansson</surname>
<given-names>RS</given-names>
</name>
.
<article-title>Experience can change distinct size-weight priors engaged in lifting objects and judging their weights</article-title>
.
<source>Curr Biol</source>
<year>2008</year>
;
<volume>18</volume>
:
<fpage>1742</fpage>
<lpage>1747</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.cub.2008.09.042">10.1016/j.cub.2008.09.042</ext-link>
</comment>
<pub-id pub-id-type="pmid">19026545</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref007">
<label>7</label>
<mixed-citation publication-type="journal">
<name>
<surname>Flanagan</surname>
<given-names>JR</given-names>
</name>
,
<name>
<surname>King</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
,
<name>
<surname>Johansson</surname>
<given-names>RS</given-names>
</name>
.
<article-title>Sensorimotor prediction and memory in object manipulation</article-title>
.
<source>Can J Exp Psychol</source>
<year>2001</year>
;
<volume>55</volume>
:
<fpage>87</fpage>
<lpage>95</lpage>
.
<pub-id pub-id-type="pmid">11433790</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref008">
<label>8</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gordon</surname>
<given-names>AM</given-names>
</name>
,
<name>
<surname>Forssberg</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Johansson</surname>
<given-names>RS</given-names>
</name>
,
<name>
<surname>Westling</surname>
<given-names>G</given-names>
</name>
.
<article-title>Integration of sensory information during the programming of precision grip: comments on the contributions of size cues</article-title>
.
<source>Exp Brain Res</source>
<year>1991</year>
;
<volume>85</volume>
:
<fpage>226</fpage>
<lpage>229</lpage>
.
<pub-id pub-id-type="pmid">1884761</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref009">
<label>9</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gordon</surname>
<given-names>AM</given-names>
</name>
,
<name>
<surname>Forssberg</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Johansson</surname>
<given-names>RS</given-names>
</name>
,
<name>
<surname>Westling</surname>
<given-names>G</given-names>
</name>
.
<article-title>Visual size cues in the programming of manipulative forces during precision grip</article-title>
.
<source>Exp Brain Res</source>
<year>1991</year>
;
<volume>83</volume>
:
<fpage>477</fpage>
<lpage>482</lpage>
.
<pub-id pub-id-type="pmid">2026190</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref010">
<label>10</label>
<mixed-citation publication-type="journal">
<name>
<surname>Mon-Williams</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Murray</surname>
<given-names>AH</given-names>
</name>
.
<article-title>The size of the visual size cue used for programming manipulative forces during precision grip</article-title>
.
<source>Exp Brain Res</source>
<year>2000</year>
;
<volume>135</volume>
:
<fpage>405</fpage>
<lpage>410</lpage>
.
<pub-id pub-id-type="pmid">11146818</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref011">
<label>11</label>
<mixed-citation publication-type="journal">
<name>
<surname>Adams</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>Graf</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
.
<article-title>Experience can change the “light-from-above” prior</article-title>
.
<source>Nat Neurosci</source>
<year>2004</year>
;
<volume>7</volume>
:
<fpage>1057</fpage>
<lpage>1058</lpage>
.
<pub-id pub-id-type="pmid">15361877</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref012">
<label>12</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hedges</surname>
<given-names>JH</given-names>
</name>
,
<name>
<surname>Stocker</surname>
<given-names>AA</given-names>
</name>
,
<name>
<surname>Simoncelli</surname>
<given-names>EP</given-names>
</name>
.
<article-title>Optimal inference explains the perceptual coherence of visual motion stimuli</article-title>
.
<source>J Vis</source>
<year>2011</year>
;
<volume>11</volume>
:
<fpage>1</fpage>
<lpage>16</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0119794.ref013">
<label>13</label>
<mixed-citation publication-type="journal">
<name>
<surname>Weiss</surname>
<given-names>Y</given-names>
</name>
,
<name>
<surname>Simoncelli</surname>
<given-names>EP</given-names>
</name>
,
<name>
<surname>Adelson</surname>
<given-names>EH</given-names>
</name>
.
<article-title>Motion illusions as optimal percepts</article-title>
.
<source>Nat Neurosci</source>
<year>2002</year>
;
<volume>5</volume>
:
<fpage>598</fpage>
<lpage>604</lpage>
.
<pub-id pub-id-type="pmid">12021763</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref014">
<label>14</label>
<mixed-citation publication-type="journal">
<name>
<surname>Girshick</surname>
<given-names>AR</given-names>
</name>
,
<name>
<surname>Landy</surname>
<given-names>MS</given-names>
</name>
,
<name>
<surname>Simoncelli</surname>
<given-names>EP</given-names>
</name>
.
<article-title>Cardinal rules: visual orientation perception reflects knowledge of environmental statistics</article-title>
.
<source>Nat Neurosci</source>
<year>2011</year>
;
<volume>14</volume>
:
<fpage>926</fpage>
<lpage>932</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/nn.2831">10.1038/nn.2831</ext-link>
</comment>
<pub-id pub-id-type="pmid">21642976</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref015">
<label>15</label>
<mixed-citation publication-type="journal">
<name>
<surname>Goldreich</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Peterson</surname>
<given-names>MA</given-names>
</name>
.
<article-title>A Bayesian observer replicates convexity context effects in figure-ground perception</article-title>
.
<source>Seeing Perceiving</source>
<year>2011</year>
;
<volume>25</volume>
:
<fpage>365</fpage>
<lpage>395</lpage>
.
<pub-id pub-id-type="pmid">22564398</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref016">
<label>16</label>
<mixed-citation publication-type="journal">
<name>
<surname>Aslin</surname>
<given-names>RN</given-names>
</name>
,
<name>
<surname>Newport</surname>
<given-names>EL</given-names>
</name>
.
<article-title>Statistical learning: from acquiring specific items to forming general rules</article-title>
.
<source>Curr Dir Psychol Sci</source>
<year>2012</year>
;
<volume>21</volume>
:
<fpage>170</fpage>
<lpage>176</lpage>
.
<pub-id pub-id-type="pmid">24000273</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref017">
<label>17</label>
<mixed-citation publication-type="journal">
<name>
<surname>Fiser</surname>
<given-names>J</given-names>
</name>
.
<article-title>Perceptual learning and representational learning in humans and animals</article-title>
.
<source>Learn Behav</source>
<year>2009</year>
;
<volume>37</volume>
:
<fpage>141</fpage>
<lpage>153</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3758/LB.37.2.141">10.3758/LB.37.2.141</ext-link>
</comment>
<pub-id pub-id-type="pmid">19380891</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref018">
<label>18</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hunt</surname>
<given-names>RH</given-names>
</name>
,
<name>
<surname>Aslin</surname>
<given-names>RN</given-names>
</name>
.
<article-title>Statistical learning in a serial reaction time task: access to separable statistical cues by individual learners</article-title>
.
<source>J Exp Psychol Gen</source>
<year>2001</year>
;
<volume>130</volume>
:
<fpage>658</fpage>
<pub-id pub-id-type="pmid">11757874</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref019">
<label>19</label>
<mixed-citation publication-type="journal">
<name>
<surname>Mitchel</surname>
<given-names>AD</given-names>
</name>
,
<name>
<surname>Weiss</surname>
<given-names>DJ</given-names>
</name>
.
<article-title>Learning across senses: cross-modal effects in multisensory statistical learning</article-title>
.
<source>J Exp Psychol Learn Mem Cogn</source>
<year>2011</year>
;
<volume>37</volume>
:
<fpage>1081</fpage>
<lpage>1091</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1037/a0023700">10.1037/a0023700</ext-link>
</comment>
<pub-id pub-id-type="pmid">21574745</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref020">
<label>20</label>
<mixed-citation publication-type="journal">
<name>
<surname>Seitz</surname>
<given-names>AR</given-names>
</name>
,
<name>
<surname>Kim</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>van Wassenhove</surname>
<given-names>V</given-names>
</name>
,
<name>
<surname>Shams</surname>
<given-names>L</given-names>
</name>
.
<article-title>Simultaneous and independent acquisition of multisensory and unisensory associations</article-title>
.
<source>Perception</source>
<year>2007</year>
;
<volume>36</volume>
:
<fpage>1445</fpage>
<lpage>1453</lpage>
.
<pub-id pub-id-type="pmid">18265827</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref021">
<label>21</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gekas</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Chalk</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Seitz</surname>
<given-names>AR</given-names>
</name>
,
<name>
<surname>Seriès</surname>
<given-names>P</given-names>
</name>
.
<article-title>Complexity and specificity of experimentally induced expectations in motion perception</article-title>
.
<source>J Vis</source>
<year>2013</year>
;
<volume>13</volume>
:
<fpage>1</fpage>
<lpage>18</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1167/13.14.1">10.1167/13.14.1</ext-link>
</comment>
<pub-id pub-id-type="pmid">24297775</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref022">
<label>22</label>
<mixed-citation publication-type="journal">
<name>
<surname>Seriès</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Seitz</surname>
<given-names>AR</given-names>
</name>
.
<article-title>Learning what to expect (in visual perception)</article-title>
.
<source>Front Hum Neurosci</source>
<year>2013</year>
;
<volume>7</volume>
:
<fpage>1</fpage>
<lpage>14</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0119794.ref023">
<label>23</label>
<mixed-citation publication-type="other">
<name>
<surname>Balzer</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Peters</surname>
<given-names>MAK</given-names>
</name>
,
<name>
<surname>Soatto</surname>
<given-names>S</given-names>
</name>
.
<article-title>Volumetric Reconstruction Applied to Perceptual Studies of Size and Weight</article-title>
.
<source>IEEE Workshop on Applications of Computer Vision (WACV)</source>
<year>2013</year>
. Available: arXiv:1311.2642. Accessed 1 December 2013.</mixed-citation>
</ref>
<ref id="pone.0119794.ref024">
<label>24</label>
<mixed-citation publication-type="other">
<name>
<surname>Soatto</surname>
<given-names>S</given-names>
</name>
.
<source>Steps towards a theory of visual information: Active perception, signal-to-symbol conversion and the interplay between sensing and control</source>
.
<year>2011</year>
. Available: arXiv:1110.2053v3. Accessed 12 February 2013.</mixed-citation>
</ref>
<ref id="pone.0119794.ref025">
<label>25</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ross</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>di Lollo</surname>
<given-names>V</given-names>
</name>
.
<article-title>Differences in heaviness in relation to density and weight</article-title>
.
<source>Percept Psychophys</source>
<year>1970</year>
;
<volume>7</volume>
:
<fpage>161</fpage>
<lpage>162</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0119794.ref026">
<label>26</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ross</surname>
<given-names>HE</given-names>
</name>
,
<name>
<surname>Reschke</surname>
<given-names>MF</given-names>
</name>
.
<article-title>Mass estimation and discrimination during brief periods of zero gravity</article-title>
.
<source>Percept Psychophys</source>
<year>1982</year>
;
<volume>31</volume>
:
<fpage>429</fpage>
<lpage>436</lpage>
.
<pub-id pub-id-type="pmid">7110901</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref027">
<label>27</label>
<mixed-citation publication-type="journal">
<name>
<surname>Sanborn</surname>
<given-names>AN</given-names>
</name>
,
<name>
<surname>Mansinghka</surname>
<given-names>VK</given-names>
</name>
,
<name>
<surname>Griffiths</surname>
<given-names>TL</given-names>
</name>
.
<article-title>Reconciling intuitive physics and Newtonian mechanics for colliding objects</article-title>
.
<source>Psychol Rev</source>
<year>2013</year>
;
<volume>120</volume>
:
<fpage>411</fpage>
<lpage>437</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1037/a0031912">10.1037/a0031912</ext-link>
</comment>
<pub-id pub-id-type="pmid">23458084</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref028">
<label>28</label>
<mixed-citation publication-type="journal">
<name>
<surname>Lilliefors</surname>
<given-names>HW</given-names>
</name>
.
<article-title>On the Kolmogorov-Smirnov test for normality with mean and variance unknown</article-title>
.
<source>J Am Stat Assoc</source>
<year>1967</year>
;
<volume>62</volume>
:
<fpage>399</fpage>
<lpage>402</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0119794.ref029">
<label>29</label>
<mixed-citation publication-type="journal">
<name>
<surname>Frayman</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Dawson</surname>
<given-names>W</given-names>
</name>
.
<article-title>The effect of object shape and mode of presentation on judgments of apparent volume</article-title>
.
<source>Percept Psychophys</source>
<year>1981</year>
;
<volume>29</volume>
:
<fpage>56</fpage>
<lpage>62</lpage>
.
<pub-id pub-id-type="pmid">7243531</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref030">
<label>30</label>
<mixed-citation publication-type="journal">
<name>
<surname>Benjamini</surname>
<given-names>Y</given-names>
</name>
,
<name>
<surname>Hochberg</surname>
<given-names>Y</given-names>
</name>
.
<article-title>Controlling the false discovery rate: a practical and powerful approach to multiple testing</article-title>
.
<source>J R Stat Soc Ser B</source>
<year>1995</year>
;
<volume>57</volume>
:
<fpage>289</fpage>
<lpage>300</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0119794.ref031">
<label>31</label>
<mixed-citation publication-type="journal">
<name>
<surname>Benjamini</surname>
<given-names>Y</given-names>
</name>
.
<article-title>Discovering the false discovery rate</article-title>
.
<source>J R Stat Soc Ser B (Statistical Methodol)</source>
<year>2010</year>
;
<volume>72</volume>
:
<fpage>405</fpage>
<lpage>416</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0119794.ref032">
<label>32</label>
<mixed-citation publication-type="journal">
<name>
<surname>Brayanov</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Smith</surname>
<given-names>MA</given-names>
</name>
.
<article-title>Bayesian and “anti-Bayesian” biases in sensory integration for action and perception in the size–weight illusion</article-title>
.
<source>J Neurophysiol</source>
<year>2010</year>
;
<volume>103</volume>
:
<fpage>1518</fpage>
<lpage>1531</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1152/jn.00814.2009">10.1152/jn.00814.2009</ext-link>
</comment>
<pub-id pub-id-type="pmid">20089821</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref033">
<label>33</label>
<mixed-citation publication-type="journal">
<name>
<surname>Flanagan</surname>
<given-names>JR</given-names>
</name>
,
<name>
<surname>Beltzner</surname>
<given-names>MA</given-names>
</name>
.
<article-title>Independence of perceptual and sensorimotor predictions in the size–weight illusion</article-title>
.
<source>Nat Neurosci</source>
<year>2000</year>
;
<volume>3</volume>
:
<fpage>737</fpage>
<lpage>741</lpage>
.
<pub-id pub-id-type="pmid">10862708</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref034">
<label>34</label>
<mixed-citation publication-type="journal">
<name>
<surname>Grandy</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Westwood</surname>
<given-names>DA</given-names>
</name>
.
<article-title>Opposite perceptual and sensorimotor responses to a size-weight illusion</article-title>
.
<source>J Neurophysiol</source>
<year>2006</year>
;
<volume>95</volume>
:
<fpage>3887</fpage>
<lpage>3892</lpage>
.
<pub-id pub-id-type="pmid">16641383</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref035">
<label>35</label>
<mixed-citation publication-type="journal">
<name>
<surname>Geisler</surname>
<given-names>WS</given-names>
</name>
,
<name>
<surname>Perry</surname>
<given-names>JS</given-names>
</name>
.
<article-title>Contour statistics in natural images: grouping across occlusions</article-title>
.
<source>Vis Neurosci</source>
<year>2009</year>
;
<volume>26</volume>
:
<fpage>109</fpage>
<lpage>121</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1017/S0952523808080875">10.1017/S0952523808080875</ext-link>
</comment>
<pub-id pub-id-type="pmid">19216819</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref036">
<label>36</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schwartz</surname>
<given-names>O</given-names>
</name>
,
<name>
<surname>Hsu</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Dayan</surname>
<given-names>P</given-names>
</name>
.
<article-title>Space and time in visual context</article-title>
.
<source>Nat Rev Neurosci</source>
<year>2007</year>
;
<volume>8</volume>
:
<fpage>522</fpage>
<lpage>535</lpage>
.
<pub-id pub-id-type="pmid">17585305</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref037">
<label>37</label>
<mixed-citation publication-type="journal">
<name>
<surname>Buckingham</surname>
<given-names>G</given-names>
</name>
.
<article-title>Getting a grip on heaviness perception: a review of weight illusions and their probable causes</article-title>
.
<source>Exp Brain Res</source>
<year>2014</year>
;
<volume>232</volume>
:
<fpage>1623</fpage>
<lpage>1629</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1007/s00221-014-3926-9">10.1007/s00221-014-3926-9</ext-link>
</comment>
<pub-id pub-id-type="pmid">24691760</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref038">
<label>38</label>
<mixed-citation publication-type="journal">
<name>
<surname>Buckingham</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Goodale</surname>
<given-names>MA</given-names>
</name>
.
<article-title>The influence of competing perceptual and motor priors in the context of the size–weight illusion</article-title>
.
<source>Exp Brain Res</source>
<year>2010</year>
;
<volume>205</volume>
:
<fpage>283</fpage>
<lpage>288</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1007/s00221-010-2353-9">10.1007/s00221-010-2353-9</ext-link>
</comment>
<pub-id pub-id-type="pmid">20614213</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref039">
<label>39</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hadjiosif</surname>
<given-names>AM</given-names>
</name>
,
<name>
<surname>Smith</surname>
<given-names>MA</given-names>
</name>
.
<article-title>The motor system estimates uncertainty and higher order statistics for the control of grip forces</article-title>
.
<source>Conf Proc IEEE Eng Med Biol Soc</source>
<year>2011</year>
:
<fpage>4057</fpage>
<lpage>4059</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1109/IEMBS.2011.6091008">10.1109/IEMBS.2011.6091008</ext-link>
</comment>
<pub-id pub-id-type="pmid">22255231</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref040">
<label>40</label>
<mixed-citation publication-type="journal">
<name>
<surname>Crajé</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Santello</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Gordon</surname>
<given-names>AM</given-names>
</name>
.
<article-title>Effects of Visual Cues of Object Density on Perception and Anticipatory Control of Dexterous Manipulation</article-title>
.
<source>PLoS One</source>
<year>2013</year>
;
<volume>8</volume>
:
<fpage>e76855</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1371/journal.pone.0076855">10.1371/journal.pone.0076855</ext-link>
</comment>
<pub-id pub-id-type="pmid">24146935</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref041">
<label>41</label>
<mixed-citation publication-type="journal">
<name>
<surname>Johansson</surname>
<given-names>RS</given-names>
</name>
,
<name>
<surname>Westling</surname>
<given-names>G</given-names>
</name>
.
<article-title>Programmed and triggered actions to rapid load changes during precision grip</article-title>
.
<source>Exp Brain Res</source>
<year>1988</year>
;
<volume>71</volume>
:
<fpage>72</fpage>
<lpage>86</lpage>
.
<pub-id pub-id-type="pmid">3416959</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref042">
<label>42</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gordon</surname>
<given-names>AM</given-names>
</name>
,
<name>
<surname>Forssberg</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Johansson</surname>
<given-names>RS</given-names>
</name>
,
<name>
<surname>Eliasson</surname>
<given-names>AC</given-names>
</name>
,
<name>
<surname>Westling</surname>
<given-names>G</given-names>
</name>
.
<article-title>Development of human precision grip. III. Integration of visual size cues during the programming of isometric forces</article-title>
.
<source>Exp Brain Res</source>
<year>1992</year>
;
<volume>90</volume>
:
<fpage>399</fpage>
<lpage>403</lpage>
.
<pub-id pub-id-type="pmid">1397154</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref043">
<label>43</label>
<mixed-citation publication-type="journal">
<name>
<surname>Charpentier</surname>
<given-names>A</given-names>
</name>
.
<article-title>Analyse expérimentale de quelques éléments de la sensation de poids [Experimental study of some aspects of weight perception]</article-title>
.
<source>Arch Physiol Norm Pathol</source>
<year>1891</year>
;
<volume>3</volume>
:
<fpage>122</fpage>
<lpage>135</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0119794.ref044">
<label>44</label>
<mixed-citation publication-type="journal">
<name>
<surname>Baugh</surname>
<given-names>LA</given-names>
</name>
,
<name>
<surname>Kao</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Johansson</surname>
<given-names>RS</given-names>
</name>
,
<name>
<surname>Flanagan</surname>
<given-names>JR</given-names>
</name>
.
<article-title>Material evidence: interaction of well-learned priors and sensorimotor memory when lifting objects</article-title>
.
<source>J Neurophysiol</source>
<year>2012</year>
;
<volume>108</volume>
:
<fpage>1262</fpage>
<lpage>1269</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1152/jn.00263.2012">10.1152/jn.00263.2012</ext-link>
</comment>
<pub-id pub-id-type="pmid">22696542</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0119794.ref045">
<label>45</label>
<mixed-citation publication-type="journal">
<name>
<surname>Anderson</surname>
<given-names>NH</given-names>
</name>
.
<article-title>Averaging model applied to the size-weight illusion</article-title>
.
<source>Percept Psychophys</source>
<year>1970</year>
;
<volume>8</volume>
:
<fpage>1</fpage>
<lpage>4</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0119794.ref046">
<label>46</label>
<mixed-citation publication-type="journal">
<name>
<surname>Cross</surname>
<given-names>DV</given-names>
</name>
,
<name>
<surname>Rotkin</surname>
<given-names>L</given-names>
</name>
.
<article-title>The relation between size and apparent heaviness</article-title>
.
<source>Percept Psychophys</source>
<year>1975</year>
;
<volume>18</volume>
:
<fpage>79</fpage>
<lpage>87</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0119794.ref047">
<label>47</label>
<mixed-citation publication-type="journal">
<name>
<surname>Huang</surname>
<given-names>I</given-names>
</name>
.
<article-title>The size-weight illusion and the weight-density illusion</article-title>
.
<source>J Gen Psychol</source>
<year>1945</year>
;
<volume>33</volume>
:
<fpage>65</fpage>
<lpage>84</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0119794.ref048">
<label>48</label>
<mixed-citation publication-type="journal">
<name>
<surname>Stevens</surname>
<given-names>JC</given-names>
</name>
,
<name>
<surname>Rubin</surname>
<given-names>LL</given-names>
</name>
.
<article-title>Psychophysical scales of apparent heaviness and the size-weight illusion</article-title>
.
<source>Percept Psychophys</source>
<year>1970</year>
;
<volume>8</volume>
:
<fpage>225</fpage>
<lpage>230</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0119794.ref049">
<label>49</label>
<mixed-citation publication-type="journal">
<name>
<surname>Goodale</surname>
<given-names>MA</given-names>
</name>
.
<article-title>Transforming vision into action</article-title>
.
<source>Vision Res</source>
<year>2011</year>
;
<volume>51</volume>
:
<fpage>1567</fpage>
<lpage>1587</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.visres.2010.07.027">10.1016/j.visres.2010.07.027</ext-link>
</comment>
<pub-id pub-id-type="pmid">20691202</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000273 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 000273 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:4358826
   |texte=   Smaller = Denser, and the Brain Knows It: Natural Statistics of Object Density Shape Weight Expectations
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:25768977" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024