Exploration server on the relations between France and Australia

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information is therefore not validated.
***** Access problem to record *****

Internal identifier: 001E13 (Pmc/Corpus); previous: 001E129; next: 001E140 ***** probable XML problem with record *****


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Mobile computation: Spatiotemporal integration of the properties of objects in motion</title>
<author>
<name sortKey="Cavanagh, Patrick" sort="Cavanagh, Patrick" uniqKey="Cavanagh P" first="Patrick" last="Cavanagh">Patrick Cavanagh</name>
</author>
<author>
<name sortKey="Holcombe, Alex O" sort="Holcombe, Alex O" uniqKey="Holcombe A" first="Alex O." last="Holcombe">Alex O. Holcombe</name>
</author>
<author>
<name sortKey="Chou, Weilun" sort="Chou, Weilun" uniqKey="Chou W" first="Weilun" last="Chou">Weilun Chou</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">18831615</idno>
<idno type="pmc">2612738</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2612738</idno>
<idno type="RBID">PMC:2612738</idno>
<idno type="doi">10.1167/8.12.1</idno>
<date when="2008">2008</date>
<idno type="wicri:Area/Pmc/Corpus">001E13</idno>
<idno type="wicri:explorRef" wicri:stream="Pmc" wicri:step="Corpus" wicri:corpus="PMC">001E13</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Mobile computation: Spatiotemporal integration of the properties of objects in motion</title>
<author>
<name sortKey="Cavanagh, Patrick" sort="Cavanagh, Patrick" uniqKey="Cavanagh P" first="Patrick" last="Cavanagh">Patrick Cavanagh</name>
</author>
<author>
<name sortKey="Holcombe, Alex O" sort="Holcombe, Alex O" uniqKey="Holcombe A" first="Alex O." last="Holcombe">Alex O. Holcombe</name>
</author>
<author>
<name sortKey="Chou, Weilun" sort="Chou, Weilun" uniqKey="Chou W" first="Weilun" last="Chou">Weilun Chou</name>
</author>
</analytic>
<series>
<title level="j">Journal of vision</title>
<idno type="eISSN">1534-7362</idno>
<imprint>
<date when="2008">2008</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p id="P1">We demonstrate that, as an object moves, color and motion signals from successive, widely spaced locations are integrated, but letter and digit shapes are not. The features that integrate as an object moves match those that integrate when the eyes move but the object is stationary (spatiotopic integration). We suggest that this integration is mediated by large receptive fields gated by attention and that it occurs for surface features (motion and color) that can be summed without precise alignment but not shape features (letters or digits) that require such alignment. Rapidly alternating pairs of colors and motions were presented at several locations around a circle centered at fixation. The same two stimuli alternated at each location with the phase of the alternation reversing from one location to the next. When observers attended to only one location, the stimuli alternated in both retinal coordinates and in the attended stream: feature identification was poor. When the observer’s attention shifted around the circle in synchrony with the alternation, the stimuli still alternated at each location in retinal coordinates, but now attention always selected the same color and motion, with the stimulus appearing as a single unchanging object stepping across the locations. The maximum presentation rate at which the color and motion could be reported was twice that for stationary attention, suggesting (as control experiments confirmed) object-based integration of these features. In contrast, the identification of a letter or digit alternating with a mask showed no advantage for moving attention despite the fact that moving attention accessed (within the limits of precision for attentional selection) only the target and never the mask. The masking apparently leaves partial information that cannot be integrated across locations, and we speculate that for spatially defined patterns like letters, integration across large shifts in location may be limited by problems in aligning successive samples. Our results also suggest that as attention moves, the selection of any given location (dwell time) can be as short as 50 ms, far shorter than the typical dwell time for stationary attention. Moving attention can therefore sample a brief instant of a rapidly changing stream if it passes quickly through, giving access to events that are otherwise not seen.</p>
</div>
</front>
</TEI>
<pmc article-type="research-article">
<pmc-comment>The publisher of this article does not allow downloading of the full text in XML form.</pmc-comment>
<pmc-dir>properties manuscript</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-journal-id">101147197</journal-id>
<journal-id journal-id-type="pubmed-jr-id">30247</journal-id>
<journal-id journal-id-type="nlm-ta">J Vis</journal-id>
<journal-id journal-id-type="iso-abbrev">J Vis</journal-id>
<journal-title-group>
<journal-title>Journal of vision</journal-title>
</journal-title-group>
<issn pub-type="epub">1534-7362</issn>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">18831615</article-id>
<article-id pub-id-type="pmc">2612738</article-id>
<article-id pub-id-type="doi">10.1167/8.12.1</article-id>
<article-id pub-id-type="manuscript">NIHMS70794</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Mobile computation: Spatiotemporal integration of the properties of objects in motion</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Cavanagh</surname>
<given-names>Patrick</given-names>
</name>
<aff id="A1">Department of Psychology, Harvard University, Cambridge, MA, USA, & Laboratoire Psychologie de la Perception, Université Paris Descartes, Paris, France</aff>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Holcombe</surname>
<given-names>Alex O.</given-names>
</name>
<aff id="A2">School of Psychology, University of Sydney, Sydney, Australia</aff>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Chou</surname>
<given-names>Weilun</given-names>
</name>
<aff id="A3">Department of Psychology, National Taiwan University, Taipei, Taiwan</aff>
</contrib>
</contrib-group>
<author-notes>
<corresp id="FN1">Corresponding author: Patrick Cavanagh. Email:
<email>patrick.cavanagh@univ-paris5.fr</email>
. Address: 45 rue des Saints Pères, Paris 75006, France</corresp>
</author-notes>
<pub-date pub-type="nihms-submitted">
<day>24</day>
<month>9</month>
<year>2008</year>
</pub-date>
<pub-date pub-type="epub">
<day>11</day>
<month>9</month>
<year>2008</year>
</pub-date>
<pub-date pub-type="pmc-release">
<day>31</day>
<month>12</month>
<year>2008</year>
</pub-date>
<volume>8</volume>
<issue>12</issue>
<fpage>1.1</fpage>
<lpage>123</lpage>
<abstract>
<p id="P1">We demonstrate that, as an object moves, color and motion signals from successive, widely spaced locations are integrated, but letter and digit shapes are not. The features that integrate as an object moves match those that integrate when the eyes move but the object is stationary (spatiotopic integration). We suggest that this integration is mediated by large receptive fields gated by attention and that it occurs for surface features (motion and color) that can be summed without precise alignment but not shape features (letters or digits) that require such alignment. Rapidly alternating pairs of colors and motions were presented at several locations around a circle centered at fixation. The same two stimuli alternated at each location with the phase of the alternation reversing from one location to the next. When observers attended to only one location, the stimuli alternated in both retinal coordinates and in the attended stream: feature identification was poor. When the observer’s attention shifted around the circle in synchrony with the alternation, the stimuli still alternated at each location in retinal coordinates, but now attention always selected the same color and motion, with the stimulus appearing as a single unchanging object stepping across the locations. The maximum presentation rate at which the color and motion could be reported was twice that for stationary attention, suggesting (as control experiments confirmed) object-based integration of these features. In contrast, the identification of a letter or digit alternating with a mask showed no advantage for moving attention despite the fact that moving attention accessed (within the limits of precision for attentional selection) only the target and never the mask. The masking apparently leaves partial information that cannot be integrated across locations, and we speculate that for spatially defined patterns like letters, integration across large shifts in location may be limited by problems in aligning successive samples. Our results also suggest that as attention moves, the selection of any given location (dwell time) can be as short as 50 ms, far shorter than the typical dwell time for stationary attention. Moving attention can therefore sample a brief instant of a rapidly changing stream if it passes quickly through, giving access to events that are otherwise not seen.</p>
</abstract>
<kwd-group>
<kwd>visual attention</kwd>
<kwd>spatiotemporal integration</kwd>
<kwd>color</kwd>
<kwd>motion</kwd>
</kwd-group>
<funding-group>
<award-group>
<funding-source country="United States">National Eye Institute : NEI</funding-source>
<award-id>R01 EY009258-16 || EY</award-id>
</award-group>
</funding-group>
</article-meta>
</front>
</pmc>
</record>
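
Since the record above is plain TEI/PMC XML, it can be processed with any standard XML library. The following is a minimal sketch, assuming the record has been saved locally as 001E13.xml (a hypothetical filename; an extraction example is given after the Dilib commands below). It uses Python's xml.etree.ElementTree to pull out the title, a few identifiers, and the author names. Note that the page itself flags a probable XML problem, so the raw record may need a minor repair (such as escaping a bare ampersand in the first affiliation) before it parses.

import xml.etree.ElementTree as ET

# Parse the record saved from this page (hypothetical local filename).
tree = ET.parse("001E13.xml")
root = tree.getroot()  # <record>

# Title and identifiers from the TEI header.
title = root.findtext(".//titleStmt/title")
pmid = root.findtext(".//idno[@type='pmid']")
doi = root.findtext(".//idno[@type='doi']")

# Author names as listed in the TEI titleStmt.
authors = [name.text for name in root.findall(".//titleStmt/author/name")]

print("Title:  ", title)
print("PMID:   ", pmid)
print("DOI:    ", doi)
print("Authors:", ", ".join(authors))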

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Asie/explor/AustralieFrV1/Data/Pmc/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001E13  | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd -nk 001E13  | SxmlIndent | more
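
The same pipeline can also be driven from a script. The sketch below is a minimal example, assuming the Dilib tools HfdSelect and SxmlIndent are on the PATH and EXPLOR_STEP is exported as shown above; it captures the indented record into the local file 001E13.xml (the hypothetical filename used by the parsing sketch after the XML record) instead of paging it through more.

import subprocess

# A minimal sketch, assuming HfdSelect and SxmlIndent are on the PATH and
# EXPLOR_STEP is exported as shown above. The pipeline is the one documented
# on this page, with the output captured instead of paged through "more".
cmd = 'HfdSelect -h "$EXPLOR_STEP/biblio.hfd" -nk 001E13 | SxmlIndent'
result = subprocess.run(cmd, shell=True, capture_output=True, text=True, check=True)

# Save the indented record locally (hypothetical filename, matching the
# parsing sketch given after the XML record above).
with open("001E13.xml", "w", encoding="utf-8") as out:
    out.write(result.stdout)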

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Wicri/Asie
   |area=    AustralieFrV1
   |flux=    Pmc
   |étape=   Corpus
   |type=    RBID
   |clé=     
   |texte=   
}}

Wicri

This area was generated with Dilib version V0.6.33.
Data generation: Tue Dec 5 10:43:12 2017. Site generation: Tue Mar 5 14:07:20 2024