Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information is therefore not validated.

Internal identifier: 001505 (PubMed/Corpus); previous: 001504; next: 001506

Visual-haptic cue weighting is independent of modality-specific attention.

Authors: Hannah B. Helbig; Marc O. Ernst

Source:

RBID: pubmed:18318624

English descriptors

Abstract

Some object properties (e.g., size, shape, and depth information) are perceived through multiple sensory modalities. Such redundant sensory information is integrated into a unified percept. The integrated estimate is a weighted average of the sensory estimates, where higher weight is attributed to the more reliable sensory signal. Here we examine whether modality-specific attention can affect multisensory integration. Selectively reducing attention in one sensory channel can reduce the relative reliability of the estimate derived from this channel and might thus alter the weighting of the sensory estimates. In the present study, observers performed unimodal (visual and haptic) and bimodal (visual-haptic) size discrimination tasks. They either performed the primary task alone or they performed a secondary task simultaneously (dual task). The secondary task consisted of a same/different judgment of rapidly presented visual letter sequences, and so might be expected to withdraw attention predominantly from the visual rather than the haptic channel. Comparing size discrimination performance in single- and dual-task conditions, we found that vision-based estimates were more affected by the secondary task than the haptics-based estimates, indicating that indeed attention to vision was more reduced than attention to haptics. This attentional manipulation, however, did not affect the cue weighting in the bimodal task. Bimodal discrimination performance was better than unimodal performance in both single- and dual-task conditions, indicating that observers still integrate visual and haptic size information in the dual-task condition, when attention is withdrawn from vision. These findings indicate that visual-haptic cue weighting is independent of modality-specific attention.
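The weighted-average integration the abstract describes is usually formalized as maximum-likelihood cue combination. The record itself does not spell out the equations, so the following is background, with $\hat{S}_V$, $\hat{S}_H$ the unimodal size estimates and $\sigma_V^2$, $\sigma_H^2$ their variances:

```latex
\hat{S}_{VH} = w_V \hat{S}_V + w_H \hat{S}_H,
\qquad
w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_H^2},
\quad
w_H = 1 - w_V
```

The predicted bimodal variance, $\sigma_{VH}^2 = \sigma_V^2 \sigma_H^2 / (\sigma_V^2 + \sigma_H^2) \le \min(\sigma_V^2, \sigma_H^2)$, is why bimodal discrimination can outperform either unimodal estimate, as reported in the abstract.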

DOI: 10.1167/8.1.21
PubMed: 18318624

Links to Exploration step

pubmed:18318624

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Visual-haptic cue weighting is independent of modality-specific attention.</title>
<author>
<name sortKey="Helbig, Hannah B" sort="Helbig, Hannah B" uniqKey="Helbig H" first="Hannah B" last="Helbig">Hannah B. Helbig</name>
<affiliation>
<nlm:affiliation>Max Planck Institute for Biological Cybernetics, Tübingen, Germany. hannah.helbig@tuebingen.mpg.de</nlm:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Ernst, Marc O" sort="Ernst, Marc O" uniqKey="Ernst M" first="Marc O" last="Ernst">Marc O. Ernst</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PubMed</idno>
<date when="2008">2008</date>
<idno type="doi">10.1167/8.1.21</idno>
<idno type="RBID">pubmed:18318624</idno>
<idno type="pmid">18318624</idno>
<idno type="wicri:Area/PubMed/Corpus">001505</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">Visual-haptic cue weighting is independent of modality-specific attention.</title>
<author>
<name sortKey="Helbig, Hannah B" sort="Helbig, Hannah B" uniqKey="Helbig H" first="Hannah B" last="Helbig">Hannah B. Helbig</name>
<affiliation>
<nlm:affiliation>Max Planck Institute for Biological Cybernetics, Tübingen, Germany. hannah.helbig@tuebingen.mpg.de</nlm:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Ernst, Marc O" sort="Ernst, Marc O" uniqKey="Ernst M" first="Marc O" last="Ernst">Marc O. Ernst</name>
</author>
</analytic>
<series>
<title level="j">Journal of vision</title>
<idno type="eISSN">1534-7362</idno>
<imprint>
<date when="2008" type="published">2008</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Adolescent</term>
<term>Adult</term>
<term>Attention (physiology)</term>
<term>Discrimination (Psychology) (physiology)</term>
<term>Female</term>
<term>Humans</term>
<term>Photic Stimulation</term>
<term>Sensory Thresholds (physiology)</term>
<term>Visual Perception (physiology)</term>
</keywords>
<keywords scheme="MESH" qualifier="physiology" xml:lang="en">
<term>Attention</term>
<term>Discrimination (Psychology)</term>
<term>Sensory Thresholds</term>
<term>Visual Perception</term>
</keywords>
<keywords scheme="MESH" xml:lang="en">
<term>Adolescent</term>
<term>Adult</term>
<term>Female</term>
<term>Humans</term>
<term>Photic Stimulation</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Some object properties (e.g., size, shape, and depth information) are perceived through multiple sensory modalities. Such redundant sensory information is integrated into a unified percept. The integrated estimate is a weighted average of the sensory estimates, where higher weight is attributed to the more reliable sensory signal. Here we examine whether modality-specific attention can affect multisensory integration. Selectively reducing attention in one sensory channel can reduce the relative reliability of the estimate derived from this channel and might thus alter the weighting of the sensory estimates. In the present study, observers performed unimodal (visual and haptic) and bimodal (visual-haptic) size discrimination tasks. They either performed the primary task alone or they performed a secondary task simultaneously (dual task). The secondary task consisted of a same/different judgment of rapidly presented visual letter sequences, and so might be expected to withdraw attention predominantly from the visual rather than the haptic channel. Comparing size discrimination performance in single- and dual-task conditions, we found that vision-based estimates were more affected by the secondary task than the haptics-based estimates, indicating that indeed attention to vision was more reduced than attention to haptics. This attentional manipulation, however, did not affect the cue weighting in the bimodal task. Bimodal discrimination performance was better than unimodal performance in both single- and dual-task conditions, indicating that observers still integrate visual and haptic size information in the dual-task condition, when attention is withdrawn from vision. These findings indicate that visual-haptic cue weighting is independent of modality-specific attention.</div>
</front>
</TEI>
<pubmed>
<MedlineCitation Owner="NLM" Status="MEDLINE">
<PMID Version="1">18318624</PMID>
<DateCreated>
<Year>2008</Year>
<Month>03</Month>
<Day>05</Day>
</DateCreated>
<DateCompleted>
<Year>2008</Year>
<Month>03</Month>
<Day>20</Day>
</DateCompleted>
<DateRevised>
<Year>2008</Year>
<Month>04</Month>
<Day>29</Day>
</DateRevised>
<Article PubModel="Electronic">
<Journal>
<ISSN IssnType="Electronic">1534-7362</ISSN>
<JournalIssue CitedMedium="Internet">
<Volume>8</Volume>
<Issue>1</Issue>
<PubDate>
<Year>2008</Year>
</PubDate>
</JournalIssue>
<Title>Journal of vision</Title>
<ISOAbbreviation>J Vis</ISOAbbreviation>
</Journal>
<ArticleTitle>Visual-haptic cue weighting is independent of modality-specific attention.</ArticleTitle>
<Pagination>
<MedlinePgn>21.1-16</MedlinePgn>
</Pagination>
<ELocationID EIdType="doi" ValidYN="Y">10.1167/8.1.21</ELocationID>
<Abstract>
<AbstractText>Some object properties (e.g., size, shape, and depth information) are perceived through multiple sensory modalities. Such redundant sensory information is integrated into a unified percept. The integrated estimate is a weighted average of the sensory estimates, where higher weight is attributed to the more reliable sensory signal. Here we examine whether modality-specific attention can affect multisensory integration. Selectively reducing attention in one sensory channel can reduce the relative reliability of the estimate derived from this channel and might thus alter the weighting of the sensory estimates. In the present study, observers performed unimodal (visual and haptic) and bimodal (visual-haptic) size discrimination tasks. They either performed the primary task alone or they performed a secondary task simultaneously (dual task). The secondary task consisted of a same/different judgment of rapidly presented visual letter sequences, and so might be expected to withdraw attention predominantly from the visual rather than the haptic channel. Comparing size discrimination performance in single- and dual-task conditions, we found that vision-based estimates were more affected by the secondary task than the haptics-based estimates, indicating that indeed attention to vision was more reduced than attention to haptics. This attentional manipulation, however, did not affect the cue weighting in the bimodal task. Bimodal discrimination performance was better than unimodal performance in both single- and dual-task conditions, indicating that observers still integrate visual and haptic size information in the dual-task condition, when attention is withdrawn from vision. These findings indicate that visual-haptic cue weighting is independent of modality-specific attention.</AbstractText>
</Abstract>
<AuthorList CompleteYN="Y">
<Author ValidYN="Y">
<LastName>Helbig</LastName>
<ForeName>Hannah B</ForeName>
<Initials>HB</Initials>
<AffiliationInfo>
<Affiliation>Max Planck Institute for Biological Cybernetics, Tübingen, Germany. hannah.helbig@tuebingen.mpg.de</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y">
<LastName>Ernst</LastName>
<ForeName>Marc O</ForeName>
<Initials>MO</Initials>
</Author>
</AuthorList>
<Language>eng</Language>
<PublicationTypeList>
<PublicationType UI="D016428">Journal Article</PublicationType>
<PublicationType UI="D013485">Research Support, Non-U.S. Gov't</PublicationType>
</PublicationTypeList>
<ArticleDate DateType="Electronic">
<Year>2008</Year>
<Month>01</Month>
<Day>31</Day>
</ArticleDate>
</Article>
<MedlineJournalInfo>
<Country>United States</Country>
<MedlineTA>J Vis</MedlineTA>
<NlmUniqueID>101147197</NlmUniqueID>
<ISSNLinking>1534-7362</ISSNLinking>
</MedlineJournalInfo>
<CitationSubset>IM</CitationSubset>
<MeshHeadingList>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D000293">Adolescent</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D000328">Adult</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D001288">Attention</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D004192">Discrimination (Psychology)</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D005260">Female</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D006801">Humans</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D010775">Photic Stimulation</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D012684">Sensory Thresholds</DescriptorName>
<QualifierName MajorTopicYN="N" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D014796">Visual Perception</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
</MeshHeadingList>
</MedlineCitation>
<PubmedData>
<History>
<PubMedPubDate PubStatus="received">
<Year>2007</Year>
<Month>3</Month>
<Day>12</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="accepted">
<Year>2007</Year>
<Month>10</Month>
<Day>8</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="pubmed">
<Year>2008</Year>
<Month>3</Month>
<Day>6</Day>
<Hour>9</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline">
<Year>2008</Year>
<Month>3</Month>
<Day>21</Day>
<Hour>9</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="entrez">
<Year>2008</Year>
<Month>3</Month>
<Day>6</Day>
<Hour>9</Hour>
<Minute>0</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>epublish</PublicationStatus>
<ArticleIdList>
<ArticleId IdType="doi">10.1167/8.1.21</ArticleId>
<ArticleId IdType="pii">8/1/21</ArticleId>
<ArticleId IdType="pubmed">18318624</ArticleId>
</ArticleIdList>
</PubmedData>
</pubmed>
</record>
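The record above can also be inspected programmatically. As a minimal sketch using Python's standard library, the snippet below parses a trimmed copy of the `<MedlineCitation>` element shown above (the paths match the full record's structure; only the excerpt embedded here is parsed):

```python
# Sketch: pulling key fields out of the PubMed record shown above.
# The string below is a trimmed excerpt of the <MedlineCitation> element;
# element paths are the same as in the full record.
import xml.etree.ElementTree as ET

RECORD = """<MedlineCitation Owner="NLM" Status="MEDLINE">
  <PMID Version="1">18318624</PMID>
  <Article PubModel="Electronic">
    <Journal>
      <Title>Journal of vision</Title>
    </Journal>
    <ArticleTitle>Visual-haptic cue weighting is independent of modality-specific attention.</ArticleTitle>
    <ELocationID EIdType="doi" ValidYN="Y">10.1167/8.1.21</ELocationID>
  </Article>
  <MeshHeadingList>
    <MeshHeading><DescriptorName UI="D001288">Attention</DescriptorName></MeshHeading>
    <MeshHeading><DescriptorName UI="D014796">Visual Perception</DescriptorName></MeshHeading>
  </MeshHeadingList>
</MedlineCitation>"""

root = ET.fromstring(RECORD)
pmid = root.findtext("PMID")                      # "18318624"
title = root.findtext("Article/ArticleTitle")
doi = root.findtext("Article/ELocationID")        # "10.1167/8.1.21"
mesh = [d.text for d in root.iter("DescriptorName")]

print(pmid, doi)
print(title)
print(mesh)
```

Note that the TEI portion of the record uses an `nlm:` prefix without an in-scope namespace declaration, so parsing the full `<record>` as-is would require declaring that namespace first; the MedlineCitation excerpt avoids the issue.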

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PubMed/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001505 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd -nk 001505 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    PubMed
   |étape=   Corpus
   |type=    RBID
   |clé=     pubmed:18318624
   |texte=   Visual-haptic cue weighting is independent of modality-specific attention.
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/RBID.i   -Sk "pubmed:18318624" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024