A multimodal dataset of spontaneous speech and movement production on object affordances.
Internal identifier: 000125 (PubMed/Curation); previous: 000124; next: 000126
Authors: Argiro Vatakis [Greece]; Katerina Pastra [Greece]
Source:
- Scientific data [ 2052-4463 ] ; 2016.
Abstract
In the longstanding effort of defining object affordances, a number of resources have been developed on objects and associated knowledge. These resources, however, have limited potential for modeling and generalization mainly due to the restricted, stimulus-bound data collection methodologies adopted. To-date, therefore, there exists no resource that truly captures object affordances in a direct, multimodal, and naturalistic way. Here, we present the first such resource of 'thinking aloud', spontaneously-generated verbal and motoric data on object affordances. This resource was developed from the reports of 124 participants divided into three behavioural experiments with visuo-tactile stimulation, which were captured audiovisually from two camera-views (frontal/profile). This methodology allowed the acquisition of approximately 95 hours of video, audio, and text data covering: object-feature-action data (e.g., perceptual features, namings, functions), Exploratory Acts (haptic manipulation for feature acquisition/verification), gestures and demonstrations for object/feature/action description, and reasoning patterns (e.g., justifications, analogies) for attributing a given characterization. The wealth and content of the data make this corpus a one-of-a-kind resource for the study and modeling of object affordances.
DOI: 10.1038/sdata.2015.78
PubMed: 26784391
Links to previous steps (Curation, Corpus...)
- In the PubMed stream, to the Corpus step. To go to this record in the Curation step: 000125
Links to the Exploration step
pubmed:26784391
The document in XML format
<record><TEI><teiHeader><fileDesc><titleStmt><title xml:lang="en">A multimodal dataset of spontaneous speech and movement production on object affordances.</title>
<author><name sortKey="Vatakis, Argiro" sort="Vatakis, Argiro" uniqKey="Vatakis A" first="Argiro" last="Vatakis">Argiro Vatakis</name>
<affiliation wicri:level="1"><nlm:affiliation>Cognitive Systems Research Institute (CSRI), 11525 Athens, Greece.</nlm:affiliation>
<country xml:lang="fr">Grèce</country>
<wicri:regionArea>Cognitive Systems Research Institute (CSRI), 11525 Athens</wicri:regionArea>
</affiliation>
</author>
<author><name sortKey="Pastra, Katerina" sort="Pastra, Katerina" uniqKey="Pastra K" first="Katerina" last="Pastra">Katerina Pastra</name>
<affiliation wicri:level="1"><nlm:affiliation>Cognitive Systems Research Institute (CSRI), 11525 Athens, Greece.</nlm:affiliation>
<country xml:lang="fr">Grèce</country>
<wicri:regionArea>Cognitive Systems Research Institute (CSRI), 11525 Athens</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt><idno type="wicri:source">PubMed</idno>
<date when="2016">2016</date>
<idno type="doi">10.1038/sdata.2015.78</idno>
<idno type="RBID">pubmed:26784391</idno>
<idno type="pmid">26784391</idno>
<idno type="wicri:Area/PubMed/Corpus">000125</idno>
<idno type="wicri:Area/PubMed/Curation">000125</idno>
</publicationStmt>
<sourceDesc><biblStruct><analytic><title xml:lang="en">A multimodal dataset of spontaneous speech and movement production on object affordances.</title>
<author><name sortKey="Vatakis, Argiro" sort="Vatakis, Argiro" uniqKey="Vatakis A" first="Argiro" last="Vatakis">Argiro Vatakis</name>
<affiliation wicri:level="1"><nlm:affiliation>Cognitive Systems Research Institute (CSRI), 11525 Athens, Greece.</nlm:affiliation>
<country xml:lang="fr">Grèce</country>
<wicri:regionArea>Cognitive Systems Research Institute (CSRI), 11525 Athens</wicri:regionArea>
</affiliation>
</author>
<author><name sortKey="Pastra, Katerina" sort="Pastra, Katerina" uniqKey="Pastra K" first="Katerina" last="Pastra">Katerina Pastra</name>
<affiliation wicri:level="1"><nlm:affiliation>Cognitive Systems Research Institute (CSRI), 11525 Athens, Greece.</nlm:affiliation>
<country xml:lang="fr">Grèce</country>
<wicri:regionArea>Cognitive Systems Research Institute (CSRI), 11525 Athens</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series><title level="j">Scientific data</title>
<idno type="eISSN">2052-4463</idno>
<imprint><date when="2016" type="published">2016</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc><textClass></textClass>
</profileDesc>
</teiHeader>
<front><div type="abstract" xml:lang="en">In the longstanding effort of defining object affordances, a number of resources have been developed on objects and associated knowledge. These resources, however, have limited potential for modeling and generalization mainly due to the restricted, stimulus-bound data collection methodologies adopted. To-date, therefore, there exists no resource that truly captures object affordances in a direct, multimodal, and naturalistic way. Here, we present the first such resource of 'thinking aloud', spontaneously-generated verbal and motoric data on object affordances. This resource was developed from the reports of 124 participants divided into three behavioural experiments with visuo-tactile stimulation, which were captured audiovisually from two camera-views (frontal/profile). This methodology allowed the acquisition of approximately 95 hours of video, audio, and text data covering: object-feature-action data (e.g., perceptual features, namings, functions), Exploratory Acts (haptic manipulation for feature acquisition/verification), gestures and demonstrations for object/feature/action description, and reasoning patterns (e.g., justifications, analogies) for attributing a given characterization. The wealth and content of the data make this corpus a one-of-a-kind resource for the study and modeling of object affordances.</div>
</front>
</TEI>
<pubmed><MedlineCitation Owner="NLM" Status="In-Process"><PMID Version="1">26784391</PMID>
<DateCreated><Year>2016</Year>
<Month>01</Month>
<Day>20</Day>
</DateCreated>
<DateRevised><Year>2016</Year>
<Month>02</Month>
<Day>13</Day>
</DateRevised>
<Article PubModel="Electronic"><Journal><ISSN IssnType="Electronic">2052-4463</ISSN>
<JournalIssue CitedMedium="Internet"><Volume>3</Volume>
<PubDate><Year>2016</Year>
</PubDate>
</JournalIssue>
<Title>Scientific data</Title>
<ISOAbbreviation>Sci Data</ISOAbbreviation>
</Journal>
<ArticleTitle>A multimodal dataset of spontaneous speech and movement production on object affordances.</ArticleTitle>
<Pagination><MedlinePgn>150078</MedlinePgn>
</Pagination>
<ELocationID EIdType="doi" ValidYN="Y">10.1038/sdata.2015.78</ELocationID>
<Abstract><AbstractText>In the longstanding effort of defining object affordances, a number of resources have been developed on objects and associated knowledge. These resources, however, have limited potential for modeling and generalization mainly due to the restricted, stimulus-bound data collection methodologies adopted. To-date, therefore, there exists no resource that truly captures object affordances in a direct, multimodal, and naturalistic way. Here, we present the first such resource of 'thinking aloud', spontaneously-generated verbal and motoric data on object affordances. This resource was developed from the reports of 124 participants divided into three behavioural experiments with visuo-tactile stimulation, which were captured audiovisually from two camera-views (frontal/profile). This methodology allowed the acquisition of approximately 95 hours of video, audio, and text data covering: object-feature-action data (e.g., perceptual features, namings, functions), Exploratory Acts (haptic manipulation for feature acquisition/verification), gestures and demonstrations for object/feature/action description, and reasoning patterns (e.g., justifications, analogies) for attributing a given characterization. The wealth and content of the data make this corpus a one-of-a-kind resource for the study and modeling of object affordances.</AbstractText>
</Abstract>
<AuthorList CompleteYN="Y"><Author ValidYN="Y"><LastName>Vatakis</LastName>
<ForeName>Argiro</ForeName>
<Initials>A</Initials>
<AffiliationInfo><Affiliation>Cognitive Systems Research Institute (CSRI), 11525 Athens, Greece.</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y"><LastName>Pastra</LastName>
<ForeName>Katerina</ForeName>
<Initials>K</Initials>
<AffiliationInfo><Affiliation>Cognitive Systems Research Institute (CSRI), 11525 Athens, Greece.</Affiliation>
</AffiliationInfo>
<AffiliationInfo><Affiliation>Institute for Language and Speech Processing (ILSP), 'Athena' Research Center, 15125 Athens, Greece.</Affiliation>
</AffiliationInfo>
</Author>
</AuthorList>
<Language>eng</Language>
<PublicationTypeList><PublicationType UI="D016428">Journal Article</PublicationType>
<PublicationType UI="D013485">Research Support, Non-U.S. Gov't</PublicationType>
</PublicationTypeList>
<ArticleDate DateType="Electronic"><Year>2016</Year>
<Month>01</Month>
<Day>19</Day>
</ArticleDate>
</Article>
<MedlineJournalInfo><Country>England</Country>
<MedlineTA>Sci Data</MedlineTA>
<NlmUniqueID>101640192</NlmUniqueID>
<ISSNLinking>2052-4463</ISSNLinking>
</MedlineJournalInfo>
<CitationSubset>IM</CitationSubset>
<CommentsCorrectionsList><CommentsCorrections RefType="Cites"><RefSource>Spat Vis. 1997;10(4):437-42</RefSource>
<PMID Version="1">9176953</PMID>
</CommentsCorrections>
<CommentsCorrections RefType="Cites"><RefSource>J Exp Psychol Hum Learn. 1980 Mar;6(2):174-215</RefSource>
<PMID Version="1">7373248</PMID>
</CommentsCorrections>
<CommentsCorrections RefType="Cites"><RefSource>Percept Psychophys. 1985 Apr;37(4):299-302</RefSource>
<PMID Version="1">4034346</PMID>
</CommentsCorrections>
<CommentsCorrections RefType="Cites"><RefSource>Psychol Bull. 1996 Jul;120(1):113-39</RefSource>
<PMID Version="1">8711012</PMID>
</CommentsCorrections>
<CommentsCorrections RefType="Cites"><RefSource>Spat Vis. 1997;10(4):433-6</RefSource>
<PMID Version="1">9176952</PMID>
</CommentsCorrections>
<CommentsCorrections RefType="Cites"><RefSource>Acta Psychol (Amst). 2009 Oct;132(2):173-89</RefSource>
<PMID Version="1">19298949</PMID>
</CommentsCorrections>
<CommentsCorrections RefType="Cites"><RefSource>Behav Res Methods. 2005 Nov;37(4):547-59</RefSource>
<PMID Version="1">16629288</PMID>
</CommentsCorrections>
<CommentsCorrections RefType="Cites"><RefSource>Behav Res Methods. 2008 Feb;40(1):183-90</RefSource>
<PMID Version="1">18411541</PMID>
</CommentsCorrections>
<CommentsCorrections RefType="Cites"><RefSource>Brain Cogn. 2009 Apr;69(3):481-9</RefSource>
<PMID Version="1">19046798</PMID>
</CommentsCorrections>
<CommentsCorrections RefType="Cites"><RefSource>Behav Res Methods. 2009 Aug;41(3):841-9</RefSource>
<PMID Version="1">19587200</PMID>
</CommentsCorrections>
<CommentsCorrections RefType="Cites"><RefSource>Behav Res Methods Instrum Comput. 2003 Nov;35(4):621-33</RefSource>
<PMID Version="1">14748507</PMID>
</CommentsCorrections>
</CommentsCorrectionsList>
<OtherID Source="NLM">PMC4718047</OtherID>
</MedlineCitation>
<PubmedData><History><PubMedPubDate PubStatus="received"><Year>2015</Year>
<Month>6</Month>
<Day>26</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="accepted"><Year>2015</Year>
<Month>12</Month>
<Day>15</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="entrez"><Year>2016</Year>
<Month>1</Month>
<Day>20</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="pubmed"><Year>2016</Year>
<Month>1</Month>
<Day>20</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline"><Year>2016</Year>
<Month>1</Month>
<Day>20</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>epublish</PublicationStatus>
<ArticleIdList><ArticleId IdType="pii">sdata201578</ArticleId>
<ArticleId IdType="doi">10.1038/sdata.2015.78</ArticleId>
<ArticleId IdType="pubmed">26784391</ArticleId>
<ArticleId IdType="pmc">PMC4718047</ArticleId>
</ArticleIdList>
</PubmedData>
</pubmed>
</record>
To manipulate this document under Unix (Dilib)
EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PubMed/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000125 | SxmlIndent | more
Or
HfdSelect -h $EXPLOR_AREA/Data/PubMed/Curation/biblio.hfd -nk 000125 | SxmlIndent | more
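To inspect only a few fields without paging through the whole record, the extraction command above can be combined with standard Unix filters. The line below is a minimal sketch, not part of the generated instructions: it reuses only the HfdSelect/SxmlIndent invocation shown above plus plain grep, and assumes SxmlIndent prints one element per line, keeping just the DOI and PubMed identifier elements.
# Minimal sketch: reuse the extraction shown above and keep only identifier lines.
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000125 | SxmlIndent | grep -E 'doi|pmid|PMID'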
To add a link to this page in the Wicri network
{{Explor lien |wiki= Ticri/CIDE |area= HapticV1 |flux= PubMed |étape= Curation |type= RBID |clé= pubmed:26784391 |texte= A multimodal dataset of spontaneous speech and movement production on object affordances. }}
To generate wiki pages
HfdIndexSelect -h $EXPLOR_AREA/Data/PubMed/Curation/RBID.i -Sk "pubmed:26784391" \
  | HfdSelect -Kh $EXPLOR_AREA/Data/PubMed/Curation/biblio.hfd \
  | NlmPubMed2Wicri -a HapticV1
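To keep the generated wiki markup as a standalone file, the same pipeline can be redirected. This is only a sketch: it assumes that NlmPubMed2Wicri writes its result to standard output, and the output file name is hypothetical.
HfdIndexSelect -h $EXPLOR_AREA/Data/PubMed/Curation/RBID.i -Sk "pubmed:26784391" \
  | HfdSelect -Kh $EXPLOR_AREA/Data/PubMed/Curation/biblio.hfd \
  | NlmPubMed2Wicri -a HapticV1 > 000125_HapticV1.wiki   # hypothetical output file name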
This area was generated with Dilib version V0.6.23.