A multimodal dataset of spontaneous speech and movement production on object affordances
Internal identifier: 000313 (Main/Curation); previous: 000312; next: 000314
Authors: Argiro Vatakis [Greece]; Katerina Pastra [Greece]
Source:
- Scientific Data [ 2052-4463 ] ; 2016.
Abstract
In the longstanding effort of defining object affordances, a number of resources have been developed on objects and associated knowledge. These resources, however, have limited potential for modeling and generalization mainly due to the restricted, stimulus-bound data collection methodologies adopted. To date, therefore, there exists no resource that truly captures object affordances in a direct, multimodal, and naturalistic way. Here, we present the first such resource of ‘thinking aloud’, spontaneously-generated verbal and motoric data on object affordances. This resource was developed from the reports of 124 participants divided into three behavioural experiments with visuo-tactile stimulation, which were captured audiovisually from two camera-views (frontal/profile). This methodology allowed the acquisition of approximately 95 hours of video, audio, and text data covering: object-feature-action data (e.g., perceptual features, namings, functions), Exploratory Acts (haptic manipulation for feature acquisition/verification), gestures and demonstrations for object/feature/action description, and reasoning patterns (e.g., justifications, analogies) for attributing a given characterization. The wealth and content of the data make this corpus a one-of-a-kind resource for the study and modeling of object affordances.
Url:
DOI: 10.1038/sdata.2015.78
PubMed: 26784391
PubMed Central: 4718047
Links toward previous steps (curation, corpus...)
- to stream Pmc, to step Corpus: record 000611
- to stream Pmc, to step Curation: record 000611
- to stream Pmc, to step Checkpoint: record 000177
- to stream PubMed, to step Corpus: record 000125
- to stream PubMed, to step Curation: record 000125
- to stream PubMed, to step Checkpoint: record 000158
- to stream Ncbi, to step Merge: record 003F29
- to stream Ncbi, to step Curation: record 003F29
- to stream Ncbi, to step Checkpoint: record 003F29
- to stream Main, to step Merge: record 000313
Links to Exploration step
PMC: 4718047
The document in XML format:
<record><TEI><teiHeader><fileDesc><titleStmt><title xml:lang="en">A multimodal dataset of spontaneous speech and movement production on object affordances</title>
<author><name sortKey="Vatakis, Argiro" sort="Vatakis, Argiro" uniqKey="Vatakis A" first="Argiro" last="Vatakis">Argiro Vatakis</name>
<affiliation wicri:level="1"><nlm:aff id="a1"><institution>Cognitive Systems Research Institute (CSRI)</institution>
, 11525 Athens,<country>Greece</country>
</nlm:aff>
<country xml:lang="fr">Grèce</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author><name sortKey="Pastra, Katerina" sort="Pastra, Katerina" uniqKey="Pastra K" first="Katerina" last="Pastra">Katerina Pastra</name>
<affiliation wicri:level="1"><nlm:aff id="a1"><institution>Cognitive Systems Research Institute (CSRI)</institution>
, 11525 Athens,<country>Greece</country>
</nlm:aff>
<country xml:lang="fr">Grèce</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1"><nlm:aff id="a2"><institution>Institute for Language and Speech Processing (ILSP), ‘Athena’ Research Center</institution>
, 15125 Athens,<country>Greece</country>
</nlm:aff>
<country xml:lang="fr">Grèce</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt><idno type="wicri:source">PMC</idno>
<idno type="pmid">26784391</idno>
<idno type="pmc">4718047</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4718047</idno>
<idno type="RBID">PMC:4718047</idno>
<idno type="doi">10.1038/sdata.2015.78</idno>
<date when="2016">2016</date>
<idno type="wicri:Area/Pmc/Corpus">000611</idno>
<idno type="wicri:Area/Pmc/Curation">000611</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000177</idno>
<idno type="wicri:source">PubMed</idno>
<idno type="wicri:Area/PubMed/Corpus">000125</idno>
<idno type="wicri:Area/PubMed/Curation">000125</idno>
<idno type="wicri:Area/PubMed/Checkpoint">000158</idno>
<idno type="wicri:Area/Ncbi/Merge">003F29</idno>
<idno type="wicri:Area/Ncbi/Curation">003F29</idno>
<idno type="wicri:Area/Ncbi/Checkpoint">003F29</idno>
<idno type="wicri:Area/Main/Merge">000313</idno>
<idno type="wicri:Area/Main/Curation">000313</idno>
</publicationStmt>
<sourceDesc><biblStruct><analytic><title xml:lang="en" level="a" type="main">A multimodal dataset of spontaneous speech and movement production on object affordances</title>
<author><name sortKey="Vatakis, Argiro" sort="Vatakis, Argiro" uniqKey="Vatakis A" first="Argiro" last="Vatakis">Argiro Vatakis</name>
<affiliation wicri:level="1"><nlm:aff id="a1"><institution>Cognitive Systems Research Institute (CSRI)</institution>
, 11525 Athens,<country>Greece</country>
</nlm:aff>
<country xml:lang="fr">Grèce</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author><name sortKey="Pastra, Katerina" sort="Pastra, Katerina" uniqKey="Pastra K" first="Katerina" last="Pastra">Katerina Pastra</name>
<affiliation wicri:level="1"><nlm:aff id="a1"><institution>Cognitive Systems Research Institute (CSRI)</institution>
, 11525 Athens,<country>Greece</country>
</nlm:aff>
<country xml:lang="fr">Grèce</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1"><nlm:aff id="a2"><institution>Institute for Language and Speech Processing (ILSP), ‘Athena’ Research Center</institution>
, 15125 Athens,<country>Greece</country>
</nlm:aff>
<country xml:lang="fr">Grèce</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series><title level="j">Scientific Data</title>
<idno type="eISSN">2052-4463</idno>
<imprint><date when="2016">2016</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc><textClass></textClass>
</profileDesc>
</teiHeader>
<front><div type="abstract" xml:lang="en"><p>In the longstanding effort of defining object affordances, a number of resources have been developed on objects and associated knowledge. These resources, however, have limited potential for modeling and generalization mainly due to the restricted, stimulus-bound data collection methodologies adopted. To date, therefore, there exists no resource that truly captures object affordances in a direct, multimodal, and naturalistic way. Here, we present the first such resource of ‘thinking aloud’, spontaneously-generated verbal and motoric data on object affordances. This resource was developed from the reports of 124 participants divided into three behavioural experiments with visuo-tactile stimulation, which were captured audiovisually from two camera-views (frontal/profile). This methodology allowed the acquisition of approximately 95 hours of video, audio, and text data covering: object-feature-action data (e.g., perceptual features, namings, functions), Exploratory Acts (haptic manipulation for feature acquisition/verification), gestures and demonstrations for object/feature/action description, and reasoning patterns (e.g., justifications, analogies) for attributing a given characterization. The wealth and content of the data make this corpus a one-of-a-kind resource for the study and modeling of object affordances.</p>
</div>
</front>
<back><div1 type="bibliography"><listBibl><biblStruct><analytic><author><name sortKey="Vatakis, A" uniqKey="Vatakis A">A. Vatakis</name>
</author>
<author><name sortKey="Pastra, K" uniqKey="Pastra K">K. Pastra</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
</record>
To manipulate this document under Unix (Dilib):
EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Main/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000313 | SxmlIndent | more
Or
HfdSelect -h $EXPLOR_AREA/Data/Main/Curation/biblio.hfd -nk 000313 | SxmlIndent | more
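Once the record has been dumped with HfdSelect, its key identifiers can be pulled out of the TEI XML with standard Unix tools. The following is a minimal sketch, not part of the Dilib toolchain; it inlines the relevant `<idno>` fragment of this record for illustration, whereas in practice you would first redirect the HfdSelect output to `record.xml`.

```shell
# Sketch: extract identifiers from the record's <idno> elements by their
# type attribute, using sed. The record.xml content below reproduces a
# fragment of this notice's publicationStmt.
cat > record.xml <<'EOF'
<idno type="pmid">26784391</idno>
<idno type="pmc">4718047</idno>
<idno type="doi">10.1038/sdata.2015.78</idno>
EOF

# Print the DOI, then the PubMed ID
sed -n 's/.*type="doi">\([^<]*\)<.*/\1/p' record.xml
sed -n 's/.*type="pmid">\([^<]*\)<.*/\1/p' record.xml
```

For anything beyond a quick lookup, a real XML parser is preferable to sed, since attribute order and whitespace in the record are not guaranteed.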
To put a link to this page in the Wicri network:
{{Explor lien |wiki= Ticri/CIDE |area= HapticV1 |flux= Main |étape= Curation |type= RBID |clé= PMC:4718047 |texte= A multimodal dataset of spontaneous speech and movement production on object affordances }}
To generate wiki pages:
HfdIndexSelect -h $EXPLOR_AREA/Data/Main/Curation/RBID.i -Sk "pubmed:26784391" \
  | HfdSelect -Kh $EXPLOR_AREA/Data/Main/Curation/biblio.hfd \
  | NlmPubMed2Wicri -a HapticV1
This area was generated with Dilib version V0.6.23.