Exploration server on haptic devices (Serveur d'exploration sur les dispositifs haptiques)

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Haptic Exploratory Behavior During Object Discrimination: A Novel Automatic Annotation Method

Internal identifier: 000555 (Pmc/Checkpoint); previous: 000554; next: 000556

Haptic Exploratory Behavior During Object Discrimination: A Novel Automatic Annotation Method

Authors: Sander E. M. Jansen; Wouter M. Bergmann Tiest; Astrid M. L. Kappers

Source:

RBID: PMC:4319767

Abstract

In order to acquire information concerning the geometry and material of handheld objects, people tend to execute stereotypical hand movement patterns called haptic Exploratory Procedures (EPs). Manual annotation of haptic exploration trials with these EPs is a laborious task that is affected by subjectivity, attentional lapses, and viewing angle limitations. In this paper we propose an automatic EP annotation method based on position and orientation data from motion tracking sensors placed on both hands and inside a stimulus. A set of kinematic variables is computed from these data and compared to sets of predefined criteria for each of four EPs. Whenever all criteria for a specific EP are met, it is assumed that that particular hand movement pattern was performed. This method is applied to data from an experiment where blindfolded participants haptically discriminated between objects differing in hardness, roughness, volume, and weight. In order to validate the method, its output is compared to manual annotation based on video recordings of the same trials. Although mean pairwise agreement is less between human-automatic pairs than between human-human pairs (55.7% vs 74.5%), the proposed method performs much better than random annotation (2.4%). Furthermore, each EP is linked to a specific object property for which it is optimal (e.g., Lateral Motion for roughness). We found that the percentage of trials where the expected EP was found does not differ between manual and automatic annotation. For now, this method cannot yet completely replace a manual annotation procedure. However, it could be used as a starting point that can be supplemented by manual annotation.


URL:
DOI: 10.1371/journal.pone.0117017
PubMed: 25658703
PubMed Central: 4319767


Affiliations:


Links toward previous steps (curation, corpus...)


Links to Exploration step

PMC:4319767

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Haptic Exploratory Behavior During Object Discrimination: A Novel Automatic Annotation Method</title>
<author>
<name sortKey="Jansen, Sander E M" sort="Jansen, Sander E M" uniqKey="Jansen S" first="Sander E. M." last="Jansen">Sander E. M. Jansen</name>
<affiliation>
<nlm:aff id="aff001"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Bergmann Tiest, Wouter M" sort="Bergmann Tiest, Wouter M" uniqKey="Bergmann Tiest W" first="Wouter M." last="Bergmann Tiest">Wouter M. Bergmann Tiest</name>
<affiliation>
<nlm:aff id="aff001"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Kappers, Astrid M L" sort="Kappers, Astrid M L" uniqKey="Kappers A" first="Astrid M. L." last="Kappers">Astrid M. L. Kappers</name>
<affiliation>
<nlm:aff id="aff001"></nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">25658703</idno>
<idno type="pmc">4319767</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4319767</idno>
<idno type="RBID">PMC:4319767</idno>
<idno type="doi">10.1371/journal.pone.0117017</idno>
<date when="2015">2015</date>
<idno type="wicri:Area/Pmc/Corpus">000268</idno>
<idno type="wicri:Area/Pmc/Curation">000268</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000555</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Haptic Exploratory Behavior During Object Discrimination: A Novel Automatic Annotation Method</title>
<author>
<name sortKey="Jansen, Sander E M" sort="Jansen, Sander E M" uniqKey="Jansen S" first="Sander E. M." last="Jansen">Sander E. M. Jansen</name>
<affiliation>
<nlm:aff id="aff001"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Bergmann Tiest, Wouter M" sort="Bergmann Tiest, Wouter M" uniqKey="Bergmann Tiest W" first="Wouter M." last="Bergmann Tiest">Wouter M. Bergmann Tiest</name>
<affiliation>
<nlm:aff id="aff001"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Kappers, Astrid M L" sort="Kappers, Astrid M L" uniqKey="Kappers A" first="Astrid M. L." last="Kappers">Astrid M. L. Kappers</name>
<affiliation>
<nlm:aff id="aff001"></nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2015">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>In order to acquire information concerning the geometry and material of handheld objects, people tend to execute stereotypical hand movement patterns called haptic Exploratory Procedures (EPs). Manual annotation of haptic exploration trials with these EPs is a laborious task that is affected by subjectivity, attentional lapses, and viewing angle limitations. In this paper we propose an automatic EP annotation method based on position and orientation data from motion tracking sensors placed on both hands and inside a stimulus. A set of kinematic variables is computed from these data and compared to sets of predefined criteria for each of four EPs. Whenever all criteria for a specific EP are met, it is assumed that that particular hand movement pattern was performed. This method is applied to data from an experiment where blindfolded participants haptically discriminated between objects differing in hardness, roughness, volume, and weight. In order to validate the method, its output is compared to manual annotation based on video recordings of the same trials. Although mean pairwise agreement is less between human-automatic pairs than between human-human pairs (55.7% vs 74.5%), the proposed method performs much better than random annotation (2.4%). Furthermore, each EP is linked to a specific object property for which it is optimal (e.g., Lateral Motion for roughness). We found that the percentage of trials where the expected EP was found does not differ between manual and automatic annotation. For now, this method cannot yet completely replace a manual annotation procedure. However, it could be used as a starting point that can be supplemented by manual annotation.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Johansson, Rs" uniqKey="Johansson R">RS Johansson</name>
</author>
<author>
<name sortKey="Vallbo, B" uniqKey="Vallbo ">ÅB Vallbo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Goodwin, Gm" uniqKey="Goodwin G">GM Goodwin</name>
</author>
<author>
<name sortKey="Mccloskey, Di" uniqKey="Mccloskey D">DI McCloskey</name>
</author>
<author>
<name sortKey="Matthews, Pb" uniqKey="Matthews P">PB Matthews</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lederman, Sj" uniqKey="Lederman S">SJ Lederman</name>
</author>
<author>
<name sortKey="Klatzky, Rl" uniqKey="Klatzky R">RL Klatzky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Withagen, A" uniqKey="Withagen A">A Withagen</name>
</author>
<author>
<name sortKey="Kappers, Aml" uniqKey="Kappers A">AML Kappers</name>
</author>
<author>
<name sortKey="Vervloed, Mpj" uniqKey="Vervloed M">MPJ Vervloed</name>
</author>
<author>
<name sortKey="Knoors, H" uniqKey="Knoors H">H Knoors</name>
</author>
<author>
<name sortKey="Verhoeven, L" uniqKey="Verhoeven L">L Verhoeven</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Aggarwal, Jk" uniqKey="Aggarwal J">JK Aggarwal</name>
</author>
<author>
<name sortKey="Ryoo, Ms" uniqKey="Ryoo M">MS Ryoo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Dam, Ea" uniqKey="Van Dam E">EA van Dam</name>
</author>
<author>
<name sortKey="Van Der Harst, Je" uniqKey="Van Der Harst J">JE van der Harst</name>
</author>
<author>
<name sortKey="Ter Braak, Cjf" uniqKey="Ter Braak C">CJF ter Braak</name>
</author>
<author>
<name sortKey="Tegelenbosch, Raj" uniqKey="Tegelenbosch R">RAJ Tegelenbosch</name>
</author>
<author>
<name sortKey="Spruijt, Bm" uniqKey="Spruijt B">BM Spruijt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Holden, Ej" uniqKey="Holden E">EJ Holden</name>
</author>
<author>
<name sortKey="Owens, R" uniqKey="Owens R">R Owens</name>
</author>
<author>
<name sortKey="Roy, Gg" uniqKey="Roy G">GG Roy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wang, X" uniqKey="Wang X">X Wang</name>
</author>
<author>
<name sortKey="Xia, M" uniqKey="Xia M">M Xia</name>
</author>
<author>
<name sortKey="Cai, H" uniqKey="Cai H">H Cai</name>
</author>
<author>
<name sortKey="Gao, Y" uniqKey="Gao Y">Y Gao</name>
</author>
<author>
<name sortKey="Cattani, C" uniqKey="Cattani C">C Cattani</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Oz, C" uniqKey="Oz C">C Oz</name>
</author>
<author>
<name sortKey="Leu, Mc" uniqKey="Leu M">MC Leu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Polanen, V" uniqKey="Van Polanen V">V van Polanen</name>
</author>
<author>
<name sortKey="Bergmann Tiest, Wm" uniqKey="Bergmann Tiest W">WM Bergmann Tiest</name>
</author>
<author>
<name sortKey="Kappers, Aml" uniqKey="Kappers A">AML Kappers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jansen, Sem" uniqKey="Jansen S">SEM Jansen</name>
</author>
<author>
<name sortKey="Bergmann Tiest, Wm" uniqKey="Bergmann Tiest W">WM Bergmann Tiest</name>
</author>
<author>
<name sortKey="Kappers, Aml" uniqKey="Kappers A">AML Kappers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Coren, S" uniqKey="Coren S">S Coren</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Klatzky, Rl" uniqKey="Klatzky R">RL Klatzky</name>
</author>
<author>
<name sortKey="Reed, Cl" uniqKey="Reed C">CL Reed</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="iso-abbrev">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, CA USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">25658703</article-id>
<article-id pub-id-type="pmc">4319767</article-id>
<article-id pub-id-type="publisher-id">PONE-D-14-07976</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0117017</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Haptic Exploratory Behavior During Object Discrimination: A Novel Automatic Annotation Method</article-title>
<alt-title alt-title-type="running-head">Automatic Annotation of Haptic Exploratory Behavior</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Jansen</surname>
<given-names>Sander E. M.</given-names>
</name>
<xref ref-type="aff" rid="aff001"></xref>
<xref ref-type="corresp" rid="cor001">*</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Bergmann Tiest</surname>
<given-names>Wouter M.</given-names>
</name>
<xref ref-type="aff" rid="aff001"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Kappers</surname>
<given-names>Astrid M. L.</given-names>
</name>
<xref ref-type="aff" rid="aff001"></xref>
</contrib>
</contrib-group>
<aff id="aff001">
<addr-line>MOVE Research Institute, VU University Amsterdam, the Netherlands</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Balasubramaniam</surname>
<given-names>Ramesh</given-names>
</name>
<role>Academic Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">
<addr-line>University of California, Merced, UNITED STATES</addr-line>
</aff>
<author-notes>
<fn fn-type="conflict" id="coi001">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="con" id="contrib001">
<p>Conceived and designed the experiments: SJ WBT AK. Performed the experiments: SJ. Analyzed the data: SJ WBT AK. Contributed reagents/materials/analysis tools: SJ WBT. Wrote the paper: SJ WBT AK.</p>
</fn>
<corresp id="cor001">* E-mail:
<email>s.e.m.jansen@vu.nl</email>
</corresp>
</author-notes>
<pub-date pub-type="collection">
<year>2015</year>
</pub-date>
<pub-date pub-type="epub">
<day>6</day>
<month>2</month>
<year>2015</year>
</pub-date>
<volume>10</volume>
<issue>2</issue>
<elocation-id>e0117017</elocation-id>
<history>
<date date-type="received">
<day>20</day>
<month>2</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>17</day>
<month>12</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-year>2015</copyright-year>
<copyright-holder>Jansen et al</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open access article distributed under the terms of the
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License</ext-link>
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:type="simple" xlink:href="pone.0117017.pdf"></self-uri>
<abstract>
<p>In order to acquire information concerning the geometry and material of handheld objects, people tend to execute stereotypical hand movement patterns called haptic Exploratory Procedures (EPs). Manual annotation of haptic exploration trials with these EPs is a laborious task that is affected by subjectivity, attentional lapses, and viewing angle limitations. In this paper we propose an automatic EP annotation method based on position and orientation data from motion tracking sensors placed on both hands and inside a stimulus. A set of kinematic variables is computed from these data and compared to sets of predefined criteria for each of four EPs. Whenever all criteria for a specific EP are met, it is assumed that that particular hand movement pattern was performed. This method is applied to data from an experiment where blindfolded participants haptically discriminated between objects differing in hardness, roughness, volume, and weight. In order to validate the method, its output is compared to manual annotation based on video recordings of the same trials. Although mean pairwise agreement is less between human-automatic pairs than between human-human pairs (55.7% vs 74.5%), the proposed method performs much better than random annotation (2.4%). Furthermore, each EP is linked to a specific object property for which it is optimal (e.g., Lateral Motion for roughness). We found that the percentage of trials where the expected EP was found does not differ between manual and automatic annotation. For now, this method cannot yet completely replace a manual annotation procedure. However, it could be used as a starting point that can be supplemented by manual annotation.</p>
</abstract>
<funding-group>
<funding-statement>This work has been supported by the European Commission with the Collaborative Project no. 248587, “THE Hand Embodied,” within the FP7-ICT-2009-4-2-1 program “Cognitive Systems and Robotics.” The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</funding-statement>
</funding-group>
<counts>
<fig-count count="6"></fig-count>
<table-count count="0"></table-count>
<page-count count="14"></page-count>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="sec001">
<title>Introduction</title>
<p>In contrast to most other senses, haptic perception requires physical contact between the body and an object of inquiry. Both cutaneous and proprioceptive sensory information can be used to explore handheld objects and evaluate their properties. Overall, we can get a rough estimate of most of an object's properties, and thereby identify it, just by grasping and lifting it. However, sometimes a more precise estimate of a specific property is required. For example, in order to insert a needle into specific tissue, a physician needs to judge the tissue's compliance to prevent damage to underlying tissue. In another example, when selecting a certain key from a key ring in a pocket, local differences in shape need to be considered in order to identify the correct key.</p>
<p>The human hand contains sensory receptors in the skin, muscles, and tendons [
<xref rid="pone.0117017.ref001" ref-type="bibr">1</xref>
,
<xref rid="pone.0117017.ref002" ref-type="bibr">2</xref>
]. Relative motion between the skin and the surface of an object stimulates all these receptors differently. Certain hand movements stimulate the receptors in ways that yield predictable responses. Estimation of a particular object property then follows from the integration of these responses over time, taking into account the relative movement over this interval. For example, when hefting an object in the hand, there is little relative motion between the skin and the object surface. However, mechanoreceptors in the skin will react to changes in pressure resulting from the hefting. When this signal is combined with proprioceptive information from muscle spindles, its interpretation yields an estimate of object weight.</p>
<p>In 1987, Lederman and Klatzky [
<xref rid="pone.0117017.ref003" ref-type="bibr">3</xref>
] proposed a set of such stereotyped hand movements called Exploratory Procedures (EPs) that are optimal for retrieving specific object information. For example,
<italic>Lateral Motion</italic>
is the optimal EP for the assessment of roughness while
<italic>Enclosure</italic>
can be used to estimate volume and global shape. Such a taxonomy of hand movements is very useful when investigating haptic exploratory strategies between different groups of people (e.g., children, adults, visually impaired). In addition, these movement strategies could be implemented as distinct motor commands to drive artificial hands used to explore objects and materials in remote and dangerous environments. Currently, the annotation of haptic exploration episodes is done by observing video recordings of object handling and manually annotating it with the EPs [
<xref rid="pone.0117017.ref004" ref-type="bibr">4</xref>
]. This is a laborious task that involves substantial subjectivity on the part of the observer. In addition, it is prone to missing important behavior due to limitations in visibility as well as observers’ attention. An automatic annotation method could greatly improve the efficiency, accuracy, and consistency of these analyses.</p>
<p>Automatic behavior classification systems have been developed both for human and rodent behavior analysis. Applications for such systems include video surveillance of crowds [
<xref rid="pone.0117017.ref005" ref-type="bibr">5</xref>
] and drug testing on rats [
<xref rid="pone.0117017.ref006" ref-type="bibr">6</xref>
]. In such systems, computer vision is employed to identify behavioral patterns such as crowding, fighting, eating, and sleeping. In addition, there is increasing interest in gesture recognition in general and sign language recognition in particular. Several sign recognition systems have been developed based on adaptive fuzzy expert systems [
<xref rid="pone.0117017.ref007" ref-type="bibr">7</xref>
], hidden Markov models [
<xref rid="pone.0117017.ref008" ref-type="bibr">8</xref>
], and neural networks [
<xref rid="pone.0117017.ref009" ref-type="bibr">9</xref>
]. In addition to its contribution to the behavioral annotation domain, this paper could also benefit the aforementioned fields by focussing on a set of key movement parameters with which to study (complex) behavior. This could lead to a better understanding of the motivation behind certain movements.</p>
<p>Even though both types of movements involve the hands, there are fundamental differences between gestures and exploratory procedures. The former involves well-defined hand postures, while the latter requires an object whose properties limit the hand movements performed on it. Furthermore, gestures are executed to convey information, while exploratory hand movements are performed to extract information. Finally, there exists a ground truth for gesturing which does not exist for exploratory hand movements. In gesturing, one specific word or sentence is communicated which allows for evaluation against this ground truth and direct comparison between different recognition systems.</p>
<p>Currently, no method exists to annotate episodes of free haptic object exploration with EPs (or any other taxonomy for that matter). However, there has been some work on classification of haptic exploration in 2D. For a search task on a haptic display, Van Polanen [
<xref rid="pone.0117017.ref010" ref-type="bibr">10</xref>
] and colleagues used two criteria to classify hand movements into three movement types. Moreover, in a recent paper [
<xref rid="pone.0117017.ref011" ref-type="bibr">11</xref>
] we found that a subset of EPs could be identified by analyzing a few hand dynamics and contact force parameters. However, there were some limitations to that approach. First, movements were performed on raised surface stimuli that did not permit free exploration. Second, this method was limited to classifying entire exploration trials into single EPs. With the current study we aim to develop an automatic annotation method capable of handling bimanual exploration of 3D objects. Moreover, this method allows for annotation of a trial with multiple intervals of multiple EPs.</p>
<p>The remainder of the paper is organized in three parts. In part I, we describe the data gathering procedure, which consisted of a discrimination task concerning several haptic object properties. Part II describes the manual and automatic annotation procedures which are applied to the data from part I. In part III, the results of the two methods are compared in order to validate the proposed automatic annotation method. This is followed by a general discussion section.</p>
</sec>
<sec id="sec002">
<title>Part I: Data Gathering</title>
<p>This part describes the discrimination experiment conducted to gather hand movement data and video recordings on which the manual and automatic annotation procedures are based.</p>
<sec id="sec002a">
<title>Participants</title>
<p>Five paid participants (one male) gave written informed consent and took part in the data gathering experiment. Their mean age was 22 years (SD = 2). All of them were strongly right-handed according to Coren’s test [
<xref rid="pone.0117017.ref012" ref-type="bibr">12</xref>
]. All followed the same experimental procedure. The study was approved by the Ethics Committee of the Faculty of Human Movement Sciences of VU University Amsterdam.</p>
</sec>
<sec id="sec002b">
<title>Materials and Methods</title>
<p>
<italic>Stimuli</italic>
—Three stimuli were used during the data gathering procedure. All consisted of a cuboid shaped core of Ebaboard (Ebalta Kunststoff GmbH) (30 × 40 × 120 mm) which was covered on four faces by a compressible mid layer and wrapped in a textured outer layer. The thickness and compliance of the mid layer combined with the variation in roughness from the outer layer resulted in stimuli that differed from each other in roughness, hardness, volume, and weight. The exact values for the different properties are irrelevant in this approach. We are only interested in the behavior that is displayed as a function of the required property. Stimulus A has a mid layer consisting of two layers of corrugated cardboard, an outer layer of structured paper, a volume of 357 cm
<sup>3</sup>
, and it weighs 141 g. Stimulus B has a mid layer of large cell PE foam, an outer layer of sandpaper, a volume of 382 cm
<sup>3</sup>
, and it weighs 131 g. Stimulus C has a mid layer of small cell PE foam, an outer layer of duct tape, a volume of 390 cm
<sup>3</sup>
, and it weighs 133 g. In addition, a narrow tunnel (∅ 2.5 mm) was drilled from the top of each stimulus to accommodate placement of an actual sensor (stimulus B) or dummy (stimuli A & C) in its center. See
<xref ref-type="fig" rid="pone.0117017.g001">Fig. 1</xref>
for a photo of the stimuli.</p>
<fig id="pone.0117017.g001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0117017.g001</object-id>
<label>Fig 1</label>
<caption>
<title>Stimuli used during haptic discrimination.</title>
<p>Because of different mid and outer layers, all differed in roughness, hardness, volume, and weight. Stimulus B contains an actual sensor, while stimuli A and C contain dummies.</p>
</caption>
<graphic xlink:href="pone.0117017.g001"></graphic>
</fig>
<p>
<italic>Motion Tracking</italic>
—During haptic exploration of each stimulus, the position and orientation of the index fingers and thumbs on both hands were registered with a sampling frequency of 300 Hz. In addition, a sensor was placed at the center of stimulus B. For each index finger, sensors were placed at the nail and center of the proximal phalanx. In addition, sensors were positioned on both thumbnails.
<xref ref-type="fig" rid="pone.0117017.g002">Fig. 2</xref>
depicts the positions of these electromagnetic tracking sensors (3D Guidance TrakSTAR).</p>
<fig id="pone.0117017.g002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0117017.g002</object-id>
<label>Fig 2</label>
<caption>
<title>Sensor placement.</title>
<p>In total, seven 6 DoF electromagnetic sensors were used during the data gathering procedure. Three on each hand and one inside the stimulus.</p>
</caption>
<graphic xlink:href="pone.0117017.g002"></graphic>
</fig>
</sec>
<sec id="sec002c">
<title>Design and Procedures</title>
<p>At the start of the experimental session, participants gave informed consent, after which the sensors were attached to their hands and they were blindfolded. Each trial was organized as follows: the index fingers of both hands were placed at starting positions (marked tangibly on the table) approximately 20 cm on either side of a fixed object position. The first stimulus was then placed at the object position. After the experimenter verbally stated which of the four properties (roughness, hardness, volume, or weight) was to be explored, participants used their left hand to lift the stimulus and started exploring the required property (bimanually if they preferred). When they felt that the information was acquired satisfactorily, the object was put down and they moved both hands back to the starting positions. The experimenter then placed the second stimulus on the table and they were allowed to explore it. The task was to decide whether the two stimuli differed on the required property. Participants verbally stated “equal” or “different”, which concluded the trial. For every trial one of the objects was stimulus B (containing an actual sensor), while the other could be any of the three stimuli. However, participants were led to believe that many combinations of the properties existed (in the form of many stimuli) and that any of these could be presented to them. This was done to ensure that they explored each stimulus extensively to acquire information on the required property instead of identifying it and comparing the stimuli from memory. Verbal reports afterwards confirmed that this manipulation had worked. Moreover, data were only gathered during exploration of stimulus B because it contained a real sensor. In total, 24 trials were performed: 4 (properties) × 3 (pair combinations) × 2 (presentation order). However, only trials where stimulus B was presented first were used for analysis. The reason for this is that the first stimulus is explored more extensively because of its role as a reference in the comparison. In contrast, exploration of the second stimulus was often very brief.</p>
</sec>
</sec>
<sec id="sec003">
<title>Part II: Annotation</title>
<p>This part describes both the manual and automatic annotation procedures. The former utilizes video recordings of the exploration sessions while the latter is based on sensor data.</p>
<sec id="sec003a">
<title>Manual EP Annotation</title>
<p>In order to validate the automatic annotation method, it will be compared to the output of a manual annotation procedure. To that end, video recordings were made during data gathering which were then annotated by three independent observers.</p>
<sec id="sec003aa">
<title>Participants</title>
<p>Three paid participants (1 male) took part in the manual annotation procedure after giving written informed consent. Their mean age was 21 years (SD = 0.6). All followed the same experimental protocol and none of them participated in the discrimination experiment described in Part I. The study was approved by the Ethics Committee of the Faculty of Human Movement Sciences of VU University Amsterdam.</p>
</sec>
<sec id="sec003ab">
<title>Materials and Methods</title>
<p>Custom annotation software was created using LabVIEW (National Instruments, version 2011). A vertical dual-monitor setup allowed participants to view the video while annotating the EPs on a timeline depicted on a separate monitor. Each of the seven behaviors (unimanual LM, PR, and UH for the left and right hand, plus bimanual EN) was assigned a keyboard shortcut to hold down whenever that particular behavior was observed. Participants were able to control playback (frame by frame) using a control knob (Powermate by Griffin Technologies). Whenever participants felt they had made a mistake, they could adjust the annotation by decreasing or increasing the length of an interval. Alternatively, a specific EP could be cleared altogether in order to start over for that trial. Video clips were recorded with a resolution of 640 × 480 at 30 Hz.
<xref ref-type="fig" rid="pone.0117017.g003">Fig. 3</xref>
shows screenshots of both monitors during a typical annotation trial.</p>
<fig id="pone.0117017.g003" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0117017.g003</object-id>
<label>Fig 3</label>
<caption>
<title>Manual annotation setup.</title>
<p>The top monitor shows the recorded video in fullscreen while the lower monitor enables annotation and playback options.</p>
</caption>
<graphic xlink:href="pone.0117017.g003"></graphic>
</fig>
</sec>
<sec id="sec003ac">
<title>Design and Procedures</title>
<p>A total of 60 video clips were used in this experiment: 5 people × 4 object properties × 3 trials. One trial was excluded from analysis because its video was accidentally replaced by the video of a different trial (which was then erroneously annotated). Therefore 59 trials were analyzed. All three participants viewed and annotated all clips in a randomized order. Prior to the start of the experiment a few practice clips were annotated together with the experimenter. This was done to ensure that the participants correctly understood the procedure and adhered to the EP definitions as given to them in written form. These were:
<italic>Lateral Motion</italic>
(LM): “The skin is passed laterally across a surface, producing shear force”.
<italic>Pressure</italic>
(PR): “Force is exerted on the object against a resisting force”.
<italic>Unsupported Holding</italic>
(UH): “The object is held while the hand is not externally supported”.
<italic>Enclosure</italic>
(EN): “The fingers are molded closely to the object surface”.</p>
<p>Each trial consisted of the following steps that could be repeated as often as preferred by the participant:
<list list-type="order">
<list-item>
<p>The entire clip was viewed in real time</p>
</list-item>
<list-item>
<p>For each observed EP, the clip was viewed again using the frame-by-frame and fast forward playback options to annotate the observed intervals of that EP.</p>
</list-item>
<list-item>
<p>The fully annotated clip was reviewed in real time. Only when the participant was fully satisfied with the annotation did he/she proceed to the next trial.</p>
</list-item>
</list>
</p>
</sec>
</sec>
<sec id="sec003b">
<title>Automatic EP Annotation</title>
<p>The current method enables the annotation of four of the main EPs described by Lederman and Klatzky [
<xref rid="pone.0117017.ref003" ref-type="bibr">3</xref>
]. These were chosen based on their ability to optimally extract the object properties that participants were required to assess in Part I:
<italic>Lateral Motion</italic>
for roughness,
<italic>Pressure</italic>
for hardness,
<italic>Unsupported Holding</italic>
for weight, and
<italic>Enclosure</italic>
for volume. After filtering the positional data (low-pass second-order Butterworth filter with a cutoff frequency of 6 Hz), a set of variables is computed from the position and orientation data. These variables were chosen such that they enable discrimination between the EPs.</p>
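<!--
A minimal sketch of the filtering step described above, assuming 300 Hz tracker data stored as an (n_samples, 3) NumPy array per sensor. The authors' implementation is in Matlab; this Python/SciPy version is illustrative only, and the function name is hypothetical.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 300.0      # tracker sampling frequency (Hz)
CUTOFF = 6.0    # low-pass cutoff (Hz), as stated in the text

def lowpass_positions(xyz):
    """Second-order Butterworth low-pass filter of one sensor's positions.
    Applied forward and backward (filtfilt) here, which is an implementation choice."""
    b, a = butter(2, CUTOFF, btype="low", fs=FS)
    return filtfilt(b, a, xyz, axis=0)
-->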
<p>The remainder of this section describes the EPs and variables. First, for each of the four EPs, we state its description as given by Klatzky and Reed [
<xref rid="pone.0117017.ref013" ref-type="bibr">13</xref>
], followed by our implementation of this and the criteria chosen for this specific setup. Then, we explain how the variables used to describe these EPs are computed from the motion tracking data. At each time step (3.33 ms), the variables for both hands are compared to the criteria and each hand might be annotated with one of the four EPs or left blank if none can be matched. Note that the
<italic>Enclosure</italic>
EP requires both hands to match the criteria, while the three others can be annotated unimanually. The values belonging to the qualifications ‘low’, ‘high’, ‘small’, and ‘close’ are specific to the current setup. This method is implemented in Matlab (The MathWorks, version 2013b) and is subdivided into several parts that are executed consecutively.</p>
<sec id="sec003ba">
<title>Implementation of EPs</title>
<p>
<italic>Lateral Motion</italic>
(LM): One hand holds the stimulus while the other rubs against the surface.
<list list-type="bullet">
<list-item>
<p>At least one of the fingernails should be close to the surface (< 20 mm) and have a high relative speed compared to the stimulus (> 0.10 m/s).</p>
</list-item>
<list-item>
<p>The thumb on the opposite hand should have a low relative speed compared to the stimulus (< 0.05 m/s). Although the index finger is often used to grasp the stimulus (and is therefore positioned close to the surface), it is not required, as other fingers might also be used. However, for a grasp, the thumb is required.</p>
</list-item>
</list>
</p>
<p>
<italic>Pressure</italic>
(PR): The instrumented stimulus (stimulus B) has a compliant mid layer. Therefore, the thumb as well as one of the fingers will deform it when applying pressure.
<list list-type="bullet">
<list-item>
<p>The size of the minimal convex hull that encloses the thumb and the stimulus is smaller than it was when the stimulus was initially lifted from the table.</p>
</list-item>
<list-item>
<p>As the result of a pinching motion, there is a decrease in this hull size prior to exceeding the above mentioned threshold. This criterion is added to prevent annotation due to an outwards pointing thumb (and subsequent overestimated hull size) when picking up the stimulus.</p>
</list-item>
</list>
</p>
<p>
<italic>Unsupported Holding</italic>
(UH): Due to the size and weight of the stimulus, we assume that unsupported holding is performed unimanually; pilot recordings confirmed this.
<list list-type="bullet">
<list-item>
<p>The thumb is close (< 20 mm) to the stimulus surface.</p>
</list-item>
<list-item>
<p>Relative speed between the sensors on both the thumb and index finger (of one hand) and the stimulus is low (< 0.05 m/s).</p>
</list-item>
<list-item>
<p>For the opposite hand, the virtual midpoint between the thumb and index finger lies outside the stimulus volume.</p>
</list-item>
<list-item>
<p>The sensor on the thumb of the opposite hand is not close (> 20 mm) to the stimulus surface.</p>
</list-item>
</list>
</p>
<p>
<italic>Enclosure</italic>
(EN): The hand configuration is stable for a moment without deforming the object. Furthermore, the stimuli used in this study can only be enclosed using both hands. Therefore, this EP cannot be annotated unimanually.
<list list-type="bullet">
<list-item>
<p>Relative speed between all four fingernails and the stimulus is low (< 0.05 m/s).</p>
</list-item>
<list-item>
<p>For both hands, the virtual midpoint between the thumb and index finger lies within the stimulus volume.</p>
</list-item>
<list-item>
<p>For both hands, the rate of change of the index finger angle is low (< 80 deg/s).</p>
</list-item>
</list>
</p>
</sec>
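<!--
To make the criterion matching concrete, the following Python fragment sketches the Lateral Motion check for one hand at a single time step. The thresholds are the ones quoted above; the dictionary layout of the per-hand kinematic variables and the function name are assumptions, not the authors' Matlab data structures.

CLOSE_MM = 20.0   # "close to the surface" (mm)
FAST_MS = 0.10    # "high relative speed" (m/s)
SLOW_MS = 0.05    # "low relative speed" (m/s)

def lateral_motion(hand, other_hand):
    """True if this hand rubs the surface while the other hand holds the stimulus."""
    rubbing = any(d < CLOSE_MM and v > FAST_MS
                  for d, v in zip(hand["nail_dists_mm"], hand["nail_speeds_m_s"]))
    holding = other_hand["thumb_speed_m_s"] < SLOW_MS
    return rubbing and holding

At each 3.33 ms time step, a hand would be labelled with every EP whose criteria are all met (conflicts are resolved in post-processing), or left blank if none match.
-->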
<sec id="sec003bb">
<title>Computation of Variables</title>
<p>
<italic>Relative Speed</italic>
—For all four sensors placed on fingernails (both index fingers and thumbs), the relative speed compared to the stimulus is calculated. This is defined as the rate of change of the Euclidean distance between the sensor on the nail and the sensor at the center of the stimulus.</p>
<p>
<italic>Convex Hull Size</italic>
—For both hands, we compute the size of the smallest 3D convex hull that encloses the entire stimulus and the thumbnail sensor. In addition, the rate of change of this variable is computed.</p>
<p>
<italic>Stimulus Elevation</italic>
—This is defined as the
<italic>z</italic>
-coordinate of the sensor placed inside stimulus B. In addition, the rate of change of this variable is computed.</p>
<p>
<italic>Distance to Stimulus Surface</italic>
—Each of the surface planes is represented as a grid of 45 points. For all four sensors placed on fingernails, the Euclidean distance to the closest point on the stimulus surface was calculated.</p>
<p>
<italic>Virtual Midpoint within Stimulus</italic>
—For both hands, a virtual point was computed that lies midway between the sensors placed at index finger and thumb nails. Then, for each time step it is checked whether these points lie within the stimulus volume.</p>
<p>
<italic>Index Finger Angle</italic>
—For both index fingers this is calculated as the angle between the orientation of the sensor on the nail and the sensor on the proximal phalanx of the index finger. In addition, the rate of change of this variable is computed.</p>
</sec>
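<!--
The kinematic variables above can be computed with standard routines. A small Python/SciPy sketch for two of them; the hull "size" is taken here to be its volume, and the cuboid corner model of the stimulus, array layouts, and function names are assumptions rather than the authors' implementation.

import numpy as np
from scipy.spatial import ConvexHull

DT = 1.0 / 300.0   # time step (s)

def relative_speed(nail_xyz, stim_xyz):
    """Rate of change of the Euclidean distance between a nail sensor and the stimulus sensor."""
    dist = np.linalg.norm(nail_xyz - stim_xyz, axis=1)   # per-sample distance
    return np.gradient(dist, DT)

def hull_size(stim_corners_xyz, thumb_xyz):
    """Volume of the smallest convex hull enclosing the stimulus corners and the thumbnail sensor."""
    points = np.vstack([stim_corners_xyz, thumb_xyz[None, :]])
    return ConvexHull(points).volume
-->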
<sec id="sec003bc">
<title>Post Processing</title>
<p>For each time step, the computed variables are checked against the set of criteria for each EP. Whenever all criteria for an EP are met, it is assumed that the particular behavior is displayed and annotation takes place accordingly. After this initial annotation, a post-processing phase is executed. First, short gaps (less than 50 ms) that lie between two intervals of a certain EP are filled with that same annotation. Then, each annotated interval is checked against a predefined minimum duration (17 to 250 ms depending on the EP); everything shorter is deleted. Finally, there is a check of whether incompatible EPs are annotated simultaneously (e.g., lateral motion and pressure with the same hand). If this is the case, then one of them is deleted according to the following dominance ranking from high to low: LM—UH—PR—EN. This order is based on the likelihood that an EP is annotated combined with the complexity of the movement. For instance,
<italic>Pressure</italic>
will cancel
<italic>Enclosure</italic>
when annotated simultaneously, because the bimanual execution of the former could be described as the latter plus a deformation of the object. The post-processing routine is executed twice to ensure clean-up of short intervals resulting from the resolution of such simultaneities.</p>
</sec>
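<!--
The post-processing rules can be expressed on a boolean time line per EP and per hand. A rough Python sketch of the three steps, under the assumption that each annotation channel is a boolean NumPy array sampled at 300 Hz (this is not the authors' released Matlab script):

import numpy as np

DT_MS = 1000.0 / 300.0
DOMINANCE = ["LM", "UH", "PR", "EN"]   # ranking from high to low, as in the text

def runs(mask):
    """(start, stop) index pairs of consecutive True samples."""
    edges = np.flatnonzero(np.diff(np.r_[0, mask.astype(int), 0]))
    return list(zip(edges[::2], edges[1::2]))

def fill_short_gaps(mask, max_gap_ms=50):
    out = mask.copy()
    for start, stop in runs(~mask):
        if 0 < start and stop < len(mask) and (stop - start) * DT_MS < max_gap_ms:
            out[start:stop] = True   # gap bounded by the same EP on both sides
    return out

def drop_short_intervals(mask, min_dur_ms):
    out = mask.copy()
    for start, stop in runs(mask):
        if (stop - start) * DT_MS < min_dur_ms:
            out[start:stop] = False
    return out

def resolve_conflicts(ep_masks):
    """Where incompatible EPs overlap (one hand's channels), keep only the most dominant one."""
    claimed = np.zeros_like(next(iter(ep_masks.values())))
    for ep in DOMINANCE:
        ep_masks[ep] = ep_masks[ep] & ~claimed
        claimed = claimed | ep_masks[ep]
    return ep_masks
-->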
</sec>
</sec>
<sec id="sec004">
<title>Part III: Results</title>
<p>This part describes how the automatic annotation method compares to the manual annotation. Furthermore, we investigate the relationship between the required object property and the expected EP.</p>
<sec id="sec004a">
<title>Comparison between Manual and Automatic Annotation</title>
<p>The output of both manual and automatic annotation procedures is compared in two ways, a general and a more specific approach. First, for each rater and each trial we establish the main and secondary EP based on the total duration of each EP for that particular trial. Often, at least two EPs were annotated during a trial, which is why we chose to look at the two with the longest duration. If only one or no EPs are annotated, the result is one or two blank outputs, respectively. We then evaluate the overlap in output between the automatic rater and the human raters. In 74.6% of all trials, either the main or secondary EP according to the automatic rater corresponds to the main or secondary EP according to at least one of the human raters. In comparison, for random annotation this is 53.6% (averaged over 10 runs of random annotations for all trials).</p>
<p>More specifically, we can compare the annotation outputs by evaluating the agreement over time. The output for this method is the percentage of time that two raters agree about the presence of all EPs. Here, the exact timing of the annotated EP plays a role because the evaluation of the agreement takes place at each time step. For each trial, six percentages of agreement are computed (one for each possible pair based on three human and one automatic rater). See
<xref ref-type="fig" rid="pone.0117017.g004">Fig. 4</xref>
for the output of two trials accompanied by visual representations of the comparisons.</p>
<fig id="pone.0117017.g004" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0117017.g004</object-id>
<label>Fig 4</label>
<caption>
<title>Annotation of two example exploration trials.</title>
<p>Panel A: the left and right columns show output from all raters for a volume and a roughness trial, respectively. The possible EPs are:
<italic>Pressure</italic>
(PR),
<italic>Lateral Motion</italic>
(LM),
<italic>Unsupported Holding</italic>
(UH), and
<italic>Enclosure</italic>
(EN). The width of the box represents the normalized trial duration and each annotated interval is color coded according to the legend. Panel B shows the main and secondary EP based on total duration annotated by each rater. The same legend is used to indicate the EP and hand. Panel C displays the pairwise agreement as a percentage of trial duration.</p>
</caption>
<graphic xlink:href="pone.0117017.g004"></graphic>
</fig>
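<!--
The per-time-step agreement between two raters can be computed as the fraction of samples on which their complete EP labellings coincide. A short Python sketch, assuming each rater is represented as a dict mapping (EP, hand) to a boolean array over the trial; this encoding and the function name are assumptions, not the authors' format.

import numpy as np

def pairwise_agreement(rater_a, rater_b):
    """Percentage of time steps on which two raters assign exactly the same EP labels."""
    keys = sorted(rater_a)                         # both raters share the same channels
    a = np.vstack([rater_a[k] for k in keys])
    b = np.vstack([rater_b[k] for k in keys])
    agree = np.all(a == b, axis=0)                 # agreement on every channel at each step
    return 100.0 * agree.mean()
-->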
<p>As a first analysis, we want to investigate if the EP agreement depends on the pair. Therefore, we performed a simple one-way ANOVA on the agreement data with
<italic>pair</italic>
(6 levels) as the only factor. The result shows that there is an effect of
<italic>pair</italic>
on mean agreement,
<italic>F</italic>
(3.2, 186) = 44;
<italic>p</italic>
< 0.001. Post-hoc analysis reveals that the three human-human pairs have higher mean agreement percentages (76, 75, 73) than the human-automatic pairs (56, 56, 55),
<italic>p</italic>
< 0.001 for all.</p>
<p>The average of the three pairs constitutes a mean agreement percentage for each pair type. For the agreement data, a 4 × 2 mixed factorial ANOVA was performed with
<italic>object property</italic>
(4 levels) as the between-group variable and
<italic>pair type</italic>
(human-human vs. human-automatic) as the repeated measures variable. Overall, there was an effect of
<italic>object property</italic>
on the annotation agreement,
<italic>F</italic>
(3, 55) = 5.4;
<italic>p</italic>
< 0.01. Post-hoc analysis revealed that agreement was higher for weight trials compared to hardness or volume trials. In addition, there was a main effect of
<italic>pair type</italic>
, indicating higher agreement between a pair of human raters (74.5%) compared to a human-automatic pair (55.7%),
<italic>F</italic>
(1, 55) = 97;
<italic>p</italic>
< 0.001. This means that on average, the automatic annotation agrees with manual annotation about the presence of all EPs for 55.7% of the trial duration (ranging between 51.2% and 64.2% for the different object properties). In comparison, the agreement between manual annotation and random annotation (one random EP for the entire duration of the trial) is 2.4%. See
<xref ref-type="fig" rid="pone.0117017.g005">Fig. 5</xref>
for a graphical representation of these results.</p>
<fig id="pone.0117017.g005" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0117017.g005</object-id>
<label>Fig 5</label>
<caption>
<title>Mean percentage of trial duration that a pair of raters is in agreement over the annotation of all EPs.</title>
<p>Data are shown for each property separately and as the mean over all trials. The green dashed line indicates mean agreement between human and random annotation. Error bars represent standard deviation.</p>
</caption>
<graphic xlink:href="pone.0117017.g005"></graphic>
</fig>
</sec>
<sec id="sec004b">
<title>Object Properties & Expected EPs</title>
<p>In this paper we focus on the proposed automatic annotation method by comparing its output to manual annotation. Nevertheless, we would also like to investigate the relationship between object properties and EPs for the current setup. Therefore, we analyze how often the expected EP (which is optimal for a specific property) is found as the main or secondary EP by the raters. The expected links are:
<italic>Lateral Motion</italic>
for roughness,
<italic>Pressure</italic>
for hardness,
<italic>Unsupported Holding</italic>
for weight, and
<italic>Enclosure</italic>
for volume. The dependent variable is the percentage of trials where the expected EP was found as either the main or secondary EP by the raters.</p>
<p>A 4 × 2 mixed factorial ANOVA was performed with
<italic>object property</italic>
(4 levels) as the between-group variable and
<italic>rater type</italic>
(human vs. automatic) as the repeated measures variable. There is a significant main effect for
<italic>property</italic>
,
<italic>F</italic>
(3, 16) = 6.6;
<italic>p</italic>
< 0.01; in trials where volume was the required property, the percentage of trials where the expected EP (i.e., Enclosure) was found as the main or secondary EP was smaller than in trials where hardness (
<italic>p</italic>
< 0.01) and roughness (
<italic>p</italic>
= 0.01) were being assessed. The percentage of trials did not differ between raters, nor was there a significant interaction between
<italic>rater type</italic>
and
<italic>property</italic>
. See
<xref ref-type="fig" rid="pone.0117017.g006">Fig. 6</xref>
for a graphical representation of these results.</p>
<fig id="pone.0117017.g006" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0117017.g006</object-id>
<label>Fig 6</label>
<caption>
<title>Percentage of trials for which the expected EP was found as the main or secondary EP.</title>
<p>The expected links are:
<italic>Lateral Motion</italic>
for roughness,
<italic>Pressure</italic>
for hardness,
<italic>Unsupported Holding</italic>
for weight, and
<italic>Enclosure</italic>
for volume. This percentage is given as a function of object property and rater type. Error bars represent standard deviation.</p>
</caption>
<graphic xlink:href="pone.0117017.g006"></graphic>
</fig>
</sec>
</sec>
<sec id="sec005">
<title>General Discussion</title>
<p>We have proposed a new method for automatic annotation of haptic exploration with a subset of the Exploratory Procedures (EPs) proposed by Lederman and Klatzky [
<xref rid="pone.0117017.ref003" ref-type="bibr">3</xref>
]. The method is based on the assumption that a set of key parameters can describe these EPs. For example,
<italic>Lateral Motion</italic>
is characterized by high relative motion between the skin of one hand and the surface, while the other hand holds the object. Several distance and speed parameters can describe this behavior. The pattern of movement (i.e., up-down or circular) and the amount of force applied to the surface are irrelevant in this case. Other EPs included in this method are
<italic>Pressure</italic>
,
<italic>Unsupported Holding</italic>
, and
<italic>Enclosure</italic>
. At each time step during exploration, the values for all parameters are checked against sets of criteria defined for each EP. If all criteria for a particular EP are met, it is assumed that this type of movement is performed.</p>
<p>The proposed automatic annotation method is applied to data gathered from a discrimination experiment where participants were required to determine if two objects differed on a particular object property (e.g., hardness, roughness). In order to validate the method, it is compared to manual annotation of the same exploration episodes. Three independent observers annotated video recordings of all trials using the same set of possible EPs. Results show that in 74.6% of all trials there was overlap between the main and secondary EP (based on duration) as indicated by the automatic and manual annotations. In addition, we investigated the percentage of time that a pair of observers is in agreement about the presence of all EPs. The mean agreement between the automatic annotation and a human observer was 55.7% (compared to 74.5% for a human pair and 2.4% for random annotation). The difference in agreement between the two types of rater pairs may be explained by the fact that the automatic rater is more sensitive than the human observers. Often, it annotates small patches of
<italic>Lateral Motion</italic>
and
<italic>Pressure</italic>
which are not seen by people. This does not mean that the behavior did not take place. It just shows that people have learned to ignore (perhaps insignificant) details when presented with some other clearly visible behavior. Furthermore, it seems that agreement was higher during annotation of weight trials compared to hardness and volume. Presumably, this is due to the unambiguous execution of the unsupported holding EP. This is the optimal procedure for weight perception and seems to be easily detected by both types of observers.</p>
<p>Related to this, we examined how often the expected EP for each trial (based on the proposed links by Lederman and Klatzky [
<xref rid="pone.0117017.ref003" ref-type="bibr">3</xref>
]) was found as a function of rater type and object property. The results indicate that the automatic annotation did not differ significantly from the manual annotation with respect to this expected EP. However, for the volume trials, the expected EP (i.e.,
<italic>Enclosure</italic>
) was found less often than for the other properties. One explanation for this is that our stimuli are block-shaped and therefore their volume could be determined by estimating the three orthogonal dimensions with pinch grasps instead of enclosing the stimulus with two hands. It should be noted here that the variance of this measure was very high, meaning that the participants used different EPs when exploring a particular object property.</p>
<p>The current approach contains several limitations. First, only four EPs can be annotated at this moment. This was done to limit the amount of time required for both the acquisition of the discrimination data and the implementation of the different EPs. Second, the method assumes that objects deform under small amounts of pressure, which is not true for all handheld objects. Therefore, a useful addition to the method would be to add multiple small force-sensitive resistors to the surface of the object. This enables registration of the normal force produced on a surface in addition to the movement of fingers. Ideally, this would lead to instrumented objects that do not require any motion registration on the hands to recognize stereotypical exploratory behavior.</p>
<p>Interestingly, for some trials neither the human observers nor the automatic annotation method recognized any of the possible EPs. From an evaluation point of view, this is a good result because the annotation outputs correspond. However, it indicates that people sometimes perform hand movements that simply do not fall into one of the four categories defined as EPs. This could be due to the limited set of EPs involved in this study, but that seems unlikely since these were the expected EPs based on the required object properties. This is an interesting observation that justifies further investigation. Perhaps the original taxonomy could benefit from extended definitions or additional EPs. However, we first need to gather more support for this by investigating other exploratory tasks as well as other stimulus sets.</p>
<p>In order to improve this method, we need to know why certain behavior observed by human raters was not recognized by our system and vice versa. One explanation could be that the criteria used in the current implementation are constant across the participants from whom the data are gathered, even though their behavior is not identical. What is a relatively fast movement for one person might be slow for another. The addition of a calibration phase would be very useful in this regard: by analyzing stereotypical movements, these criteria could be determined automatically for each participant.</p>
<p>Overall, it can be concluded that the proposed method shows some promising results. However, it is not yet on par with manual annotation. Note that in this case, manual annotation is not a gold standard. On average, a pair of human annotators agrees for roughly 75% (ranging from 66–83%) of the duration of a trial, which indicates the subjectivity of manual annotation. This is one of the main reasons why an automatic system is preferable; one observer might see something that another does not. An automatic system has the benefits of being consistent, efficient, and transparent in its annotation. For now, it seems that the proposed method could be used as a starting point that human observers can then edit to their liking. In that way they are encouraged to think about reasons for adding, deleting, or altering EP intervals.</p>
</sec>
<sec sec-type="supplementary-material" id="sec006">
<title>Supporting Information</title>
<supplementary-material content-type="local-data" id="pone.0117017.s001">
<label>S1 Annotation Output</label>
<caption>
<p>(PDF)</p>
</caption>
<media xlink:href="pone.0117017.s001.pdf">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0117017.s002">
<label>S2 Annotation Output</label>
<caption>
<p>(PDF)</p>
</caption>
<media xlink:href="pone.0117017.s002.pdf">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0117017.s003">
<label>S3 Annotation Output</label>
<caption>
<p>(PDF)</p>
</caption>
<media xlink:href="pone.0117017.s003.pdf">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0117017.s004">
<label>S4 Annotation Output</label>
<caption>
<p>(PDF)</p>
</caption>
<media xlink:href="pone.0117017.s004.pdf">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0117017.s005">
<label>S5 Annotation Output</label>
<caption>
<p>(PDF)</p>
</caption>
<media xlink:href="pone.0117017.s005.pdf">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0117017.s006">
<label>S1 Data Example</label>
<caption>
<p>(MAT)</p>
</caption>
<media xlink:href="pone.0117017.s006.mat">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0117017.s007">
<label>S1 Matlab Script</label>
<caption>
<p>(M)</p>
</caption>
<media xlink:href="pone.0117017.s007.m">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="pone.0117017.ref001">
<label>1</label>
<mixed-citation publication-type="journal">
<name>
<surname>Johansson</surname>
<given-names>RS</given-names>
</name>
,
<name>
<surname>Vallbo</surname>
<given-names>ÅB</given-names>
</name>
(
<year>1979</year>
)
<article-title>Tactile sensibility in the human hand: relative and absolute densities of four types of mechanoreceptive units in glabrous skin</article-title>
.
<source>The Journal of physiology</source>
<volume>286</volume>
:
<fpage>283</fpage>
<lpage>300</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1113/jphysiol.1979.sp012619">10.1113/jphysiol.1979.sp012619</ext-link>
</comment>
<pub-id pub-id-type="pmid">439026</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0117017.ref002">
<label>2</label>
<mixed-citation publication-type="journal">
<name>
<surname>Goodwin</surname>
<given-names>GM</given-names>
</name>
,
<name>
<surname>McCloskey</surname>
<given-names>DI</given-names>
</name>
,
<name>
<surname>Matthews</surname>
<given-names>PB</given-names>
</name>
(
<year>1972</year>
)
<article-title>Proprioceptive illusions induced by muscle vibration: contribution by muscle spindles to perception?</article-title>
<source>Science</source>
<volume>175</volume>
:
<fpage>1382</fpage>
<lpage>1384</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1126/science.175.4028.1382">10.1126/science.175.4028.1382</ext-link>
</comment>
<pub-id pub-id-type="pmid">4258209</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0117017.ref003">
<label>3</label>
<mixed-citation publication-type="journal">
<name>
<surname>Lederman</surname>
<given-names>SJ</given-names>
</name>
,
<name>
<surname>Klatzky</surname>
<given-names>RL</given-names>
</name>
(
<year>1987</year>
)
<article-title>Hand movements: A window into haptic object recognition</article-title>
.
<source>Cognitive psychology</source>
<volume>19</volume>
:
<fpage>342</fpage>
<lpage>368</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/0010-0285(87)90008-9">10.1016/0010-0285(87)90008-9</ext-link>
</comment>
<pub-id pub-id-type="pmid">3608405</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0117017.ref004">
<label>4</label>
<mixed-citation publication-type="journal">
<name>
<surname>Withagen</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Kappers</surname>
<given-names>AML</given-names>
</name>
,
<name>
<surname>Vervloed</surname>
<given-names>MPJ</given-names>
</name>
,
<name>
<surname>Knoors</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Verhoeven</surname>
<given-names>L</given-names>
</name>
(
<year>2013</year>
)
<article-title>The use of exploratory procedures by blind and sighted adults and children</article-title>
.
<source>Attention, Perception, & Psychophysics</source>
<volume>75</volume>
:
<fpage>1451</fpage>
<lpage>1464</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3758/s13414-013-0479-0">10.3758/s13414-013-0479-0</ext-link>
</comment>
<pub-id pub-id-type="pmid">23757045</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0117017.ref005">
<label>5</label>
<mixed-citation publication-type="journal">
<name>
<surname>Aggarwal</surname>
<given-names>JK</given-names>
</name>
,
<name>
<surname>Ryoo</surname>
<given-names>MS</given-names>
</name>
(
<year>2011</year>
)
<article-title>Human activity analysis: A review</article-title>
.
<source>ACM Computing Surveys (CSUR)</source>
<volume>43</volume>
:
<fpage>1</fpage>
<lpage>43</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1145/1922649.1922653">10.1145/1922649.1922653</ext-link>
</comment>
</mixed-citation>
</ref>
<ref id="pone.0117017.ref006">
<label>6</label>
<mixed-citation publication-type="journal">
<name>
<surname>van Dam</surname>
<given-names>EA</given-names>
</name>
,
<name>
<surname>van der Harst</surname>
<given-names>JE</given-names>
</name>
,
<name>
<surname>ter Braak</surname>
<given-names>CJF</given-names>
</name>
,
<name>
<surname>Tegelenbosch</surname>
<given-names>RAJ</given-names>
</name>
,
<name>
<surname>Spruijt</surname>
<given-names>BM</given-names>
</name>
,
<etal>et al</etal>
(
<year>2013</year>
)
<article-title>An automated system for the recognition of various specific rat behaviors</article-title>
.
<source>Journal of neuroscience methods</source>
<volume>218</volume>
:
<fpage>214</fpage>
<lpage>224</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.jneumeth.2013.05.012">10.1016/j.jneumeth.2013.05.012</ext-link>
</comment>
<pub-id pub-id-type="pmid">23769769</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0117017.ref007">
<label>7</label>
<mixed-citation publication-type="journal">
<name>
<surname>Holden</surname>
<given-names>EJ</given-names>
</name>
,
<name>
<surname>Owens</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Roy</surname>
<given-names>GG</given-names>
</name>
(
<year>1996</year>
)
<article-title>Hand movement classification using an adaptive fuzzy expert system</article-title>
.
<source>International Journal of Expert Systems Research and Applications</source>
<volume>9</volume>
:
<fpage>465</fpage>
<lpage>480</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0117017.ref008">
<label>8</label>
<mixed-citation publication-type="journal">
<name>
<surname>Wang</surname>
<given-names>X</given-names>
</name>
,
<name>
<surname>Xia</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Cai</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Gao</surname>
<given-names>Y</given-names>
</name>
,
<name>
<surname>Cattani</surname>
<given-names>C</given-names>
</name>
(
<year>2012</year>
)
<article-title>Hidden-markov-models-based dynamic hand gesture recognition</article-title>
.
<source>Mathematical Problems in Engineering</source>
<volume>2012</volume>
:
<fpage>1</fpage>
<lpage>11</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1155/2012/460430">10.1155/2012/460430</ext-link>
</comment>
</mixed-citation>
</ref>
<ref id="pone.0117017.ref009">
<label>9</label>
<mixed-citation publication-type="journal">
<name>
<surname>Oz</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Leu</surname>
<given-names>MC</given-names>
</name>
(
<year>2011</year>
)
<article-title>American sign language word recognition with a sensory glove using artificial neural networks</article-title>
.
<source>Engineering Applications of Artificial Intelligence</source>
<volume>24</volume>
:
<fpage>1204</fpage>
<lpage>1213</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.engappai.2011.06.015">10.1016/j.engappai.2011.06.015</ext-link>
</comment>
</mixed-citation>
</ref>
<ref id="pone.0117017.ref010">
<label>10</label>
<mixed-citation publication-type="journal">
<name>
<surname>van Polanen</surname>
<given-names>V</given-names>
</name>
,
<name>
<surname>Bergmann Tiest</surname>
<given-names>WM</given-names>
</name>
,
<name>
<surname>Kappers</surname>
<given-names>AML</given-names>
</name>
(
<year>2012</year>
)
<article-title>Haptic pop-out of movable stimuli</article-title>
.
<source>Attention, Perception, & Psychophysics</source>
<volume>74</volume>
:
<fpage>204</fpage>
<lpage>215</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3758/s13414-011-0216-5">10.3758/s13414-011-0216-5</ext-link>
</comment>
<pub-id pub-id-type="pmid">22006526</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0117017.ref011">
<label>11</label>
<mixed-citation publication-type="journal">
<name>
<surname>Jansen</surname>
<given-names>SEM</given-names>
</name>
,
<name>
<surname>Bergmann Tiest</surname>
<given-names>WM</given-names>
</name>
,
<name>
<surname>Kappers</surname>
<given-names>AML</given-names>
</name>
(
<year>2013</year>
)
<article-title>Identifying haptic exploratory procedures by analyzing hand dynamics and contact force</article-title>
.
<source>IEEE Transactions on Haptics</source>
<volume>6</volume>
:
<fpage>464</fpage>
<lpage>472</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1109/TOH.2013.22">10.1109/TOH.2013.22</ext-link>
</comment>
<pub-id pub-id-type="pmid">24808398</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0117017.ref012">
<label>12</label>
<mixed-citation publication-type="journal">
<name>
<surname>Coren</surname>
<given-names>S</given-names>
</name>
(
<year>1993</year>
)
<article-title>Measurement of handedness via self-report: the relationship between brief and extended inventories</article-title>
.
<source>Perceptual and motor skills</source>
<volume>76</volume>
:
<fpage>1035</fpage>
<lpage>1042</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.2466/pms.1993.76.3.1035">10.2466/pms.1993.76.3.1035</ext-link>
</comment>
<pub-id pub-id-type="pmid">8321574</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0117017.ref013">
<label>13</label>
<mixed-citation publication-type="journal">
<name>
<surname>Klatzky</surname>
<given-names>RL</given-names>
</name>
,
<name>
<surname>Reed</surname>
<given-names>CL</given-names>
</name>
(
<year>2009</year>
)
<article-title>Haptic exploration</article-title>
.
<source>Scholarpedia</source>
<volume>4</volume>
:
<fpage>7941</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.4249/scholarpedia.7941">10.4249/scholarpedia.7941</ext-link>
</comment>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list></list>
<tree>
<noCountry>
<name sortKey="Bergmann Tiest, Wouter M" sort="Bergmann Tiest, Wouter M" uniqKey="Bergmann Tiest W" first="Wouter M." last="Bergmann Tiest">Wouter M. Bergmann Tiest</name>
<name sortKey="Jansen, Sander E M" sort="Jansen, Sander E M" uniqKey="Jansen S" first="Sander E. M." last="Jansen">Sander E. M. Jansen</name>
<name sortKey="Kappers, Astrid M L" sort="Kappers, Astrid M L" uniqKey="Kappers A" first="Astrid M. L." last="Kappers">Astrid M. L. Kappers</name>
</noCountry>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000555 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 000555 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:4319767
   |texte=   Haptic Exploratory Behavior During Object Discrimination: A Novel Automatic Annotation Method
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:25658703" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024