Exploration server for haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information is therefore not validated.

Using space and time to encode vibrotactile information: toward an estimate of the skin's achievable throughput.

Internal identifier: 000326 (PubMed/Corpus); previous: 000325; next: 000327


Authors: Scott D. Novich; David M. Eagleman

Source: Experimental brain research. 2015;233(10):2777-88.

RBID: pubmed:26080756

Abstract

Touch receptors in the skin can relay various forms of abstract information, such as words (Braille), haptic feedback (cell phones, game controllers, feedback for prosthetic control), and basic visual information such as edges and shape (sensory substitution devices). The skin can support such applications with ease: they are all low bandwidth and do not require fine temporal acuity. But what of high-throughput applications? We use sound-to-touch conversion as a motivating example, though others abound (e.g., vision, stock market data). In the past, vibrotactile hearing aids have demonstrated improvement in speech perception in the deaf. However, a sound-to-touch sensory substitution device that works with high efficacy and without the aid of lipreading has yet to be developed. Is this because skin simply does not have the capacity to effectively relay high-throughput streams such as sound? Or is this because the spatial and temporal properties of skin have not been leveraged to full advantage? Here, we begin to address these questions with two experiments. First, we seek to determine the best method of relaying information through the skin using an identification task on the lower back. We find that vibrotactile patterns encoding information in both space and time yield the best overall information transfer estimate. Patterns encoded in space and time or in "intensity" (the coupled coding of vibration frequency and force) both far exceed the performance of purely spatially encoded patterns. Next, we determine the vibrotactile two-tacton resolution on the lower back: the distance necessary for resolving two vibrotactile patterns. We find that our vibratory motors conservatively require at least 6 cm of separation to resolve two independent tactile patterns (>80% correct), regardless of stimulus type (e.g., spatiotemporal "sweeps" versus single vibratory pulses). Six centimeters is a greater distance than the inter-motor distance used in Experiment 1 (2.5 cm), which explains the poor identification performance of spatially encoded patterns. Hence, when using an array of vibrational motors, spatiotemporal sweeps can overcome the limitations of vibrotactile two-tacton resolution. The results provide the first steps toward obtaining a realistic estimate of the skin's achievable throughput, illustrating the best ways to encode data to the skin (using as many dimensions as possible) and how far such interfaces would need to be separated if using multiple arrays in parallel.
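Note: the "information transfer estimate" mentioned in the abstract is conventionally computed from the stimulus-response confusion matrix of an identification task. A minimal sketch, assuming the standard maximum-likelihood estimator common in the tactile-display literature (the paper's exact procedure may differ):

\[
\widehat{IT} = \sum_{i=1}^{S} \sum_{j=1}^{S} \frac{n_{ij}}{n} \log_2 \frac{n_{ij}\, n}{n_{i\cdot}\, n_{\cdot j}}
\]

where \(n_{ij}\) is the number of trials on which stimulus \(i\) elicited response \(j\), \(n_{i\cdot}\) and \(n_{\cdot j}\) are the row and column sums, \(n\) is the total trial count, and terms with \(n_{ij}=0\) contribute zero. The quantity \(2^{\widehat{IT}}\) gives the equivalent number of perfectly identifiable patterns.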

DOI: 10.1007/s00221-015-4346-1
PubMed: 26080756

Links to Exploration step

pubmed:26080756

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Using space and time to encode vibrotactile information: toward an estimate of the skin's achievable throughput.</title>
<author>
<name sortKey="Novich, Scott D" sort="Novich, Scott D" uniqKey="Novich S" first="Scott D" last="Novich">Scott D. Novich</name>
<affiliation>
<nlm:affiliation>Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA.</nlm:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Eagleman, David M" sort="Eagleman, David M" uniqKey="Eagleman D" first="David M" last="Eagleman">David M. Eagleman</name>
<affiliation>
<nlm:affiliation>Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA. david@eaglemanlab.net.</nlm:affiliation>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PubMed</idno>
<date when="2015">2015</date>
<idno type="RBID">pubmed:26080756</idno>
<idno type="pmid">26080756</idno>
<idno type="doi">10.1007/s00221-015-4346-1</idno>
<idno type="wicri:Area/PubMed/Corpus">000326</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">Using space and time to encode vibrotactile information: toward an estimate of the skin's achievable throughput.</title>
<author>
<name sortKey="Novich, Scott D" sort="Novich, Scott D" uniqKey="Novich S" first="Scott D" last="Novich">Scott D. Novich</name>
<affiliation>
<nlm:affiliation>Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA.</nlm:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Eagleman, David M" sort="Eagleman, David M" uniqKey="Eagleman D" first="David M" last="Eagleman">David M. Eagleman</name>
<affiliation>
<nlm:affiliation>Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA. david@eaglemanlab.net.</nlm:affiliation>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Experimental brain research</title>
<idno type="eISSN">1432-1106</idno>
<imprint>
<date when="2015" type="published">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Touch receptors in the skin can relay various forms of abstract information, such as words (Braille), haptic feedback (cell phones, game controllers, feedback for prosthetic control), and basic visual information such as edges and shape (sensory substitution devices). The skin can support such applications with ease: They are all low bandwidth and do not require a fine temporal acuity. But what of high-throughput applications? We use sound-to-touch conversion as a motivating example, though others abound (e.g., vision, stock market data). In the past, vibrotactile hearing aids have demonstrated improvement in speech perceptions in the deaf. However, a sound-to-touch sensory substitution device that works with high efficacy and without the aid of lipreading has yet to be developed. Is this because skin simply does not have the capacity to effectively relay high-throughput streams such as sound? Or is this because the spatial and temporal properties of skin have not been leveraged to full advantage? Here, we begin to address these questions with two experiments. First, we seek to determine the best method of relaying information through the skin using an identification task on the lower back. We find that vibrotactile patterns encoding information in both space and time yield the best overall information transfer estimate. Patterns encoded in space and time or "intensity" (the coupled coding of vibration frequency and force) both far exceed performance of only spatially encoded patterns. Next, we determine the vibrotactile two-tacton resolution on the lower back-the distance necessary for resolving two vibrotactile patterns. We find that our vibratory motors conservatively require at least 6 cm of separation to resolve two independent tactile patterns (>80 % correct), regardless of stimulus type (e.g., spatiotemporal "sweeps" versus single vibratory pulses). Six centimeter is a greater distance than the inter-motor distances used in Experiment 1 (2.5 cm), which explains the poor identification performance of spatially encoded patterns. Hence, when using an array of vibrational motors, spatiotemporal sweeps can overcome the limitations of vibrotactile two-tacton resolution. The results provide the first steps toward obtaining a realistic estimate of the skin's achievable throughput, illustrating the best ways to encode data to the skin (using as many dimensions as possible) and how far such interfaces would need to be separated if using multiple arrays in parallel.</div>
</front>
</TEI>
<pubmed>
<MedlineCitation Owner="NLM" Status="In-Process">
<PMID Version="1">26080756</PMID>
<DateCreated>
<Year>2015</Year>
<Month>09</Month>
<Day>19</Day>
</DateCreated>
<Article PubModel="Print-Electronic">
<Journal>
<ISSN IssnType="Electronic">1432-1106</ISSN>
<JournalIssue CitedMedium="Internet">
<Volume>233</Volume>
<Issue>10</Issue>
<PubDate>
<Year>2015</Year>
<Month>Oct</Month>
</PubDate>
</JournalIssue>
<Title>Experimental brain research</Title>
<ISOAbbreviation>Exp Brain Res</ISOAbbreviation>
</Journal>
<ArticleTitle>Using space and time to encode vibrotactile information: toward an estimate of the skin's achievable throughput.</ArticleTitle>
<Pagination>
<MedlinePgn>2777-88</MedlinePgn>
</Pagination>
<ELocationID EIdType="doi" ValidYN="Y">10.1007/s00221-015-4346-1</ELocationID>
<Abstract>
<AbstractText>Touch receptors in the skin can relay various forms of abstract information, such as words (Braille), haptic feedback (cell phones, game controllers, feedback for prosthetic control), and basic visual information such as edges and shape (sensory substitution devices). The skin can support such applications with ease: they are all low bandwidth and do not require fine temporal acuity. But what of high-throughput applications? We use sound-to-touch conversion as a motivating example, though others abound (e.g., vision, stock market data). In the past, vibrotactile hearing aids have demonstrated improvement in speech perception in the deaf. However, a sound-to-touch sensory substitution device that works with high efficacy and without the aid of lipreading has yet to be developed. Is this because skin simply does not have the capacity to effectively relay high-throughput streams such as sound? Or is this because the spatial and temporal properties of skin have not been leveraged to full advantage? Here, we begin to address these questions with two experiments. First, we seek to determine the best method of relaying information through the skin using an identification task on the lower back. We find that vibrotactile patterns encoding information in both space and time yield the best overall information transfer estimate. Patterns encoded in space and time or in "intensity" (the coupled coding of vibration frequency and force) both far exceed the performance of purely spatially encoded patterns. Next, we determine the vibrotactile two-tacton resolution on the lower back: the distance necessary for resolving two vibrotactile patterns. We find that our vibratory motors conservatively require at least 6 cm of separation to resolve two independent tactile patterns (>80% correct), regardless of stimulus type (e.g., spatiotemporal "sweeps" versus single vibratory pulses). Six centimeters is a greater distance than the inter-motor distance used in Experiment 1 (2.5 cm), which explains the poor identification performance of spatially encoded patterns. Hence, when using an array of vibrational motors, spatiotemporal sweeps can overcome the limitations of vibrotactile two-tacton resolution. The results provide the first steps toward obtaining a realistic estimate of the skin's achievable throughput, illustrating the best ways to encode data to the skin (using as many dimensions as possible) and how far such interfaces would need to be separated if using multiple arrays in parallel.</AbstractText>
</Abstract>
<AuthorList CompleteYN="Y">
<Author ValidYN="Y">
<LastName>Novich</LastName>
<ForeName>Scott D</ForeName>
<Initials>SD</Initials>
<AffiliationInfo>
<Affiliation>Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA.</Affiliation>
</AffiliationInfo>
<AffiliationInfo>
<Affiliation>Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA.</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y">
<LastName>Eagleman</LastName>
<ForeName>David M</ForeName>
<Initials>DM</Initials>
<AffiliationInfo>
<Affiliation>Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA. david@eaglemanlab.net.</Affiliation>
</AffiliationInfo>
<AffiliationInfo>
<Affiliation>Department of Psychiatry, Baylor College of Medicine, Houston, TX, USA. david@eaglemanlab.net.</Affiliation>
</AffiliationInfo>
<AffiliationInfo>
<Affiliation>Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA. david@eaglemanlab.net.</Affiliation>
</AffiliationInfo>
</Author>
</AuthorList>
<Language>eng</Language>
<PublicationTypeList>
<PublicationType UI="D016428">Journal Article</PublicationType>
<PublicationType UI="D013485">Research Support, Non-U.S. Gov't</PublicationType>
</PublicationTypeList>
<ArticleDate DateType="Electronic">
<Year>2015</Year>
<Month>06</Month>
<Day>17</Day>
</ArticleDate>
</Article>
<MedlineJournalInfo>
<Country>Germany</Country>
<MedlineTA>Exp Brain Res</MedlineTA>
<NlmUniqueID>0043312</NlmUniqueID>
<ISSNLinking>0014-4819</ISSNLinking>
</MedlineJournalInfo>
<CitationSubset>IM</CitationSubset>
<KeywordList Owner="NOTNLM">
<Keyword MajorTopicYN="N">Information transfer</Keyword>
<Keyword MajorTopicYN="N">Sensory substitution</Keyword>
<Keyword MajorTopicYN="N">Skin</Keyword>
<Keyword MajorTopicYN="N">Sound-to-touch</Keyword>
<Keyword MajorTopicYN="N">Vibrotactile</Keyword>
</KeywordList>
</MedlineCitation>
<PubmedData>
<History>
<PubMedPubDate PubStatus="received">
<Year>2014</Year>
<Month>8</Month>
<Day>2</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="accepted">
<Year>2015</Year>
<Month>5</Month>
<Day>29</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="aheadofprint">
<Year>2015</Year>
<Month>6</Month>
<Day>17</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="entrez">
<Year>2015</Year>
<Month>6</Month>
<Day>18</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="pubmed">
<Year>2015</Year>
<Month>6</Month>
<Day>18</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline">
<Year>2015</Year>
<Month>6</Month>
<Day>18</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>ppublish</PublicationStatus>
<ArticleIdList>
<ArticleId IdType="pubmed">26080756</ArticleId>
<ArticleId IdType="doi">10.1007/s00221-015-4346-1</ArticleId>
<ArticleId IdType="pii">10.1007/s00221-015-4346-1</ArticleId>
</ArticleIdList>
</PubmedData>
</pubmed>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PubMed/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000326 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd -nk 000326 | SxmlIndent | more
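Both commands assume the usual Dilib environment variables are already set. EXPLOR_AREA is not defined on this page; a plausible definition, inferred from the EXPLOR_STEP path above (an assumption to verify against your local installation), is:

# Assumed area root: EXPLOR_STEP minus the trailing Data/PubMed/Corpus
EXPLOR_AREA=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1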

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    PubMed
   |étape=   Corpus
   |type=    RBID
   |clé=     pubmed:26080756
   |texte=   Using space and time to encode vibrotactile information: toward an estimate of the skin's achievable throughput.
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/RBID.i   -Sk "pubmed:26080756" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 
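
To capture the result in a file instead, the same pipeline can be redirected, assuming NlmPubMed2Wicri writes the generated wiki markup to standard output (an assumption; the tool may also write files directly):

# Assumes NlmPubMed2Wicri emits wiki markup on stdout;
# 000326.wiki is a hypothetical output file name.
HfdIndexSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/RBID.i   -Sk "pubmed:26080756" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 > 000326.wiki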

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024