Exploration server for haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Feeling Voices

Internal identifier: 002244 (Pmc/Curation); previous: 002243; next: 002245

Feeling Voices

Authors: Paolo Ammirante [Canada]; Frank A. Russo [Canada]; Arla Good [Canada]; Deborah I. Fels [Canada]

Source:

RBID: PMC:3547010

Abstract

Two experiments investigated deaf individuals' ability to discriminate between same-sex talkers based on vibrotactile stimulation alone. Nineteen participants made same/different judgments on pairs of utterances presented to the lower back through voice coils embedded in a conforming chair. Discrimination of stimuli matched for F0, duration, and perceived magnitude was successful for pairs of spoken sentences in Experiment 1 (median percent correct = 83%) and pairs of vowel utterances in Experiment 2 (median percent correct = 75%). Greater difference in spectral tilt between “different” pairs strongly predicted their discriminability in both experiments. The current findings support the hypothesis that discrimination of complex vibrotactile stimuli involves the cortical integration of spectral information filtered through frequency-tuned skin receptors.


URL:
DOI: 10.1371/journal.pone.0053585
PubMed: 23341954
PubMed Central: 3547010

Links to previous steps (curation, corpus, ...)


Links to Exploration step

PMC:3547010

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Feeling Voices</title>
<author>
<name sortKey="Ammirante, Paolo" sort="Ammirante, Paolo" uniqKey="Ammirante P" first="Paolo" last="Ammirante">Paolo Ammirante</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Psychology, Ryerson University, Toronto, Canada</addr-line>
</nlm:aff>
<country xml:lang="fr">Canada</country>
<wicri:regionArea>Department of Psychology, Ryerson University, Toronto</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Russo, Frank A" sort="Russo, Frank A" uniqKey="Russo F" first="Frank A." last="Russo">Frank A. Russo</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Psychology, Ryerson University, Toronto, Canada</addr-line>
</nlm:aff>
<country xml:lang="fr">Canada</country>
<wicri:regionArea>Department of Psychology, Ryerson University, Toronto</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Good, Arla" sort="Good, Arla" uniqKey="Good A" first="Arla" last="Good">Arla Good</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Psychology, Ryerson University, Toronto, Canada</addr-line>
</nlm:aff>
<country xml:lang="fr">Canada</country>
<wicri:regionArea>Department of Psychology, Ryerson University, Toronto</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Fels, Deborah I" sort="Fels, Deborah I" uniqKey="Fels D" first="Deborah I." last="Fels">Deborah I. Fels</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Centre for Learning Technologies, Ryerson University, Toronto, Canada</addr-line>
</nlm:aff>
<country xml:lang="fr">Canada</country>
<wicri:regionArea>Centre for Learning Technologies, Ryerson University, Toronto</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">23341954</idno>
<idno type="pmc">3547010</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3547010</idno>
<idno type="RBID">PMC:3547010</idno>
<idno type="doi">10.1371/journal.pone.0053585</idno>
<date when="2013">2013</date>
<idno type="wicri:Area/Pmc/Corpus">002244</idno>
<idno type="wicri:Area/Pmc/Curation">002244</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Feeling Voices</title>
<author>
<name sortKey="Ammirante, Paolo" sort="Ammirante, Paolo" uniqKey="Ammirante P" first="Paolo" last="Ammirante">Paolo Ammirante</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Psychology, Ryerson University, Toronto, Canada</addr-line>
</nlm:aff>
<country xml:lang="fr">Canada</country>
<wicri:regionArea>Department of Psychology, Ryerson University, Toronto</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Russo, Frank A" sort="Russo, Frank A" uniqKey="Russo F" first="Frank A." last="Russo">Frank A. Russo</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Psychology, Ryerson University, Toronto, Canada</addr-line>
</nlm:aff>
<country xml:lang="fr">Canada</country>
<wicri:regionArea>Department of Psychology, Ryerson University, Toronto</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Good, Arla" sort="Good, Arla" uniqKey="Good A" first="Arla" last="Good">Arla Good</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Psychology, Ryerson University, Toronto, Canada</addr-line>
</nlm:aff>
<country xml:lang="fr">Canada</country>
<wicri:regionArea>Department of Psychology, Ryerson University, Toronto</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Fels, Deborah I" sort="Fels, Deborah I" uniqKey="Fels D" first="Deborah I." last="Fels">Deborah I. Fels</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Centre for Learning Technologies, Ryerson University, Toronto, Canada</addr-line>
</nlm:aff>
<country xml:lang="fr">Canada</country>
<wicri:regionArea>Centre for Learning Technologies, Ryerson University, Toronto</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2013">2013</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Two experiments investigated deaf individuals' ability to discriminate between same-sex talkers based on vibrotactile stimulation alone. Nineteen participants made same/different judgments on pairs of utterances presented to the lower back through voice coils embedded in a conforming chair. Discrimination of stimuli matched for F0, duration, and perceived magnitude was successful for pairs of spoken sentences in
<xref ref-type="sec" rid="s2">Experiment 1</xref>
(median percent correct = 83%) and pairs of vowel utterances in
<xref ref-type="sec" rid="s3">Experiment 2</xref>
(median percent correct = 75%). Greater difference in spectral tilt between “different” pairs strongly predicted their discriminability in both experiments. The current findings support the hypothesis that discrimination of complex vibrotactile stimuli involves the cortical integration of spectral information filtered through frequency-tuned skin receptors.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Gault, Rh" uniqKey="Gault R">RH Gault</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Reed, Cm" uniqKey="Reed C">CM Reed</name>
</author>
<author>
<name sortKey="Rabinowitz, Wm" uniqKey="Rabinowitz W">WM Rabinowitz</name>
</author>
<author>
<name sortKey="Durlach, Ni" uniqKey="Durlach N">NI Durlach</name>
</author>
<author>
<name sortKey="Braida, Ld" uniqKey="Braida L">LD Braida</name>
</author>
<author>
<name sortKey="Conway Fithian, S" uniqKey="Conway Fithian S">S Conway-Fithian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brooks, Pl" uniqKey="Brooks P">PL Brooks</name>
</author>
<author>
<name sortKey="Frost, Bj" uniqKey="Frost B">BJ Frost</name>
</author>
<author>
<name sortKey="Mason, Jl" uniqKey="Mason J">JL Mason</name>
</author>
<author>
<name sortKey="Gibson, Dm" uniqKey="Gibson D">DM Gibson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brooks, Pl" uniqKey="Brooks P">PL Brooks</name>
</author>
<author>
<name sortKey="Frost, Bj" uniqKey="Frost B">BJ Frost</name>
</author>
<author>
<name sortKey="Mason, Jl" uniqKey="Mason J">JL Mason</name>
</author>
<author>
<name sortKey="Gibson, Dm" uniqKey="Gibson D">DM Gibson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brooks, Pl" uniqKey="Brooks P">PL Brooks</name>
</author>
<author>
<name sortKey="Frost, Bj" uniqKey="Frost B">BJ Frost</name>
</author>
<author>
<name sortKey="Mason, Jl" uniqKey="Mason J">JL Mason</name>
</author>
<author>
<name sortKey="Chung, K" uniqKey="Chung K">K Chung</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Russo, Fa" uniqKey="Russo F">FA Russo</name>
</author>
<author>
<name sortKey="Ammirante, P" uniqKey="Ammirante P">P Ammirante</name>
</author>
<author>
<name sortKey="Fels, Di" uniqKey="Fels D">DI Fels</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Karam, M" uniqKey="Karam M">M Karam</name>
</author>
<author>
<name sortKey="Russo, Fa" uniqKey="Russo F">FA Russo</name>
</author>
<author>
<name sortKey="Branje, C" uniqKey="Branje C">C Branje</name>
</author>
<author>
<name sortKey="Price, E" uniqKey="Price E">E Price</name>
</author>
<author>
<name sortKey="Fels, Di" uniqKey="Fels D">DI Fels</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Karam, M" uniqKey="Karam M">M Karam</name>
</author>
<author>
<name sortKey="Russo, Fa" uniqKey="Russo F">FA Russo</name>
</author>
<author>
<name sortKey="Fels, Di" uniqKey="Fels D">DI Fels</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hansen, Hm" uniqKey="Hansen H">HM Hansen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hansen, Hm" uniqKey="Hansen H">HM Hansen</name>
</author>
<author>
<name sortKey="Chuang, Es" uniqKey="Chuang E">ES Chuang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gordon, M" uniqKey="Gordon M">M Gordon</name>
</author>
<author>
<name sortKey="Ladefoged, P" uniqKey="Ladefoged P">P Ladefoged</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dicanio, Ct" uniqKey="Dicanio C">CT DiCanio</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bolanowski, Sj" uniqKey="Bolanowski S">SJ Bolanowski</name>
</author>
<author>
<name sortKey="Gescheider, Ga" uniqKey="Gescheider G">GA Gescheider</name>
</author>
<author>
<name sortKey="Verrillo, Rt" uniqKey="Verrillo R">RT Verrillo</name>
</author>
<author>
<name sortKey="Checkosky, Cm" uniqKey="Checkosky C">CM Checkosky</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Verrillo, R" uniqKey="Verrillo R">R Verrillo</name>
</author>
<author>
<name sortKey="Gescheider, Ga" uniqKey="Gescheider G">GA Gescheider</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Marks, Le" uniqKey="Marks L">LE Marks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kramer, Se" uniqKey="Kramer S">SE Kramer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vy, Qv" uniqKey="Vy Q">QV Vy</name>
</author>
<author>
<name sortKey="Fels, Di" uniqKey="Fels D">DI Fels</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="iso-abbrev">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">23341954</article-id>
<article-id pub-id-type="pmc">3547010</article-id>
<article-id pub-id-type="publisher-id">PONE-D-12-28439</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0053585</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Biology</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Psychoacoustics</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Sensory Systems</subject>
<subj-group>
<subject>Auditory System</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Medicine</subject>
<subj-group>
<subject>Mental Health</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Experimental Psychology</subject>
<subject>Psychophysics</subject>
<subject>Sensory Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Social and Behavioral Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Experimental Psychology</subject>
<subject>Psychophysics</subject>
<subject>Sensory Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Feeling Voices</article-title>
<alt-title alt-title-type="running-head">Feeling Voices</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Ammirante</surname>
<given-names>Paolo</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Russo</surname>
<given-names>Frank A.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Good</surname>
<given-names>Arla</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Fels</surname>
<given-names>Deborah I.</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<label>1</label>
<addr-line>Department of Psychology, Ryerson University, Toronto, Canada</addr-line>
</aff>
<aff id="aff2">
<label>2</label>
<addr-line>Centre for Learning Technologies, Ryerson University, Toronto, Canada</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Ptito</surname>
<given-names>Maurice</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">
<addr-line>University of Montreal, Canada</addr-line>
</aff>
<author-notes>
<corresp id="cor1">* E-mail:
<email>russo@ryerson.ca</email>
</corresp>
<fn fn-type="conflict">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="con">
<p>Conceived and designed the experiments: FAR AG. Performed the experiments: AG. Analyzed the data: PA FAR. Contributed reagents/materials/analysis tools: DIF FAR. Wrote the paper: PA FAR.</p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<year>2013</year>
</pub-date>
<pub-date pub-type="epub">
<day>16</day>
<month>1</month>
<year>2013</year>
</pub-date>
<volume>8</volume>
<issue>1</issue>
<elocation-id>e53585</elocation-id>
<history>
<date date-type="received">
<day>17</day>
<month>9</month>
<year>2012</year>
</date>
<date date-type="accepted">
<day>3</day>
<month>12</month>
<year>2012</year>
</date>
</history>
<permissions>
<copyright-year>2013</copyright-year>
<copyright-holder>Ammirante et al</copyright-holder>
<license>
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</license-p>
</license>
</permissions>
<abstract>
<p>Two experiments investigated deaf individuals' ability to discriminate between same-sex talkers based on vibrotactile stimulation alone. Nineteen participants made same/different judgments on pairs of utterances presented to the lower back through voice coils embedded in a conforming chair. Discrimination of stimuli matched for F0, duration, and perceived magnitude was successful for pairs of spoken sentences in
<xref ref-type="sec" rid="s2">Experiment 1</xref>
(median percent correct = 83%) and pairs of vowel utterances in
<xref ref-type="sec" rid="s3">Experiment 2</xref>
(median percent correct = 75%). Greater difference in spectral tilt between “different” pairs strongly predicted their discriminability in both experiments. The current findings support the hypothesis that discrimination of complex vibrotactile stimuli involves the cortical integration of spectral information filtered through frequency-tuned skin receptors.</p>
</abstract>
<funding-group>
<funding-statement>This research was supported by a Discovery grant awarded to the second author from the Natural Sciences and Engineering Research Council of Canada (Reference #: 341583-07), and Graphics, Animation and New Media (GRAND) Canada, a federally funded Network of Centres of Excellence (Reference #: G-NI-10-Ry-01). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</funding-statement>
</funding-group>
<counts>
<page-count count="5"></page-count>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>The investigation of haptic speech perception has a long history. In 1924, for example, Gault
<xref ref-type="bibr" rid="pone.0053585-Gault1">[1]</xref>
trained an artificially-deafened subject to identify thirty-four spoken words presented to his palm through a tube. The Tadoma method, developed around the same time to facilitate speech perception in deafblind individuals, involves placing thumb and fingers on a talker's lips and jawline, respectively
<xref ref-type="bibr" rid="pone.0053585-Reed1">[2]</xref>
. In the 1980s, Brooks and colleagues investigated speech perception using tactile vocoders that filter an acoustic waveform and transduce it into vibratory patterns that are felt on the skin
<xref ref-type="bibr" rid="pone.0053585-Brooks1">[3]</xref>
<xref ref-type="bibr" rid="pone.0053585-Brooks3">[5]</xref>
. Using this apparatus, they trained an individual to acquire a 250-word vocabulary
<xref ref-type="bibr" rid="pone.0053585-Brooks3">[5]</xref>
. Correct identification of the number of syllables and stress patterns of incorrectly identified words suggests that the haptic sense was used to track the amplitude envelope of speech as it unfolds over time. Here we evaluate vibrotactile sensitivity to spectral information contained in speech.</p>
<p>Our previous study on vibrotactile discrimination of musical timbre
<xref ref-type="bibr" rid="pone.0053585-Russo1">[6]</xref>
is, to our knowledge, the only published study to directly investigate vibrotactile sensitivity to spectral information. Tones produced by a musical instrument and voiced sounds produced by the vocal cords are complex periodic waveforms. Component frequencies of such waveforms include the
<italic>fundamental frequency (F0)</italic>
, which is usually associated with the perceived pitch of a musical tone, and
<italic>harmonics</italic>
at integer multiples of F0. The resonance properties of a musical instrument or the vocal tract give rise to frequency bands of higher amplitude called
<italic>formants</italic>
that boost those harmonics falling within them. The timbres of different musical instruments or different voices producing the same sound (e.g., a tone at middle C [262 Hz] or a vowel at 220 Hz) are differentiated in part by the relative amplitudes of F0 and its harmonics, i.e., the frequency spectrum. We found that artificially-deafened individuals as well as a sample of individuals from the deaf and hard-of-hearing (DHH) community readily discriminated by touch alone piano, cello, and trombone tones matched for F0, duration, and perceived magnitude, and synthesized tones that differed only in spectral content.</p>
<p>In the current study, DHH individuals were recruited to investigate vibrotactile discrimination of identical sentence (
<xref ref-type="sec" rid="s2">Experiment 1</xref>
) and vowel utterances (
<xref ref-type="sec" rid="s3">Experiment 2</xref>
) from same-sex talkers matched for F0, duration, and perceived magnitude. On the one hand, based on our previous findings, we expect that spectral differences between same-sex talkers, i.e., inter-individual differences in formant frequencies and/or their relative amplitudes, should lead to vibrotactile discrimination.</p>
<p>On the other hand, there are at least two reasons to anticipate difficulties with the task. First, whereas the timbres of musical instruments vary greatly according to the unique resonance properties of the materials with which they are constructed, inter-individual differences in vocal timbre, being governed primarily by modest differences in vocal tract morphology, are less pronounced
<xref ref-type="bibr" rid="pone.0053585-Chasin1">[7]</xref>
. Second, whereas in our previous study
<xref ref-type="bibr" rid="pone.0053585-Russo1">[6]</xref>
, variation in amplitude and spectrum over the course of each 2 s stimulus was either highly constrained (musical instrument samples) or absent (synthesized tones), the current 2 s speech stimuli are characterized by numerous rapid changes. These include transient changes in amplitude, such as at syllable onsets in sentences, and spectro-temporal change, such as formant transitions occurring during the articulation of vowel sounds. These dynamic changes might obscure spectrum and lead to poor discrimination.</p>
</sec>
<sec id="s2">
<title>Experiment 1</title>
<sec id="s2a">
<title>Methods</title>
<sec id="s2a1">
<title>Ethics Statement</title>
<p>The study was approved by the Ryerson Ethics Board (REB) at Ryerson University and was conducted according to their human subject guidelines. Participation in the study was agreed to in writing by signing an REB-approved consent form.</p>
</sec>
<sec id="s2a2">
<title>Participants</title>
<p>Nineteen individuals (9 females) aged 23–64 (M = 42.1; SD = 12.9) were recruited from Toronto's deaf community. All had participated in a previous study
<xref ref-type="bibr" rid="pone.0053585-Russo1">[6]</xref>
. Participants were compensated $20.</p>
</sec>
<sec id="s2a3">
<title>Apparatus</title>
<p>Complex vibrotactile waveforms were driven by an acoustic signal and presented to the back via a pair of voice coils embedded in a conforming chair
<xref ref-type="bibr" rid="pone.0053585-Karam1">[8]</xref>
,
<xref ref-type="bibr" rid="pone.0053585-Karam2">[9]</xref>
. The voice coils were 1 inch in diameter and made contact with the left and right sides of the lumbar region of the back.</p>
<p>Eleven participants reported some hearing at high intensity and five of these participants wore hearing aids. To eliminate any possibility of auditory stimulation, all participants wore sound attenuating earmuffs with a noise reduction rating of 26 dB, and those with hearing aids were asked to turn their devices off for the duration of the experiment.</p>
</sec>
<sec id="s2a4">
<title>Stimuli & Procedure</title>
<p>Stimuli were six recorded utterances of the sentence “Can you tell who is singing this /ei/?”: one utterance from each of three different female talkers and one from each of three different male talkers. Recordings were made at a sampling rate of 44.1 kHz using a Rode NTK microphone. Talkers attempted to match their utterances in duration and F0 to a 2 s standard tone presented at F0 of 220 Hz for female talkers and 110 Hz for male talkers. Since F0 continuously varies in speech (and is thus unlikely to act as a stable perceptual cue to participants), the standard tone served to broadly center F0 and discourage gross deviations in range. All utterances were shorter than the target duration (M = 1.72 s; SD = .09); the maximum difference in duration between utterances was 12%. The maximum difference in F0 semi-interquartile range between utterances did not exceed 1 semitone. All stimuli were also equated for perceived magnitude of vibration. Average ratings were taken from three normal hearing judges who iteratively adjusted the magnitude of a target stimulus until it was perceived to match a standard. Judges were artificially deafened to the sound output of the voice coils by white noise presented over headphones to mask air-conducted sound and a vibrotactile stimulus applied to the mastoid bone to mask bone-conducted sound
<xref ref-type="bibr" rid="pone.0053585-Russo1">[6]</xref>
.</p>
<p>Participants made same/different judgments for stimulus pairs presented with an inter-stimulus interval of 1 s. A practice block of 5 trials with feedback was followed by 2 experimental blocks of trials without feedback. Only the latter were entered into analysis. Each experimental block presented pairs from either the three female or three male talkers, and the order of presentation of the blocks was randomized. Within blocks, all possible talker pairs were presented once (3 talkers squared = 9 trials). Thus, one-third of the pairs were same and two-thirds were different. A professional sign language interpreter delivered instructions to participants using American Sign Language.</p>
</sec>
</sec>
<sec id="s2b">
<title>Results</title>
<p>Separate two-tailed binomial tests were performed on each participant's responses across all stimuli, and revealed percent correct to be significantly above chance (
<italic>p</italic>
<.05) in 14 of 19 participants, Mdn = 83.33%. (d' values obtained from a signal detection analysis
<xref ref-type="bibr" rid="pone.0053585-Macmillan1">[10]</xref>
indicated that responses in both experiments were unbiased:
<xref ref-type="sec" rid="s2">Experiment 1</xref>
[Mdn = 2.54];
<xref ref-type="sec" rid="s3">Experiment 2</xref>
[Mdn = 2.35].)</p>
<p>As shown in
<xref ref-type="fig" rid="pone-0053585-g001">Figure 1</xref>
, percent correct was higher for female talkers than male talkers, but Friedman's ANOVA revealed no significant effect of sex,
<italic>F</italic>
(1) = 2.57,
<italic>p</italic>
 = .11.</p>
<fig id="pone-0053585-g001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0053585.g001</object-id>
<label>Figure 1</label>
<caption>
<title>Boxplots of percent correct by participant across two conditions.</title>
<p>Median is shown as a bolded line. Lower and upper edges of the boxes indicate lower and upper quartiles, respectively, and whiskers indicate sample minima and maxima.</p>
</caption>
<graphic xlink:href="pone.0053585.g001"></graphic>
</fig>
<p>
<xref ref-type="table" rid="pone-0053585-t001">Table 1</xref>
shows percent correct by stimulus. As there were no effects of order of presentation for “different” pairs, percent correct was collapsed across complementary pairs (e.g., “Male-B/Male-C” and “Male-C/Male-B”). Binomial tests revealed percent correct to be significantly above chance (
<italic>p</italic>
<.05) for all stimulus pairs except Female-A/Female-B.</p>
<table-wrap id="pone-0053585-t001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0053585.t001</object-id>
<label>Table 1</label>
<caption>
<title>Percent correct by stimulus based on 19 responses for “same” sentences, and on 38 responses collapsed across complementary “different” pairs.</title>
</caption>
<alternatives>
<graphic id="pone-0053585-t001-1" xlink:href="pone.0053585.t001"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Female-A</td>
<td align="left" rowspan="1" colspan="1">Female-B</td>
<td align="left" rowspan="1" colspan="1">Female-C</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Male-A</td>
<td align="left" rowspan="1" colspan="1">Male-B</td>
<td align="left" rowspan="1" colspan="1">Male-C</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">Female-A</td>
<td align="left" rowspan="1" colspan="1">95
<xref ref-type="table-fn" rid="nt101">*</xref>
</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Male-A</td>
<td align="left" rowspan="1" colspan="1">100
<xref ref-type="table-fn" rid="nt101">*</xref>
</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Female-B</td>
<td align="left" rowspan="1" colspan="1">45</td>
<td align="left" rowspan="1" colspan="1">95
<xref ref-type="table-fn" rid="nt101">*</xref>
</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Male-B</td>
<td align="left" rowspan="1" colspan="1">66
<xref ref-type="table-fn" rid="nt101">*</xref>
</td>
<td align="left" rowspan="1" colspan="1">89
<xref ref-type="table-fn" rid="nt101">*</xref>
</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Female-C</td>
<td align="left" rowspan="1" colspan="1">97
<xref ref-type="table-fn" rid="nt101">*</xref>
</td>
<td align="left" rowspan="1" colspan="1">100
<xref ref-type="table-fn" rid="nt101">*</xref>
</td>
<td align="left" rowspan="1" colspan="1">95
<xref ref-type="table-fn" rid="nt101">*</xref>
</td>
<td align="left" rowspan="1" colspan="1">Male-C</td>
<td align="left" rowspan="1" colspan="1">74
<xref ref-type="table-fn" rid="nt101">*</xref>
</td>
<td align="left" rowspan="1" colspan="1">66
<xref ref-type="table-fn" rid="nt101">*</xref>
</td>
<td align="left" rowspan="1" colspan="1">74
<xref ref-type="table-fn" rid="nt101">*</xref>
</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="nt101">
<label>*</label>
<p>
<italic>p</italic>
<.05.</p>
</fn>
</table-wrap-foot>
</table-wrap>
</sec>
</sec>
<sec id="s3">
<title>Experiment 2</title>
<sec id="s3a">
<title>Methods</title>
<sec id="s3a1">
<title>Participants</title>
<p>The same 19 individuals completed
<xref ref-type="sec" rid="s3">Experiment 2</xref>
on the same day as
<xref ref-type="sec" rid="s2">Experiment 1</xref>
.</p>
</sec>
<sec id="s3a2">
<title>Apparatus</title>
<p>The apparatus was identical to
<xref ref-type="sec" rid="s2">Experiment 1</xref>
.</p>
</sec>
<sec id="s3a3">
<title>Stimuli & Procedure</title>
<p>Stimuli were six recorded utterances of the diphthong /ei/ made by the same three female and three male talkers as in
<xref ref-type="sec" rid="s2">Experiment 1</xref>
. Talkers attempted to match their utterances in F0 to a standard tone presented at either low or high pitch. For females, low pitch was 220 Hz and high pitch was 440 Hz; for males, low pitch was 120 Hz and high pitch was 220 Hz. In all utterances, F0 minimally deviated from these targets (M = 22 cents; SD = 17.2). The central 2 s portion of each vowel utterance was extracted using audio editing software. All stimuli were equated for perceived magnitude using the same protocol as
<xref ref-type="sec" rid="s2">Experiment 1</xref>
.</p>
<p>Procedures used were identical to
<xref ref-type="sec" rid="s2">Experiment 1</xref>
, but there were 4 blocks of experimental trials: Female/low pitch, Female/high pitch, Male/low pitch, and Male/high pitch. The order of presentation of both trials within blocks and the blocks themselves was randomized. Within blocks, all possible talker pairs were presented once. Each block presented pairs from either the three female or three male talkers, for a total of 36 trials.</p>
</sec>
</sec>
<sec id="s3b">
<title>Results</title>
<p>Separate two-tailed binomial tests on each participant's responses across conditions showed percent correct to be significantly above chance (
<italic>p</italic>
<.05) in 17 of 19 participants, Mdn = 75%. Neither of the two participants scoring at chance in
<xref ref-type="sec" rid="s3">Experiment 2</xref>
scored at chance in
<xref ref-type="sec" rid="s2">Experiment 1</xref>
.</p>
<p>As shown in
<xref ref-type="fig" rid="pone-0053585-g002">Figure 2</xref>
and consistent with a trend observed in
<xref ref-type="sec" rid="s2">Experiment 1</xref>
, percent correct was lower for low-pitched male talkers, but Friedman's ANOVA revealed no significant difference in percent correct between conditions,
<italic>F</italic>
(3) = 1.28,
<italic>p</italic>
 = .74.</p>
<fig id="pone-0053585-g002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0053585.g002</object-id>
<label>Figure 2</label>
<caption>
<title>Boxplots of percent correct by participant across four conditions.</title>
<p>Outliers are shown as circles.</p>
</caption>
<graphic xlink:href="pone.0053585.g002"></graphic>
</fig>
<p>
<xref ref-type="table" rid="pone-0053585-t002">Table 2</xref>
shows percent correct by stimulus after collapsing across complementary “different” pairs and across low- and high-pitched vowels within sex. Binomial tests showed percent correct to be significantly above chance (
<italic>p</italic>
<.05) for 10 of 12 talker pairs. Interestingly, as in
<xref ref-type="sec" rid="s2">Experiment 1</xref>
, percent correct was at chance for Female-A/Female-B. Moreover, percent correct for corresponding talker pairs in Experiments 1 and 2 was significantly correlated,
<italic>r</italic>
(10) = .79,
<italic>p</italic>
<.01. Taken together, these data suggest participants relied on global cues common to spoken sentences and sung vowels rather than local timing differences between talker pairs, such as the timing of syllable onsets, available in sentences but not vowels.</p>
<table-wrap id="pone-0053585-t002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0053585.t002</object-id>
<label>Table 2</label>
<caption>
<title>Percent correct by stimulus based on 38 responses for “same” vowels, and on 76 responses collapsed across complementary “different” pairs.</title>
</caption>
<alternatives>
<graphic id="pone-0053585-t002-2" xlink:href="pone.0053585.t002"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Female-A</td>
<td align="left" rowspan="1" colspan="1">Female-B</td>
<td align="left" rowspan="1" colspan="1">Female-C</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Male-A</td>
<td align="left" rowspan="1" colspan="1">Male-B</td>
<td align="left" rowspan="1" colspan="1">Male-C</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">Female-A</td>
<td align="left" rowspan="1" colspan="1">92
<xref ref-type="table-fn" rid="nt102">*</xref>
</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Male-A</td>
<td align="left" rowspan="1" colspan="1">95
<xref ref-type="table-fn" rid="nt102">*</xref>
</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Female-B</td>
<td align="left" rowspan="1" colspan="1">54</td>
<td align="left" rowspan="1" colspan="1">84
<xref ref-type="table-fn" rid="nt102">*</xref>
</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Male-B</td>
<td align="left" rowspan="1" colspan="1">70
<xref ref-type="table-fn" rid="nt102">*</xref>
</td>
<td align="left" rowspan="1" colspan="1">79
<xref ref-type="table-fn" rid="nt102">*</xref>
</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Female-C</td>
<td align="left" rowspan="1" colspan="1">84
<xref ref-type="table-fn" rid="nt102">*</xref>
</td>
<td align="left" rowspan="1" colspan="1">90
<xref ref-type="table-fn" rid="nt102">*</xref>
</td>
<td align="left" rowspan="1" colspan="1">90
<xref ref-type="table-fn" rid="nt102">*</xref>
</td>
<td align="left" rowspan="1" colspan="1">Male-C</td>
<td align="left" rowspan="1" colspan="1">55</td>
<td align="left" rowspan="1" colspan="1">79
<xref ref-type="table-fn" rid="nt102">*</xref>
</td>
<td align="left" rowspan="1" colspan="1">90
<xref ref-type="table-fn" rid="nt102">*</xref>
</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="nt102">
<label>*</label>
<p>
<italic>p</italic>
<.05.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<sec id="s3b1">
<title>Acoustic Analysis</title>
<p>An acoustic analysis of the stimuli was conducted to test the hypothesis that, for both spoken sentences and sung vowels, larger global differences between “different” talker pairs should result in those pairs being more discriminable. In order to gain more statistical power, data from both orders of presentation were included for each pair. Root-mean-square voltage was first measured to verify that global differences in intensity were not driving discriminability between talkers. This is a measure of overall energy in the signal irrespective of frequency content. Percent correct was not significantly correlated with the absolute difference in root-mean-square voltage between stimuli either for the 12 “different” spoken sentence pairs,
<italic>r</italic>
(10) = .36,
<italic>p</italic>
 = .25, or the 24 “different” vowel pairs,
<italic>r</italic>
(22) = .14,
<italic>p</italic>
 = .53.</p>
<p>Next, a global spectral measure was used to investigate whether such information may have guided discrimination of both spoken sentences and sung vowels.
<italic>Spectral tilt</italic>
refers to the reduction of the high frequency spectrum relative to the low frequency spectrum. As shown in
<xref ref-type="fig" rid="pone-0053585-g003">Figure 3</xref>
, spectral tilt was measured here as H1-A3, or as the difference (in dB) between the amplitude of the first harmonic (H1) and the amplitude of the most prominent harmonic in the third formant (F3 = formant; A3 = harmonic)
<xref ref-type="bibr" rid="pone.0053585-Hansen1">[11]</xref>
,
<xref ref-type="bibr" rid="pone.0053585-Hansen2">[12]</xref>
. The perceptual correlate of H1-A3 is breathiness; a breathy voice with stronger H1 has a large spectral tilt, while a creaky voice with more energy at A3 has a small spectral tilt
<xref ref-type="bibr" rid="pone.0053585-Gordon1">[13]</xref>
.</p>
<fig id="pone-0053585-g003" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0053585.g003</object-id>
<label>Figure 3</label>
<caption>
<title>Spectral slices (50 msec in length, starting from 1 s) of 220 Hz vowel utterances.</title>
<p>F0 and its harmonics are sharp peaks at 220 Hz and integer multiples. Formants (F1, F2, F3) can be seen as shallower peaks each containing multiple harmonics. (left panel) Where spectral tilt (H1-A3) was nearly identical between Female-A and Female-B, responses were at chance (40% correct); (right panel) difference in spectral tilt between Male-B and Male-C was large and elicited 87% correct responses.</p>
</caption>
<graphic xlink:href="pone.0053585.g003"></graphic>
</fig>
<p>Spectral tilt was estimated using the acoustic analysis software Praat
<xref ref-type="bibr" rid="pone.0053585-Boersma1">[14]</xref>
for the 18 recordings used as stimuli from Experiments 1 and 2. For
<xref ref-type="sec" rid="s2">Experiment 1</xref>
, the voiced portions of the sentences were extracted for further analysis (mean duration after extraction = 1198 msec); for
<xref ref-type="sec" rid="s3">Experiment 2</xref>
, the entire 2 s vowel utterance was used. For each recording, after downsampling to 16 kHz, F3 was estimated using linear prediction at regular temporal intervals. Next, the long-term average spectrum was calculated using a 100 Hz bandwidth at each of 12 equally-spaced intervals (i.e., ∼100 msec) for sentences and at each of 20 equally-spaced intervals (i.e., 100 msec) for vowels. For each interval, H1 and A3 were identified in the frequency spectrum as the maximum amplitude peaks within 10% of the frequencies of F0 and F3, respectively, and A3 was subtracted from H1
<xref ref-type="bibr" rid="pone.0053585-DiCanio1">[15]</xref>
. Global estimates of spectral tilt for each stimulus were obtained by averaging these values across intervals. Finally, for each “different” pair, the absolute difference in global spectral tilt between stimuli was obtained.</p>
<p>Percent correct was correlated with the absolute difference in global spectral tilt for “different” pairs. Significant correlations were observed for the sentences in
<xref ref-type="sec" rid="s2">Experiment 1</xref>
,
<italic>r</italic>
(10) = .73,
<italic>p</italic>
<.01, for the vowels in
<xref ref-type="sec" rid="s3">Experiment 2</xref>
,
<italic>r</italic>
(22) = .68,
<italic>p</italic>
<.001, and across both experiments,
<italic>r</italic>
(34) = .67,
<italic>p</italic>
<.0001 (as shown in
<xref ref-type="fig" rid="pone-0053585-g004">Figure 4</xref>
). These findings suggest that participants used global spectral cues available in both spoken sentences and sung vowels to discriminate talker pairs.</p>
<fig id="pone-0053585-g004" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0053585.g004</object-id>
<label>Figure 4</label>
<caption>
<title>Correlation between the absolute difference in global spectral tilt and percent correct.</title>
</caption>
<graphic xlink:href="pone.0053585.g004"></graphic>
</fig>
</sec>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>Two experiments investigated deaf individuals' ability to discriminate between same-sex talkers based on vibrotactile stimulation alone. Nineteen participants made same/different judgments on pairs of utterances presented to the lower back through voice coils embedded in a conforming chair. Discrimination of stimuli matched for F0, duration, and perceived magnitude was successful for pairs of spoken sentences in
<xref ref-type="sec" rid="s2">Experiment 1</xref>
(median percent correct = 83%) and pairs of vowel utterances in
<xref ref-type="sec" rid="s3">Experiment 2</xref>
(median percent correct = 75%). The finding that discrimination was correlated for sentences and vowels suggests that participants were largely insensitive to local transient changes in amplitude, such as syllable onsets available in spoken sentences, and spectro-temporal changes, such as formant transitions occurring in the talkers' articulation of vowel sounds. Moreover, analysis of global spectrum averaged across each utterance showed greater absolute difference in spectral tilt between stimuli in “different” pairs to be a strong predictor of their discriminability for both sentences and vowels. Taken together, these data suggest participants relied on more stable spectral cues available in both sets of stimuli.</p>
<p>How does vibrotactile sensitivity to spectral information arise? In audition, the tonotopic organization of frequency-tuned cells filters complex sounds into their component frequencies in critical bands of about one-third of an octave. Timbral discrimination is thought to follow from the cortical integration of the relative amplitudes of these signals in a process called profile analysis
<xref ref-type="bibr" rid="pone.0053585-Green1">[16]</xref>
.</p>
<p>Given that the same mechanical energy gives rise to auditory and vibrotactile sensations of sound by bending and distorting cells in the ear and skin, respectively, it seems reasonable that information from a haptic filterbank can likewise be cortically integrated. Indeed, at least four types of frequency-tuned skin receptors are recognized
<xref ref-type="bibr" rid="pone.0053585-Bolanowski1">[17]</xref>
,
<xref ref-type="bibr" rid="pone.0053585-Birnbaum1">[18]</xref>
. For example, Pacinian corpuscles have peak sensitivity to vibration between 225 and 275 Hz and are found primarily within the dermis, while Meissner's corpuscles are most sensitive to vibration below 50 Hz and are found just below the epidermis. Evidence of a critical band function comes from studies showing that, as with auditory perception, perceived magnitude of pairs of pure tones presented to the skin either successively
<xref ref-type="bibr" rid="pone.0053585-Verrillo1">[19]</xref>
or simultaneously
<xref ref-type="bibr" rid="pone.0053585-Marks1">[20]</xref>
is summed only when the frequencies of the tones are widely-spaced.</p>
<p>In the current study, we demonstrate in a sample of DHH individuals the viability of the haptic sense for the discrimination of same-sex talkers. Talker identification is a disorienting problem that DHH individuals regularly face in vocational
<xref ref-type="bibr" rid="pone.0053585-Kramer1">[21]</xref>
and entertainment
<xref ref-type="bibr" rid="pone.0053585-Vy1">[22]</xref>
settings. The current findings suggest a valuable role for vibrotactile information as a supplement to the assistive listening devices used by DHH individuals.</p>
</sec>
</body>
<back>
<ack>
<p>We acknowledge Julia Kim, Lisa Liskovoi, and Gabriel Nespoli for research assistance. We are indebted to the Canadian Cultural Society of the Deaf for assistance with participant recruitment, and to Kelly Ferguson for American Sign Language interpretation. Correspondence concerning this article should be addressed to Frank A. Russo, 350 Victoria Street, Toronto, Canada, M5B 2K3. E-mail:
<email>russo@ryerson.ca</email>
.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="pone.0053585-Gault1">
<label>1</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gault</surname>
<given-names>RH</given-names>
</name>
(
<year>1924</year>
)
<article-title>Progress in experiments on tactile interpretation of oral speech</article-title>
.
<source>J Abnorm Soc Psychol</source>
<volume>19</volume>
:
<fpage>155</fpage>
<lpage>159</lpage>
</mixed-citation>
</ref>
<ref id="pone.0053585-Reed1">
<label>2</label>
<mixed-citation publication-type="journal">
<name>
<surname>Reed</surname>
<given-names>CM</given-names>
</name>
,
<name>
<surname>Rabinowitz</surname>
<given-names>WM</given-names>
</name>
,
<name>
<surname>Durlach</surname>
<given-names>NI</given-names>
</name>
,
<name>
<surname>Braida</surname>
<given-names>LD</given-names>
</name>
,
<name>
<surname>Conway-Fithian</surname>
<given-names>S</given-names>
</name>
,
<etal>et al</etal>
(
<year>1985</year>
)
<article-title>Research on the Tadoma method of speech communication</article-title>
.
<source>J Acoust Soc Am</source>
<volume>77</volume>
(
<issue>1</issue>
)
<fpage>247</fpage>
<lpage>257</lpage>
<pub-id pub-id-type="pmid">3973218</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0053585-Brooks1">
<label>3</label>
<mixed-citation publication-type="journal">
<name>
<surname>Brooks</surname>
<given-names>PL</given-names>
</name>
,
<name>
<surname>Frost</surname>
<given-names>BJ</given-names>
</name>
,
<name>
<surname>Mason</surname>
<given-names>JL</given-names>
</name>
,
<name>
<surname>Gibson</surname>
<given-names>DM</given-names>
</name>
(
<year>1985</year>
)
<article-title>Continuing evaluation of the Queen's University Tactile Vocoder: I Identification of open-set words</article-title>
.
<source>J Rehabil Eng</source>
<volume>22</volume>
:
<fpage>119</fpage>
<lpage>128</lpage>
</mixed-citation>
</ref>
<ref id="pone.0053585-Brooks2">
<label>4</label>
<mixed-citation publication-type="journal">
<name>
<surname>Brooks</surname>
<given-names>PL</given-names>
</name>
,
<name>
<surname>Frost</surname>
<given-names>BJ</given-names>
</name>
,
<name>
<surname>Mason</surname>
<given-names>JL</given-names>
</name>
,
<name>
<surname>Gibson</surname>
<given-names>DM</given-names>
</name>
(
<year>1985</year>
)
<article-title>Continuing evaluation of the Queen's University Tactile Vocoder: II Identification of open set sentences and tracking</article-title>
.
<source>J Rehabil Eng</source>
<volume>22</volume>
:
<fpage>129</fpage>
<lpage>138</lpage>
</mixed-citation>
</ref>
<ref id="pone.0053585-Brooks3">
<label>5</label>
<mixed-citation publication-type="journal">
<name>
<surname>Brooks</surname>
<given-names>PL</given-names>
</name>
,
<name>
<surname>Frost</surname>
<given-names>BJ</given-names>
</name>
,
<name>
<surname>Mason</surname>
<given-names>JL</given-names>
</name>
,
<name>
<surname>Chung</surname>
<given-names>K</given-names>
</name>
(
<year>1985</year>
)
<article-title>Identification of 250 words using a tactile vocoder</article-title>
.
<source>J Acoust Soc Am</source>
<volume>77</volume>
:
<fpage>1576</fpage>
<lpage>1579</lpage>
<pub-id pub-id-type="pmid">3157716</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0053585-Russo1">
<label>6</label>
<mixed-citation publication-type="journal">
<name>
<surname>Russo</surname>
<given-names>FA</given-names>
</name>
,
<name>
<surname>Ammirante</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Fels</surname>
<given-names>DI</given-names>
</name>
(
<year>2012</year>
)
<article-title>Vibrotactile discrimination of musical timbre</article-title>
.
<source>J Exp Psychol Hum Percept Perform</source>
<volume>38</volume>
(
<issue>4</issue>
)
<fpage>822</fpage>
<lpage>826</lpage>
<pub-id pub-id-type="pmid">22708743</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0053585-Chasin1">
<label>7</label>
<mixed-citation publication-type="other">Chasin M (2009) Hearing loss in musicians: Prevention and management. San Diego: Plural, 172 p.</mixed-citation>
</ref>
<ref id="pone.0053585-Karam1">
<label>8</label>
<mixed-citation publication-type="journal">
<name>
<surname>Karam</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Russo</surname>
<given-names>FA</given-names>
</name>
,
<name>
<surname>Branje</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Price</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Fels</surname>
<given-names>DI</given-names>
</name>
(
<year>2008</year>
)
<article-title>Towards a model human cochlea</article-title>
.
<source>ACM International Conference Proceeding Series</source>
<volume>322</volume>
:
<fpage>267</fpage>
<lpage>274</lpage>
</mixed-citation>
</ref>
<ref id="pone.0053585-Karam2">
<label>9</label>
<mixed-citation publication-type="journal">
<name>
<surname>Karam</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Russo</surname>
<given-names>FA</given-names>
</name>
,
<name>
<surname>Fels</surname>
<given-names>DI</given-names>
</name>
(
<year>2009</year>
)
<article-title>Designing the model human cochlea: An ambient crossmodal audio-tactile display</article-title>
.
<source>IEEE Trans Haptics</source>
<volume>2</volume>
:
<fpage>160</fpage>
<lpage>169</lpage>
</mixed-citation>
</ref>
<ref id="pone.0053585-Macmillan1">
<label>10</label>
<mixed-citation publication-type="other">Macmillan NA, Creelman CD (2005) Detection theory: A user's guide. Mahwah, NJ: Lawrence Erlbaum, 492 p.</mixed-citation>
</ref>
<ref id="pone.0053585-Hansen1">
<label>11</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hansen</surname>
<given-names>HM</given-names>
</name>
(
<year>1997</year>
)
<article-title>Glottal characteristics of female speakers: Acoustic correlates</article-title>
.
<source>J Acoust Soc Am</source>
<volume>101</volume>
:
<fpage>466</fpage>
<lpage>481</lpage>
<pub-id pub-id-type="pmid">9000737</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0053585-Hansen2">
<label>12</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hansen</surname>
<given-names>HM</given-names>
</name>
,
<name>
<surname>Chuang</surname>
<given-names>ES</given-names>
</name>
(
<year>1999</year>
)
<article-title>Glottal characteristics of male speakers: Acoustic correlates and comparison with female data</article-title>
.
<source>J Acoust Soc Am</source>
<volume>106</volume>
:
<fpage>1064</fpage>
<lpage>1077</lpage>
<pub-id pub-id-type="pmid">10462811</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0053585-Gordon1">
<label>13</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gordon</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Ladefoged</surname>
<given-names>P</given-names>
</name>
(
<year>2001</year>
)
<article-title>Phonation types: A cross-linguistic overview</article-title>
.
<source>J Phon</source>
<volume>29</volume>
:
<fpage>383</fpage>
<lpage>406</lpage>
</mixed-citation>
</ref>
<ref id="pone.0053585-Boersma1">
<label>14</label>
<mixed-citation publication-type="other">Boersma P, Weenink D (2010) Praat: Doing phonetics by computer (Version 5.1.37) [Computer Program]. Available:
<ext-link ext-link-type="uri" xlink:href="http://www.praat.org">http://www.praat.org</ext-link>
</mixed-citation>
</ref>
<ref id="pone.0053585-DiCanio1">
<label>15</label>
<mixed-citation publication-type="journal">
<name>
<surname>DiCanio</surname>
<given-names>CT</given-names>
</name>
(
<year>2009</year>
)
<article-title>The phonetics of register in Takhian Thong Chong</article-title>
.
<source>J Int Phon Assoc</source>
<volume>39</volume>
:
<fpage>162</fpage>
<lpage>188</lpage>
</mixed-citation>
</ref>
<ref id="pone.0053585-Green1">
<label>16</label>
<mixed-citation publication-type="other">Green DM (1988) Profile analysis: Auditory intensity discrimination. New York: Oxford University Press, 144 p.</mixed-citation>
</ref>
<ref id="pone.0053585-Bolanowski1">
<label>17</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bolanowski</surname>
<given-names>SJ</given-names>
<suffix>Jr</suffix>
</name>
,
<name>
<surname>Gescheider</surname>
<given-names>GA</given-names>
</name>
,
<name>
<surname>Verrillo</surname>
<given-names>RT</given-names>
</name>
,
<name>
<surname>Checkosky</surname>
<given-names>CM</given-names>
</name>
(
<year>1988</year>
)
<article-title>Four channels mediate the mechanical aspects of touch</article-title>
.
<source>J Acoust Soc Am</source>
<volume>84</volume>
:
<fpage>1680</fpage>
<lpage>1694</lpage>
<pub-id pub-id-type="pmid">3209773</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0053585-Birnbaum1">
<label>18</label>
<mixed-citation publication-type="other">Birnbaum DM, Wanderley MM (2007) A systematic approach to musical vibrotactile feedback. In Proceedings of the International Computer Music Conference (ICMC), Vol. 2. Denmark: ICMC, pp. 397–404.</mixed-citation>
</ref>
<ref id="pone.0053585-Verrillo1">
<label>19</label>
<mixed-citation publication-type="journal">
<name>
<surname>Verrillo</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Gescheider</surname>
<given-names>GA</given-names>
</name>
(
<year>1975</year>
)
<article-title>Enhancement and summation in the perception of two successive vibrotactile stimuli</article-title>
.
<source>Percept Psychophys</source>
<volume>18</volume>
(
<issue>2</issue>
)
<fpage>128</fpage>
<lpage>136</lpage>
</mixed-citation>
</ref>
<ref id="pone.0053585-Marks1">
<label>20</label>
<mixed-citation publication-type="journal">
<name>
<surname>Marks</surname>
<given-names>LE</given-names>
</name>
(
<year>1979</year>
)
<article-title>Summation of vibrotactile intensity: An analog to auditory critical bands?</article-title>
<source>Sens Processes</source>
<volume>3</volume>
:
<fpage>188</fpage>
<lpage>203</lpage>
<pub-id pub-id-type="pmid">232574</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0053585-Kramer1">
<label>21</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kramer</surname>
<given-names>SE</given-names>
</name>
(
<year>2008</year>
)
<article-title>Hearing impairment, work, and vocational enablement</article-title>
.
<source>Int J Audiol</source>
<volume>47</volume>
(
<issue>2</issue>
)
<fpage>124</fpage>
<lpage>130</lpage>
</mixed-citation>
</ref>
<ref id="pone.0053585-Vy1">
<label>22</label>
<mixed-citation publication-type="journal">
<name>
<surname>Vy</surname>
<given-names>QV</given-names>
</name>
,
<name>
<surname>Fels</surname>
<given-names>DI</given-names>
</name>
(
<year>2009</year>
)
<article-title>Using avatars for improving speaker identification in captioning</article-title>
.
<source>Lect Notes Comput Sci</source>
<volume>5727</volume>
:
<fpage>916</fpage>
<lpage>919</lpage>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>
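
The H1-A3 spectral tilt measure described in the record's Acoustic Analysis section can be reproduced outside Praat. The following is a minimal Python sketch of one possible reading of the reported steps (downsampling to 16 kHz, equally spaced analysis intervals, ~100 Hz bandwidth, peak search within 10% of F0 and F3). The function names, the use of SciPy's Welch estimator in place of Praat's long-term average spectrum, and the treatment of F0 and F3 as known constants rather than values tracked by linear prediction are assumptions of this sketch, not details taken from the record.

# Sketch: global H1-A3 spectral tilt of a voiced signal, following the
# steps reported above. Constant F0/F3 and Welch's method are assumptions.
import numpy as np
from scipy.signal import welch, resample_poly

def band_peak_db(freqs, spec_db, centre, tol=0.10):
    # Maximum level (dB) within +/-10% of the target frequency.
    band = (freqs >= centre * (1 - tol)) & (freqs <= centre * (1 + tol))
    return spec_db[band].max() if band.any() else np.nan

def spectral_tilt(signal, fs, f0, f3, n_intervals=20, target_fs=16000):
    if fs != target_fs:                        # downsample to 16 kHz
        signal = resample_poly(signal, target_fs, fs)
        fs = target_fs
    tilts = []
    for chunk in np.array_split(signal, n_intervals):
        # ~100 Hz analysis bandwidth -> segments of fs/100 samples
        freqs, psd = welch(chunk, fs=fs, nperseg=min(len(chunk), fs // 100))
        spec_db = 10 * np.log10(psd + 1e-12)
        h1 = band_peak_db(freqs, spec_db, f0)  # strongest peak near F0
        a3 = band_peak_db(freqs, spec_db, f3)  # strongest peak near F3
        tilts.append(h1 - a3)
    return float(np.nanmean(tilts))            # mean H1-A3 in dB

Averaging the interval-wise H1-A3 values and then taking the absolute difference between the two members of a "different" pair yields the predictor that was correlated with percent correct in the record above.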

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002244 | SxmlIndent | more
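
Here HfdSelect presumably retrieves record 002244 from the biblio.hfd index of the current curation step, and SxmlIndent pretty-prints the resulting XML for paging with more; $WICRI_ROOT is assumed to point at the root of the exploration corpora.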

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 002244 | SxmlIndent | more
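
Once extracted, a record can also be post-processed outside Dilib. Below is a minimal Python sketch, assuming the dump has been saved as record.xml and that the export declares the wicri, nlm, and xlink namespace prefixes (the indented listing above omits the declarations); the file name and the choice of fields are illustrative only.

# Sketch: pull citation fields from the <pmc> part of the record.
import xml.etree.ElementTree as ET

root = ET.parse("record.xml").getroot()      # the <record> element
meta = root.find("pmc/front/article-meta")

title = meta.findtext("title-group/article-title")
doi = meta.findtext("article-id[@pub-id-type='doi']")
pmid = meta.findtext("article-id[@pub-id-type='pmid']")
authors = [
    "{} {}".format(n.findtext("given-names"), n.findtext("surname"))
    for n in meta.findall("contrib-group/contrib[@contrib-type='author']/name")
]

print(title, "-", "doi:" + doi, "- PMID", pmid)
print("; ".join(authors))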

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:3547010
   |texte=   Feeling Voices
}}
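
The template fields mirror the record above: wiki and area locate the exploration corpus, flux and étape name the Pmc/Curation step, and the RBID key PMC:3547010 together with the title Feeling Voices identifies this document.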

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:23341954" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 
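
Read as a pipeline: HfdIndexSelect presumably resolves the key pubmed:23341954 through the RBID index to the internal record number, HfdSelect then fetches the full record from biblio.hfd, and NlmPubMed2Wicri renders it as wiki pages for the HapticV1 area.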

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024