Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

I speak fast when I move fast: the speed of illusory self-motion (vection) modulates the speed of utterances

Internal identifier: 002876 (Ncbi/Merge); previous: 002875; next: 002877


Authors: Takeharu Seno [Japan]; Keiko Ihaya [Japan]; Yuki Yamada [Japan]

Source:

RBID : PMC:3738860

Abstract

Speed of utterance is an important factor in smooth and efficient conversation. We report a technique that increases utterance speed and might improve a speaker's impression and information efficiency in conversation. We used a visual display consisting of optic flows in a large visual field that induced participants' illusory self-motion perception (vection). The speed of vection corresponded to the speed of the optic flows. Using this method, we investigated whether vection speed affects utterance speed. We presented fast- and slow-moving optic flow stimuli, as well as dynamically swapping random dots, during which time the participants were asked to talk for 2 min. Results revealed that the utterance speed was significantly faster in the fast optic flow condition. Our method could be a stepping stone for establishing a technique of modulating speech speed effectively.
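The expanding optic-flow display the abstract describes can be sketched in a few lines: scatter random dots in a simulated volume, advance the viewpoint at the condition's speed, and perspective-project the dots each frame. This is an illustrative reconstruction, not the authors' stimulus code; the dot count, volume dimensions, and focal length are assumptions (the 32 and 1 m/s speeds follow the conditions reported later in the record).

```python
import random

def project_dots(dots, viewer_z, focal=1.0):
    """Perspective-project 3D dots onto a screen plane for a viewer at
    (0, 0, viewer_z) looking down +z. Dots behind the viewer are dropped."""
    projected = []
    for x, y, z in dots:
        depth = z - viewer_z
        if depth <= 0:  # dot has passed the viewer
            continue
        projected.append((focal * x / depth, focal * y / depth))
    return projected

def simulate_flow(speed_m_per_s, n_dots=1000, n_frames=3, fps=60, seed=0):
    """Advance the viewpoint through a cloud of random dots, returning the
    projected dot positions per frame (an expanding optic-flow pattern)."""
    rng = random.Random(seed)
    dots = [(rng.uniform(-10, 10), rng.uniform(-10, 10), rng.uniform(0, 20))
            for _ in range(n_dots)]
    step = speed_m_per_s / fps  # metres travelled per frame
    return [project_dots(dots, viewer_z=i * step) for i in range(n_frames)]

fast = simulate_flow(32)  # fast condition: simulated self-motion at 32 m/s
slow = simulate_flow(1)   # slow condition: simulated self-motion at 1 m/s
```

With the same seed, the fast viewpoint overtakes more dots per frame, so fewer dots remain visible and the surviving ones stream outward faster — the property that distinguishes the two vection conditions.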


Url:
DOI: 10.3389/fpsyg.2013.00494
PubMed: 23950749
PubMed Central: 3738860


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">I speak fast when I move fast: the speed of illusory self-motion (vection) modulates the speed of utterances</title>
<author>
<name sortKey="Seno, Takeharu" sort="Seno, Takeharu" uniqKey="Seno T" first="Takeharu" last="Seno">Takeharu Seno</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Faculty of Design, Kyushu University</institution>
<country>Fukuoka, Japan</country>
</nlm:aff>
<country xml:lang="fr">Japon</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Research Center for Applied Perceptual Science, Kyushu University</institution>
<country>Fukuoka, Japan</country>
</nlm:aff>
<country xml:lang="fr">Japon</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Institute for Advanced Study, Kyushu University</institution>
<country>Fukuoka, Japan</country>
</nlm:aff>
<country xml:lang="fr">Japon</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Ihaya, Keiko" sort="Ihaya, Keiko" uniqKey="Ihaya K" first="Keiko" last="Ihaya">Keiko Ihaya</name>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<institution>Faculty of Medical Sciences, Kyushu University</institution>
<country>Fukuoka, Japan</country>
</nlm:aff>
<country xml:lang="fr">Japon</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Yamada, Yuki" sort="Yamada, Yuki" uniqKey="Yamada Y" first="Yuki" last="Yamada">Yuki Yamada</name>
<affiliation wicri:level="1">
<nlm:aff id="aff5">
<institution>Research Institute for Time Studies, Yamaguchi University</institution>
<country>Yamaguchi, Japan</country>
</nlm:aff>
<country xml:lang="fr">Japon</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">23950749</idno>
<idno type="pmc">3738860</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3738860</idno>
<idno type="RBID">PMC:3738860</idno>
<idno type="doi">10.3389/fpsyg.2013.00494</idno>
<date when="2013">2013</date>
<idno type="wicri:Area/Pmc/Corpus">001E76</idno>
<idno type="wicri:Area/Pmc/Curation">001E76</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001208</idno>
<idno type="wicri:Area/Ncbi/Merge">002876</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">I speak fast when I move fast: the speed of illusory self-motion (vection) modulates the speed of utterances</title>
<author>
<name sortKey="Seno, Takeharu" sort="Seno, Takeharu" uniqKey="Seno T" first="Takeharu" last="Seno">Takeharu Seno</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Faculty of Design, Kyushu University</institution>
<country>Fukuoka, Japan</country>
</nlm:aff>
<country xml:lang="fr">Japon</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Research Center for Applied Perceptual Science, Kyushu University</institution>
<country>Fukuoka, Japan</country>
</nlm:aff>
<country xml:lang="fr">Japon</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Institute for Advanced Study, Kyushu University</institution>
<country>Fukuoka, Japan</country>
</nlm:aff>
<country xml:lang="fr">Japon</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Ihaya, Keiko" sort="Ihaya, Keiko" uniqKey="Ihaya K" first="Keiko" last="Ihaya">Keiko Ihaya</name>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<institution>Faculty of Medical Sciences, Kyushu University</institution>
<country>Fukuoka, Japan</country>
</nlm:aff>
<country xml:lang="fr">Japon</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Yamada, Yuki" sort="Yamada, Yuki" uniqKey="Yamada Y" first="Yuki" last="Yamada">Yuki Yamada</name>
<affiliation wicri:level="1">
<nlm:aff id="aff5">
<institution>Research Institute for Time Studies, Yamaguchi University</institution>
<country>Yamaguchi, Japan</country>
</nlm:aff>
<country xml:lang="fr">Japon</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in Psychology</title>
<idno type="eISSN">1664-1078</idno>
<imprint>
<date when="2013">2013</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Speed of utterance is an important factor in smooth and efficient conversation. We report a technique that increases utterance speed and might improve a speaker's impression and information efficiency in conversation. We used a visual display consisting of optic flows in a large visual field that induced participants' illusory self-motion perception (vection). The speed of vection corresponded to the speed of the optic flows. Using this method, we investigated whether vection speed affects utterance speed. We presented fast- and slow-moving optic flow stimuli, as well as dynamically swapping random dots, during which time the participants were asked to talk for 2 min. Results revealed that the utterance speed was significantly faster in the fast optic flow condition. Our method could be a stepping stone for establishing a technique of modulating speech speed effectively.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Ash, A" uniqKey="Ash A">A. Ash</name>
</author>
<author>
<name sortKey="Palmisano, S" uniqKey="Palmisano S">S. Palmisano</name>
</author>
<author>
<name sortKey="Kim, J" uniqKey="Kim J">J. Kim</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Buhusi, C V" uniqKey="Buhusi C">C. V. Buhusi</name>
</author>
<author>
<name sortKey="Meck, W H" uniqKey="Meck W">W. H. Meck</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Droit Volet, S" uniqKey="Droit Volet S">S. Droit-Volet</name>
</author>
<author>
<name sortKey="Meck, W H" uniqKey="Meck W">W. H. Meck</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fischer, M H" uniqKey="Fischer M">M. H. Fischer</name>
</author>
<author>
<name sortKey="Kornmuller, A E" uniqKey="Kornmuller A">A. E. Kornmüller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gibbon, J" uniqKey="Gibbon J">J. Gibbon</name>
</author>
<author>
<name sortKey="Church, R M" uniqKey="Church R">R. M. Church</name>
</author>
<author>
<name sortKey="Meck, W H" uniqKey="Meck W">W. H. Meck</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gibbon, J" uniqKey="Gibbon J">J. Gibbon</name>
</author>
<author>
<name sortKey="Malapani, C" uniqKey="Malapani C">C. Malapani</name>
</author>
<author>
<name sortKey="Dale, C L" uniqKey="Dale C">C. L. Dale</name>
</author>
<author>
<name sortKey="Gallistel, C R" uniqKey="Gallistel C">C. R. Gallistel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Helm, N A" uniqKey="Helm N">N. A. Helm</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lecuyer, A" uniqKey="Lecuyer A">A. Lécuyer</name>
</author>
<author>
<name sortKey="Vidal, M" uniqKey="Vidal M">M. Vidal</name>
</author>
<author>
<name sortKey="Joly, O" uniqKey="Joly O">O. Joly</name>
</author>
<author>
<name sortKey="Megard, C" uniqKey="Megard C">C. Megard</name>
</author>
<author>
<name sortKey="Berthoz, A" uniqKey="Berthoz A">A. Berthoz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lee, B S" uniqKey="Lee B">B. S. Lee</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lewis, P A" uniqKey="Lewis P">P. A. Lewis</name>
</author>
<author>
<name sortKey="Miall, R C" uniqKey="Miall R">R. C. Miall</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mauk, M D" uniqKey="Mauk M">M. D. Mauk</name>
</author>
<author>
<name sortKey="Buonomano, D V" uniqKey="Buonomano D">D. V. Buonomano</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Meck, W H" uniqKey="Meck W">W. H. Meck</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miller, N" uniqKey="Miller N">N. Miller</name>
</author>
<author>
<name sortKey="Maruyama, G" uniqKey="Maruyama G">G. Maruyama</name>
</author>
<author>
<name sortKey="Beaber, R J" uniqKey="Beaber R">R. J. Beaber</name>
</author>
<author>
<name sortKey="Valone, K" uniqKey="Valone K">K. Valone</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Riecke, B E" uniqKey="Riecke B">B. E. Riecke</name>
</author>
<author>
<name sortKey="Feuereissen, D" uniqKey="Feuereissen D">D. Feuereissen</name>
</author>
<author>
<name sortKey="Rieser, J J" uniqKey="Rieser J">J. J. Rieser</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sasaki, K" uniqKey="Sasaki K">K. Sasaki</name>
</author>
<author>
<name sortKey="Seno, T" uniqKey="Seno T">T. Seno</name>
</author>
<author>
<name sortKey="Yamada, Y" uniqKey="Yamada Y">Y. Yamada</name>
</author>
<author>
<name sortKey="Miura, K" uniqKey="Miura K">K. Miura</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Seno, T" uniqKey="Seno T">T. Seno</name>
</author>
<author>
<name sortKey="Ito, H" uniqKey="Ito H">H. Ito</name>
</author>
<author>
<name sortKey="Sunaga, S" uniqKey="Sunaga S">S. Sunaga</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Seno, T" uniqKey="Seno T">T. Seno</name>
</author>
<author>
<name sortKey="Ogawa, M" uniqKey="Ogawa M">M. Ogawa</name>
</author>
<author>
<name sortKey="Ito, H" uniqKey="Ito H">H. Ito</name>
</author>
<author>
<name sortKey="Sunaga, S" uniqKey="Sunaga S">S. Sunaga</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Seno, T" uniqKey="Seno T">T. Seno</name>
</author>
<author>
<name sortKey="Kawabe, T" uniqKey="Kawabe T">T. Kawabe</name>
</author>
<author>
<name sortKey="Ito, H" uniqKey="Ito H">H. Ito</name>
</author>
<author>
<name sortKey="Sunaga, S" uniqKey="Sunaga S">S. Sunaga</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Smith, B L" uniqKey="Smith B">B. L. Smith</name>
</author>
<author>
<name sortKey="Brown, B L" uniqKey="Brown B">B. L. Brown</name>
</author>
<author>
<name sortKey="Strong, W J" uniqKey="Strong W">W. J. Strong</name>
</author>
<author>
<name sortKey="Rencher, A C" uniqKey="Rencher A">A. C. Rencher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Watanabe, K" uniqKey="Watanabe K">K. Watanabe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wright, W G" uniqKey="Wright W">W. G. Wright</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yates, A J" uniqKey="Yates A">A. J. Yates</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Psychol</journal-id>
<journal-id journal-id-type="iso-abbrev">Front Psychol</journal-id>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Psychology</journal-title>
</journal-title-group>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">23950749</article-id>
<article-id pub-id-type="pmc">3738860</article-id>
<article-id pub-id-type="doi">10.3389/fpsyg.2013.00494</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>I speak fast when I move fast: the speed of illusory self-motion (vection) modulates the speed of utterances</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Seno</surname>
<given-names>Takeharu</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
<xref ref-type="author-notes" rid="fn001">
<sup>*</sup>
</xref>
<xref ref-type="author-notes" rid="fn003">
<sup></sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Ihaya</surname>
<given-names>Keiko</given-names>
</name>
<xref ref-type="aff" rid="aff4">
<sup>4</sup>
</xref>
<xref ref-type="author-notes" rid="fn003">
<sup></sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Yamada</surname>
<given-names>Yuki</given-names>
</name>
<xref ref-type="aff" rid="aff5">
<sup>5</sup>
</xref>
<xref ref-type="author-notes" rid="fn003">
<sup></sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Faculty of Design, Kyushu University</institution>
<country>Fukuoka, Japan</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Research Center for Applied Perceptual Science, Kyushu University</institution>
<country>Fukuoka, Japan</country>
</aff>
<aff id="aff3">
<sup>3</sup>
<institution>Institute for Advanced Study, Kyushu University</institution>
<country>Fukuoka, Japan</country>
</aff>
<aff id="aff4">
<sup>4</sup>
<institution>Faculty of Medical Sciences, Kyushu University</institution>
<country>Fukuoka, Japan</country>
</aff>
<aff id="aff5">
<sup>5</sup>
<institution>Research Institute for Time Studies, Yamaguchi University</institution>
<country>Yamaguchi, Japan</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: Noel Nguyen, Université d'Aix-Marseille, France</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: Sarah Brown-Schmidt, University of Illinois, USA; Harold H. Greene, University of Detroit Mercy, USA</p>
</fn>
<corresp id="fn001">*Correspondence: Takeharu Seno, Faculty of Design, Kyushu University, 4-9-1 Shiobaru, Minami-ku, Fukuoka 815-8540, Japan e-mail:
<email xlink:type="simple">seno@design.kyushu-u.ac.jp</email>
</corresp>
<fn fn-type="other" id="fn002">
<p>This article was submitted to Frontiers in Cognitive Science, a specialty of Frontiers in Psychology.</p>
</fn>
<fn fn-type="present-address" id="fn003">
<p>†These authors have contributed equally to this work.</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>09</day>
<month>8</month>
<year>2013</year>
</pub-date>
<pub-date pub-type="collection">
<year>2013</year>
</pub-date>
<volume>4</volume>
<elocation-id>494</elocation-id>
<history>
<date date-type="received">
<day>01</day>
<month>4</month>
<year>2013</year>
</date>
<date date-type="accepted">
<day>15</day>
<month>7</month>
<year>2013</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2013 Seno, Ihaya and Yamada.</copyright-statement>
<copyright-year>2013</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/3.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>Speed of utterance is an important factor in smooth and efficient conversation. We report a technique that increases utterance speed and might improve a speaker's impression and information efficiency in conversation. We used a visual display consisting of optic flows in a large visual field that induced participants' illusory self-motion perception (vection). The speed of vection corresponded to the speed of the optic flows. Using this method, we investigated whether vection speed affects utterance speed. We presented fast- and slow-moving optic flow stimuli, as well as dynamically swapping random dots, during which time the participants were asked to talk for 2 min. Results revealed that the utterance speed was significantly faster in the fast optic flow condition. Our method could be a stepping stone for establishing a technique of modulating speech speed effectively.</p>
</abstract>
<kwd-group>
<kwd>vection</kwd>
<kwd>utterance</kwd>
</kwd-group>
<counts>
<fig-count count="1"></fig-count>
<table-count count="0"></table-count>
<equation-count count="0"></equation-count>
<ref-count count="22"></ref-count>
<page-count count="5"></page-count>
<word-count count="4527"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>Utterance speed is an important factor in smooth and efficient conversation. In addition, it is known that utterance speed offers clues in estimating the personality of another person. For example, rapid utterances tend to increase the impression that the speaker has a high degree of competence (Smith et al.,
<xref ref-type="bibr" rid="B20">1975</xref>
) and as such tend to promote persuasion (Miller et al.,
<xref ref-type="bibr" rid="B14">1976</xref>
). Moreover, relatively slower utterance speeds will reduce the amount of information conveyed during conversation, even though in certain contexts they can suggest that the speaker has a calm and gentle personality. For people who live a modern, fast-paced lifestyle, such as businesspersons, it is often necessary to verbally send large amounts of information to listeners within a given period of time (e.g., in the situation of a telephone call). Hence, it clearly seems that the development of techniques to increase utterance speed would be beneficial to improve the impression left by the speaker on the listener, as well as the information efficiency in such conversations.</p>
<p>A number of methods for modulating utterance speed have been developed, such as the pacing board (Helm,
<xref ref-type="bibr" rid="B7">1979</xref>
) and DAF [Delayed Auditory Feedback (Lee,
<xref ref-type="bibr" rid="B10">1950</xref>
; Yates,
<xref ref-type="bibr" rid="B23">1963</xref>
)]. The pacing board consists of a narrow board with seven one-foot long divisions. The speaker uses the board by pointing to a different division for each syllable being uttered. In DAF, delayed feedback of a speaker's own utterances is given to the speaker. These methods serve the purpose of reducing the overall speed of utterances. However, there are still no practically valid methods for increasing the speed of utterances. Moreover, the pacing board method requires speakers to always have their hands occupied, which can be inconvenient for making hand gestures. Therefore, hands-free methods for increasing utterance speed are needed. In this study, we attempted to develop a new technique based on visual presentation to increase the utterance speed of speakers. While our method also might limit speakers' behavior somewhat in terms of visual distraction, considering the fact that similar methods have already been used in information presentation technologies using augmented reality, e.g., projection onto the front glass of a car, our method should similarly be implementable with such technologies and optimized to minimize the visual load on speakers.</p>
<p>In the current study we focused on vection, a class of motion perception. When stationary participants are exposed to a large visual motion field that simulates the retinal flow generated by self-translation or self-rotation, they often experience an illusory perception of self-motion; this phenomenon is known as vection (Fischer and Kornmüller,
<xref ref-type="bibr" rid="B4">1930</xref>
). Vection is inherently susceptible to sensory processing in modalities other than vision. For example, vection has been facilitated by locomotion (Seno et al.,
<xref ref-type="bibr" rid="B17">2011a</xref>
) and by wind to the face (Seno et al.,
<xref ref-type="bibr" rid="B18">2011b</xref>
). Furthermore, consistent vestibular input (Wright,
<xref ref-type="bibr" rid="B22">2009</xref>
), consistent head movements (Ash et al.,
<xref ref-type="bibr" rid="B1">2011</xref>
), and consistent somatosensory cues added to a hand also facilitate vection (Lécuyer et al.,
<xref ref-type="bibr" rid="B9">2004</xref>
). In addition, vection can be further facilitated by vibrations (subsonics) consistent with visual rotation (Riecke et al.,
<xref ref-type="bibr" rid="B15">2008</xref>
).</p>
<p>Note that vection and action are related, in particular with respect to speed. Although previous studies have not directly examined this relationship, considering the accumulated knowledge on the interplay between vection and other sensory processing, vection is likely to interact with action as well. One previous study did relate the speed of visual stimuli to the speed of human action: Watanabe (
<xref ref-type="bibr" rid="B21">2007</xref>
) reported that when participants watched fast-moving biological motion, their simple response time became shorter than when they observed slower biological motion, suggesting that, under certain circumstances, the speed of dynamic stimuli increases a participant's action speed. We hypothesized that this effect found by Watanabe could be expanded to other visual stimuli that induce not only the perception of object motion, but also self-motion. That is, the speed of visual stimuli comprising the optic flow may also affect action speed. We assume that an utterance represents a class of such actions; as such, it is possible that vection, which is induced by optic flows, affects utterance speed.</p>
<p>The present study aimed at investigating whether vection speed can modulate utterance speed. To this end, we employed fast and slow optic flow stimuli to induce fast and slow vection, respectively. In addition, dynamically swapping random dots that did not induce vection were used as control stimuli. We assumed that if the speed of vection governed the speed of utterance, then a participant's utterance speed would be accelerated when viewing the optic flow stimuli compared to the control stimuli. We hypothesized that the fast and slow optic flow conditions would accelerate utterance speed to a greater degree than the dynamic random dot condition, and that the degree of the modulation would be larger in the fast optic flow condition than in the slow condition.</p>
</sec>
<sec sec-type="methods" id="s2">
<title>Methods</title>
<sec>
<title>Participants</title>
<p>Fifteen adult volunteers participated in the experiment. The participants were either graduate or undergraduate students, with no reported visual or vestibular abnormalities. All participants were naive as to the purpose of the present study.</p>
</sec>
<sec>
<title>Apparatus and stimuli</title>
<p>Stimulus images were generated and controlled by a computer (Apple, MB543J/A). These stimuli were presented on a plasma display (3D VIERA, 50 inches; Panasonic) with a 1024 × 768 pixel resolution at a 60-Hz refresh rate. The experiments were conducted in a dark chamber. The viewing distance was 57 cm. An IC recorder (Roland, R-09HR) was used to record the speech of the participants.</p>
<p>In the experiment, we presented three types of visual stimuli: fast and slow optic flow stimuli, and dynamic random dots (DRD). These three stimulus types corresponded to fast vection, slow vection, and the absence of vection, respectively. We used optic flow stimuli involving expansion and contraction. Stimuli were created by randomly positioning 16,000 dots inside a simulated cube, and then moving the participant's viewpoint to simulate forward self-motion at 32 or 1 m/s, corresponding to the fast or slow optic flow conditions, respectively. In addition, DRD were presented at 0.1 Hz (1240 dots/frame). The velocities of the dots ranged from 0 to 45°/s in the fast vection condition and from 0 to 1.4°/s in the slow vection condition; the dots had no velocity (0°/s) in the DRD condition. The results confirmed that both the fast and slow optic flow stimuli induced substantial vection, and that the DRD stimuli did not induce any vection. The participants were instructed to gaze at the center of the screen. Although gaze direction was not specifically recorded, no participant reported that their gaze deviated substantially from the center of the screen.</p>
</sec>
<sec>
<title>Procedure</title>
<p>In each trial, participants viewed each of the stimuli for the duration of the trial. All participants participated in all three experimental conditions. The order of the three conditions was fully randomized. In each condition, the trial was repeated once. During stimulus presentation, participants were instructed to speak for 2 min on one of six topics provided by the experimenter. The topics were related to the self (hobbies, childhood, grade school days, university days, people they respected, and their personalities). Three of the six topics were randomly presented to the participants; specifically, the topic “childhood” was assigned to seven participants, “hobbies” to nine, “grade school days” to eight, “personality” to six, “respected person” to seven, and “university days” to eight participants. The assignment of the six topics and the ordering of the three optic flow conditions were also randomized by the computer. All three conditions were conducted successively on the same day without a large temporal gap. The experimenter recorded all speech with an IC recorder and initiated each speech with an oral cue such as “Please start.” The speech duration was defined as the 2-min period beginning from the point when the participants began to speak.</p>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<p>A third person who did not know the purpose of the experiment calculated the total duration of the speech, the total number of morae it contained, and the utterance speed (morae/sec). A mora is a phonological unit that determines syllable weight, which in languages such as Japanese determines stress or timing. Moreover, the total utterance disruption, i.e., the periods without sound or meaningful speech, was also calculated; for example, sounds like “Ah” or “Uh” were counted as disruption. The speech was analyzed using Audacity (The Audacity Team) and Wavez (Osamu Kurai) software. The coding criteria for the audio data remained constant for the duration of all analyses.</p>
<p>As shown in Figure
<xref ref-type="fig" rid="F1">1</xref>
, the fast optic flow condition yielded the longest speech duration, the largest number of morae, and the fastest utterance speed among the stimulus conditions. A One-Way analysis of variance (ANOVA) with stimulus condition as a within-subject factor revealed a significant main effect of condition in all three measures [duration:
<italic>F</italic>
<sub>(2, 14)</sub>
= 4.14,
<italic>p</italic>
< 0.03,
<italic>p</italic>
<sub>rep</sub>
= 0.94, η
<sup>2</sup>
<sub>
<italic>p</italic>
</sub>
= 0.28; mora:
<italic>F</italic>
<sub>(2, 14)</sub>
= 9.26,
<italic>p</italic>
< 0.0009,
<italic>p</italic>
<sub>rep</sub>
= 0.99, η
<sup>2</sup>
<sub>
<italic>p</italic>
</sub>
= 0.40; speed:
<italic>F</italic>
<sub>(2, 14)</sub>
= 9.67,
<italic>p</italic>
< 0.0007,
<italic>p</italic>
<sub>rep</sub>
= 0.99, η
<sup>2</sup>
<sub>
<italic>p</italic>
</sub>
= 0.41]. Multiple comparisons using Ryan's method revealed that utterance speed was significantly higher in the fast optic flow condition than in the slow optic flow and DRD conditions (
<italic>p</italic>
s < 0.006). Moreover, there were significantly more morae in the fast optic flow condition than in the slow optic flow and DRD conditions (
<italic>p</italic>
s < 0.01). Fast vection induced fast utterance speed. Moreover, differences in duration between the fast and slow conditions, between the fast and DRD conditions, and between the slow and DRD conditions were significant (
<italic>p</italic>
s < 0.05).</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>Results of the experiment</bold>
. The results for
<bold>(A)</bold>
number of morae,
<bold>(B)</bold>
speech duration, and
<bold>(C)</bold>
utterance speed in each of the stimulus conditions are shown. The labels “Fast” and “Slow” represent the results of the fast optic flow and slow optic flow conditions, respectively. Error bars denote the standard errors of the mean.</p>
</caption>
<graphic xlink:href="fpsyg-04-00494-g0001"></graphic>
</fig>
<p>Furthermore, we also calculated the average duration per mora. The results again showed that the duration of each mora was shortest in the fast optic flow condition. The mean values of duration/mora were 0.156 (
<italic>SD</italic>
= 0.022), 0.175 (
<italic>SD</italic>
= 0.032), and 0.268 (
<italic>SD</italic>
= 0.024) seconds for the fast, slow, and DRD conditions, respectively. A One-Way ANOVA revealed a significant main effect of the three conditions [
<italic>F</italic>
<sub>(2, 28)</sub>
= 8.42,
<italic>p</italic>
< 0.002,
<italic>p</italic>
<sub>rep</sub>
= 0.99, η
<sup>2</sup>
<sub>
<italic>p</italic>
</sub>
= 0.38].
<italic>Post-hoc</italic>
multiple comparisons (Ryan's method) revealed significant differences between the fast and the other two conditions [fast vs. slow:
<italic>t</italic>
<sub>(28)</sub>
= 4.07,
<italic>p</italic>
< 0.0004, Cohen's
<italic>d</italic>
= 2.18; fast vs. DRD:
<italic>t</italic>
<sub>(28)</sub>
= 2.51,
<italic>p</italic>
< 0.02, Cohen's
<italic>d</italic>
= 1.34] but there was no significant difference between the slow and DRD conditions [
<italic>t</italic>
<sub>(28)</sub>
= 1.55,
<italic>p</italic>
> 0.13, Cohen's
<italic>d</italic>
= 0.83]. These results clearly indicated that speakers produced each mora more quickly in the fast condition.</p>
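Each pairwise comparison above is reported with a Cohen's d. The exact variant the authors used is not stated, so the sketch below assumes one common paired-samples convention: the mean of the per-participant differences divided by the standard deviation of those differences. The mora durations are made up for illustration.

```python
import numpy as np

def cohens_d_paired(x, y):
    """Cohen's d for paired samples: mean difference divided by the
    standard deviation of the differences (one common convention)."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return diff.mean() / diff.std(ddof=1)

# Hypothetical per-participant mora durations (s) in two conditions.
fast = np.array([0.150, 0.160, 0.155, 0.158])
slow = np.array([0.170, 0.180, 0.172, 0.181])
print(cohens_d_paired(slow, fast))
```

Note that paired-samples d can exceed the usual "large effect" benchmarks when the within-pair differences are very consistent, which is compatible with the d values above 2 reported here.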
<p>We speculate that not only the number of morae but also the duration increased in the fast vection condition because this condition might have activated language-processing mechanisms in the brain, which then induced a faster utterance speed and produced the differences observed in each utterance index.</p>
<p>We also calculated the number of morae and the duration corresponding to each topic. The values were approximately 660 morae and 100 s, respectively, for all six topics. For example, for the topic “childhood,” the mean number of morae and the mean duration were 664.8 (
<italic>SD</italic>
= 181.7) and 103.2 (
<italic>SD</italic>
= 10.4), respectively. We then conducted One-Way ANOVAs on these two measures, which revealed no significant main effect of topic on either the number of morae or the duration [mora:
<italic>F</italic>
<sub>(5, 39)</sub>
= 0.15,
<italic>p</italic>
> 0.97,
<italic>p</italic>
<sub>rep</sub>
= 0.51, η
<sup>2</sup>
<sub>
<italic>p</italic>
</sub>
= 0.02; duration:
<italic>F</italic>
<sub>(5, 39)</sub>
= 0.18,
<italic>p</italic>
> 0.96,
<italic>p</italic>
<sub>rep</sub>
= 0.51, η
<sup>2</sup>
<sub>
<italic>p</italic>
</sub>
= 0.02]. Furthermore, there was no significant difference in utterance speed across the six topics [
<italic>F</italic>
<sub>(5, 39)</sub>
= 0.33,
<italic>p</italic>
> 0.89,
<italic>p</italic>
<sub>rep</sub>
= 0.54, η
<sup>2</sup>
<sub>
<italic>p</italic>
</sub>
= 0.04]. Taken together, these results show that the six topics had neither a positive nor a negative effect on any aspect of the utterances.</p>
<p>There was also the possibility that faster utterances occurred at the expense of fluency and intelligibility. We therefore conducted an additional experiment in which naive volunteers, who had not taken part in the main speech experiment, evaluated the fluency and intelligibility of each speech sample. Nine additional participants listened to the 2-min recordings from the main experiment and rated the fluency and intelligibility of the utterances on 11-point scales (from 0, not fluent/intelligible at all, to 10, very fluent/very intelligible). Results showed that subjective intelligibility did not differ across the three experimental conditions. The obtained values of subjective intelligibility for the three conditions were as follows: fast (
<italic>M</italic>
= 6.31,
<italic>SD</italic>
= 1.27), slow (
<italic>M</italic>
= 6.07,
<italic>SD</italic>
= 1.51), and DRD (
<italic>M</italic>
= 6.21,
<italic>SD</italic>
= 1.12). A One-Way ANOVA revealed no significant main effect for the three conditions [
<italic>F</italic>
<sub>(2, 16)</sub>
= 1.97,
<italic>p</italic>
> 0.17,
<italic>p</italic>
<sub>rep</sub>
= 0.83, η
<sup>2</sup>
<sub>
<italic>p</italic>
</sub>
= 0.20]. Conversely, subjective fluency differed across the three conditions; it was the highest (
<italic>M</italic>
= 6.09,
<italic>SD</italic>
= 1.56) in the fast vection condition, not as high (
<italic>M</italic>
= 5.90,
<italic>SD</italic>
= 1.63) in the DRD condition, and the lowest (
<italic>M</italic>
= 5.74,
<italic>SD</italic>
= 1.63) in the slow vection condition. A One-Way ANOVA revealed a significant main effect for the three conditions [
<italic>F</italic>
<sub>(2, 16)</sub>
= 14.48,
<italic>p</italic>
< 0.0003,
<italic>p</italic>
<sub>rep</sub>
= 0.99, η
<sup>2</sup>
<sub>
<italic>p</italic>
</sub>
= 0.64]. Multiple comparisons revealed significant differences between the fast and slow [
<italic>t</italic>
<sub>(16)</sub>
= 5.38,
<italic>p</italic>
< 0.0001, Cohen's
<italic>d</italic>
= 3.80], the fast and DRD [
<italic>t</italic>
<sub>(16)</sub>
= 2.92,
<italic>p</italic>
< 0.02, Cohen's
<italic>d</italic>
= 2.05], and the slow and DRD [
<italic>t</italic>
<sub>(16)</sub>
= 2.48,
<italic>p</italic>
< 0.03, Cohen's
<italic>d</italic>
= 1.75] conditions. In neither evaluation did the fast vection condition yield the lowest values, indicating that the observed fast speech was not produced at the expense of fluency and intelligibility.</p>
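The pairwise comparisons above are t-tests evaluated under Ryan's procedure, which here appears to use the ANOVA error term (hence the reported t(16) rather than t with rater-based degrees of freedom). As a minimal illustration only, the sketch below computes the plain paired t statistic on hypothetical ratings; the Ryan alpha adjustment and the pooled error df are omitted.

```python
import numpy as np

def paired_t(x, y):
    """Paired-samples t statistic and its degrees of freedom (n - 1)."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    n = diff.size
    t = diff.mean() / (diff.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Hypothetical fluency ratings (0-10) from 9 raters in two conditions.
fast = np.array([6.2, 6.0, 6.5, 5.8, 6.3, 6.1, 5.9, 6.4, 6.0])
slow = np.array([5.8, 5.6, 6.0, 5.5, 5.9, 5.7, 5.6, 6.1, 5.5])
t, df = paired_t(fast, slow)
print(t, df)  # df == 8
```

With consistent positive differences, t comes out positive, mirroring the direction of the fluency result reported above.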
<p>Furthermore, to exclude the possibility of degraded fluency and intelligibility by a more objective analysis, we calculated the total duration of disruptions in each speech sample. The mean disruption durations were 20.87 (
<italic>SD</italic>
= 10.86), 25.64 (
<italic>SD</italic>
= 10.65), and 23.30 (
<italic>SD</italic>
= 11.18) seconds for the fast, slow, and DRD conditions, respectively. A One-Way ANOVA revealed a marginally significant main effect of the three conditions in the total disruption duration [
<italic>F</italic>
<sub>(2, 28)</sub>
= 3.33,
<italic>p</italic>
= 0.0504,
<italic>p</italic>
<sub>rep</sub>
= 0.92, η
<sup>2</sup>
<sub>
<italic>p</italic>
</sub>
= 0.19]. Although the main effect did not reach significance, the
<italic>p</italic>
-value was quite close to the significance level (α = 0.05), and hence we conducted
<italic>post-hoc</italic>
multiple comparisons to reveal further the differences between the conditions. The multiple comparisons (Ryan's method) revealed that there was a significant difference between the fast and slow conditions [
<italic>t</italic>
<sub>(28)</sub>
= 2.58,
<italic>p</italic>
< 0.02, Cohen's
<italic>d</italic>
= 1.38] but that there were not significant differences between the fast and DRD [
<italic>t</italic>
<sub>(28)</sub>
= 1.31,
<italic>p</italic>
> 0.19, Cohen's
<italic>d</italic>
= 0.70] and between the DRD and slow conditions [
<italic>t</italic>
<sub>(28)</sub>
= 1.27,
<italic>p</italic>
> 0.21, Cohen's
<italic>d</italic>
= 0.68], indicating that speakers paused less in the fast condition, although the effect size was relatively small. These results are consistent with our main finding that utterance speed increased. We speculate that utterance and disruption are mediated by a unitary mechanism that was modulated by the fast vection, so that both changed simultaneously.</p>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>In this study, we attempted to develop a method of modulating utterance speed via visual stimulation that induces vection. To this end, we tested whether vection speed affected utterance speed. Two visual displays with different optic flow speeds were used: one induced fast vection, and the other induced slow vection. We predicted that both vection displays would induce faster utterances than a non-vection DRD display, and that the fast vection display would induce faster utterances than the slow one. The results partially supported this prediction: the fast vection display significantly accelerated utterance speed, but no such effect was obtained for the slow vection display. This may reflect the fact that the slow vection stimuli did not differ greatly from the randomly generated, non-vection DRD display.</p>
<p>One might argue that some of the topics used as speech prompts were easier to discuss in the fast optic flow condition than in the slow optic flow condition, thereby yielding more fluent utterances (i.e., more morae and faster utterances in the fast optic flow condition). However, the topics were randomly chosen for each condition and each participant, so any topic-related bias in speech difficulty was counterbalanced by this manipulation. The possibility that some topics induced faster utterances should therefore be negligible in the present experiment. In addition, we showed that the six topics had neither a positive nor a negative effect on utterance speed.</p>
<p>The nature of the mechanism underlying our findings poses an intriguing question. One possible explanation is related to cognitive or semantic modulation. Semantic and cognitive representations of “fast” may be consistent across utterance speed and self-motion. This semantic connection might have been activated in the participants' minds during the experiment, yielding the current results. We previously reported that upward vection induced positive memories (Seno et al.,
<xref ref-type="bibr" rid="B19">2013</xref>
) and also that positive sounds enhanced upward vection (Sasaki et al.,
<xref ref-type="bibr" rid="B16">2012</xref>
). Thus, there is evidence that semantic-cognitive consistency, i.e., the connection of semantic representations of “upward” and “positivity,” can modulate both vection and cognition. A similar type of modulation also likely occurred in the present study, i.e., in the modulation of utterance speed. However, this account cannot explain why the present study found the effect of vection only in cases of acceleration.</p>
<p>It is also possible that the acceleration effect we observed is related to arousal level, especially if fast vection stimuli increased the participants' level of arousal. In previous studies of time perception, the notion of an “internal clock” has been proposed as a general pacemaker that governs the temporal aspects of human perception and action (Gibbon et al.,
<xref ref-type="bibr" rid="B5">1984</xref>
; Meck,
<xref ref-type="bibr" rid="B13">2005</xref>
). The neural basis of this internal clock-like time measurement system has been debated (Gibbon et al.,
<xref ref-type="bibr" rid="B6">1997</xref>
; Mauk and Buonomano,
<xref ref-type="bibr" rid="B12">2004</xref>
; Buhusi and Meck,
<xref ref-type="bibr" rid="B2">2005</xref>
; Meck,
<xref ref-type="bibr" rid="B13">2005</xref>
; Lewis and Miall,
<xref ref-type="bibr" rid="B11">2006</xref>
). Furthermore, a number of previous studies have reported the effects of arousal on the internal clock (Droit-Volet and Meck,
<xref ref-type="bibr" rid="B3">2007</xref>
), with increased arousal speeding up the internal clock. Our results suggest that this arousal-based speeding up of the internal clock may have modulated mental tempo, increasing the participants' utterance speed. Because the increased clock speed was not consciously perceived, it subconsciously accelerated the tempo of utterances; it is therefore unlikely that participants deliberately slowed down in response to an awareness of speaking faster. In another study, we also found that vection could modulate arousal levels and mental tempo (Ihaya et al., submitted). This account may also explain why only an acceleration effect was observed: the arousal levels evoked by the slow optic flow and the DRD stimuli may have been comparable. This account is more plausible than the cognitive account discussed above. Other possibilities remain; for example, the fast vection condition may have increased mental stress and thereby induced faster utterances. Future work should examine these possibilities in more detail.</p>
<p>The acceleration effect observed in the present study raises a number of interesting questions for future research. For example, how long do these effects last? Are there marked individual differences? What stimulus speed and exposure duration generate the maximum effect? Understanding these points should lead not only to instant adjustment techniques, but also to techniques for learning or pre-adjusting utterance speed; that is, a speaker worried about his or her utterance speed could exploit a long-lasting acceleration effect to speed up the relevant processes before the actual speaking situation.</p>
<p>Other avenues for future research include examining whether other types of illusory self-motion (such as auditory vection and vestibularly simulated self-motion) can also modulate utterance speed. If so, our proposed method would be applicable even to the visually impaired. Thus, as a first step toward a technique for improving utterance speed, the present study is also a stepping stone toward effective and easy-to-use techniques for the practical treatment of a variety of clinical problems related to slow utterance speeds.</p>
<sec>
<title>Conflict of interest statement</title>
<p>The first and third authors were aided by the Japan Society for the Promotion of Science. The first author is supported by Funds for the Development of Human Resources in Science and Technology (Japan Science and Technology Agency). This work is supported by the Program to Disseminate Tenure Tracking System, Ministry of Education, Culture, Sports, Science and Technology, Japan. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>We thank Kyoshiro Sasaki for his extensive efforts in analyzing all of the speech sound data. We also thank Masaki Ogawa for his help in analyzing data. The first and third authors were aided by the Japan Society for Promotion of Science. The first author is supported by Funds for the Development of Human Resources in Science and Technology (Japan Science and Technology Agency). This work is supported by Program to Disseminate Tenure Tracking System, Ministry of Education, Culture, Sports, Science and Technology, Japan.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ash</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Palmisano</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Vection in depth during consistent and inconsistent multisensory stimulation</article-title>
.
<source>Perception</source>
<volume>40</volume>
,
<fpage>155</fpage>
<lpage>174</lpage>
<pub-id pub-id-type="doi">10.1068/p6837</pub-id>
<pub-id pub-id-type="pmid">21650090</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Buhusi</surname>
<given-names>C. V.</given-names>
</name>
<name>
<surname>Meck</surname>
<given-names>W. H.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>What makes us tick? Functional and neural mechanisms of interval timing</article-title>
.
<source>Nat. Rev. Neurosci</source>
.
<volume>6</volume>
,
<fpage>755</fpage>
<lpage>765</lpage>
<pub-id pub-id-type="doi">10.1038/nrn1764</pub-id>
<pub-id pub-id-type="pmid">16163383</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Droit-Volet</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Meck</surname>
<given-names>W. H.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>How emotions colour our perception of time</article-title>
.
<source>Trends Cogn. Sci</source>
.
<volume>11</volume>
,
<fpage>504</fpage>
<lpage>513</lpage>
<pub-id pub-id-type="pmid">18023604</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fischer</surname>
<given-names>M. H.</given-names>
</name>
<name>
<surname>Kornmüller</surname>
<given-names>A. E.</given-names>
</name>
</person-group>
(
<year>1930</year>
).
<article-title>Optokinetisch ausgelöste Bewegungswahrnehmungen und optokinetischer Nystagmus</article-title>
.
<source>J. Psychol. Neurol</source>
.
<volume>41</volume>
,
<fpage>273</fpage>
<lpage>308</lpage>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gibbon</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Church</surname>
<given-names>R. M.</given-names>
</name>
<name>
<surname>Meck</surname>
<given-names>W. H.</given-names>
</name>
</person-group>
(
<year>1984</year>
).
<article-title>Scalar timing in memory</article-title>
.
<source>Ann. N.Y. Acad. Sci</source>
.
<volume>423</volume>
,
<fpage>52</fpage>
<lpage>77</lpage>
<pub-id pub-id-type="doi">10.1111/j.1749-6632.1984.tb23417.x</pub-id>
<pub-id pub-id-type="pmid">6588812</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gibbon</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Malapani</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Dale</surname>
<given-names>C. L.</given-names>
</name>
<name>
<surname>Gallistel</surname>
<given-names>C. R.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>Toward a neurobiology of temporal cognition: advances and challenges</article-title>
.
<source>Curr. Opin. Neurobiol</source>
.
<volume>7</volume>
,
<fpage>170</fpage>
<lpage>184</lpage>
<pub-id pub-id-type="doi">10.1016/S0959-4388(97)80005-0</pub-id>
<pub-id pub-id-type="pmid">9142762</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Helm</surname>
<given-names>N. A.</given-names>
</name>
</person-group>
(
<year>1979</year>
).
<article-title>Management of palilalia with a pacing board</article-title>
.
<source>J. Speech Hear. Disord</source>
.
<volume>44</volume>
,
<fpage>350</fpage>
<lpage>353</lpage>
<pub-id pub-id-type="pmid">480939</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Lécuyer</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Vidal</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Joly</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Megard</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Berthoz</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Can haptic feedback improve the perception of self-motion in virtual reality?</article-title>
, in
<source>Haptic Interfaces for Virtual Environment and Teleoperator Systems Haptics'04 Proceedings, 12th International Symposium</source>
, (
<publisher-loc>Chicago, IL</publisher-loc>
),
<fpage>208</fpage>
<lpage>215</lpage>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lee</surname>
<given-names>B. S.</given-names>
</name>
</person-group>
(
<year>1950</year>
).
<article-title>Some effects of side-tone delay</article-title>
.
<source>J. Acoust. Soc. Am</source>
.
<volume>22</volume>
,
<fpage>639</fpage>
<lpage>640</lpage>
<pub-id pub-id-type="doi">10.1121/1.1906665</pub-id>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lewis</surname>
<given-names>P. A.</given-names>
</name>
<name>
<surname>Miall</surname>
<given-names>R. C.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Remembering the time: a continuous clock</article-title>
.
<source>Trends Cogn. Sci</source>
.
<volume>10</volume>
,
<fpage>401</fpage>
<lpage>406</lpage>
<pub-id pub-id-type="doi">10.1016/j.tics.2006.07.006</pub-id>
<pub-id pub-id-type="pmid">16899395</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mauk</surname>
<given-names>M. D.</given-names>
</name>
<name>
<surname>Buonomano</surname>
<given-names>D. V.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>The neural basis of temporal processing</article-title>
.
<source>Annu. Rev. Neurosci</source>
.
<volume>27</volume>
,
<fpage>307</fpage>
<lpage>340</lpage>
<pub-id pub-id-type="doi">10.1146/annurev.neuro.27.070203.144247</pub-id>
<pub-id pub-id-type="pmid">15217335</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Meck</surname>
<given-names>W. H.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Neuropsychology of timing and time perception</article-title>
.
<source>Brain Cogn</source>
.
<volume>58</volume>
,
<fpage>1</fpage>
<lpage>8</lpage>
<pub-id pub-id-type="pmid">15878722</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Miller</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Maruyama</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Beaber</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Valone</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>1976</year>
).
<article-title>Speed of speech and persuasion</article-title>
.
<source>J. Pers. Soc. Psychol</source>
.
<volume>34</volume>
,
<fpage>615</fpage>
<lpage>624</lpage>
<pub-id pub-id-type="doi">10.1037/0022-3514.34.4.615</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Riecke</surname>
<given-names>B. E.</given-names>
</name>
<name>
<surname>Feuereissen</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Rieser</surname>
<given-names>J. J.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Auditory self-motion illusions (“circular vection”) can be facilitated by vibrations and the potential for actual motion, ACM</article-title>
, in
<source>APGV 2008 Conference Proceedings</source>
. (
<publisher-loc>Los Angeles, CA</publisher-loc>
),
<fpage>147</fpage>
<lpage>154</lpage>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sasaki</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Seno</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Yamada</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Miura</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Emotional sounds influence vertical vection</article-title>
.
<source>Perception</source>
<volume>41</volume>
,
<fpage>875</fpage>
<lpage>877</lpage>
<pub-id pub-id-type="doi">10.1068/p7215</pub-id>
<pub-id pub-id-type="pmid">23155739</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Seno</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Ito</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Sunaga</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2011a</year>
).
<article-title>Inconsistent locomotion inhibits vection</article-title>
.
<source>Perception</source>
<volume>40</volume>
,
<fpage>747</fpage>
<lpage>750</lpage>
<pub-id pub-id-type="doi">10.1068/p7018</pub-id>
<pub-id pub-id-type="pmid">21936303</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Seno</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Ogawa</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Ito</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Sunaga</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2011b</year>
).
<article-title>Consistent air flow to the face facilitates vection</article-title>
.
<source>Perception</source>
<volume>40</volume>
,
<fpage>1237</fpage>
<lpage>1240</lpage>
<pub-id pub-id-type="doi">10.1068/p7055</pub-id>
<pub-id pub-id-type="pmid">22308892</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Seno</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Kawabe</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Ito</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Sunaga</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Vection modulates emotional valence of autobiographical episodic memories</article-title>
.
<source>Cognition</source>
<volume>126</volume>
,
<fpage>115</fpage>
<lpage>120</lpage>
<pub-id pub-id-type="doi">10.1016/j.cognition.2012.08.009</pub-id>
<pub-id pub-id-type="pmid">23063264</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Smith</surname>
<given-names>B. L.</given-names>
</name>
<name>
<surname>Brown</surname>
<given-names>B. L.</given-names>
</name>
<name>
<surname>Strong</surname>
<given-names>W. J.</given-names>
</name>
<name>
<surname>Rencher</surname>
<given-names>A. C.</given-names>
</name>
</person-group>
(
<year>1975</year>
).
<article-title>Effects of speech rate on personality perception</article-title>
.
<source>Lang. Speech</source>
<volume>18</volume>
,
<fpage>145</fpage>
<lpage>152</lpage>
<pub-id pub-id-type="pmid">1195957</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Watanabe</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Behavioral speed contagion: automatic modulation of movement timing by observation of body movements</article-title>
.
<source>Cognition</source>
<volume>106</volume>
,
<fpage>1514</fpage>
<lpage>1524</lpage>
<pub-id pub-id-type="pmid">17612518</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Wright</surname>
<given-names>W. G.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Linear vection in virtual environments can be strengthened by discordant inertial input</article-title>
, in
<source>31st Annual International Conference of the IEEE (EMBS)</source>
. (
<publisher-loc>Minneapolis</publisher-loc>
),
<fpage>1157</fpage>
<lpage>1160</lpage>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yates</surname>
<given-names>A. J.</given-names>
</name>
</person-group>
(
<year>1963</year>
).
<article-title>Delayed auditory feedback</article-title>
<source>Psychol. Bull</source>
.
<volume>60</volume>
,
<fpage>213</fpage>
<lpage>232</lpage>
<pub-id pub-id-type="doi">10.1037/h0044155</pub-id>
<pub-id pub-id-type="pmid">14002534</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>Japon</li>
</country>
</list>
<tree>
<country name="Japon">
<noRegion>
<name sortKey="Seno, Takeharu" sort="Seno, Takeharu" uniqKey="Seno T" first="Takeharu" last="Seno">Takeharu Seno</name>
</noRegion>
<name sortKey="Ihaya, Keiko" sort="Ihaya, Keiko" uniqKey="Ihaya K" first="Keiko" last="Ihaya">Keiko Ihaya</name>
<name sortKey="Seno, Takeharu" sort="Seno, Takeharu" uniqKey="Seno T" first="Takeharu" last="Seno">Takeharu Seno</name>
<name sortKey="Seno, Takeharu" sort="Seno, Takeharu" uniqKey="Seno T" first="Takeharu" last="Seno">Takeharu Seno</name>
<name sortKey="Yamada, Yuki" sort="Yamada, Yuki" uniqKey="Yamada Y" first="Yuki" last="Yamada">Yuki Yamada</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002876 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 002876 | SxmlIndent | more

To put a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     PMC:3738860
   |texte=   I speak fast when I move fast: the speed of illusory self-motion (vection) modulates the speed of utterances
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:23950749" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024