Exploration server on Celtic music

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

With diversity in mind: Freeing the language sciences from Universal Grammar

Internal identifier: 001B16 (Istex/Corpus); previous: 001B15; next: 001B17


Authors: Nicholas Evans; Stephen C. Levinson

Source:

RBID : ISTEX:2B00B7FA5F889F68E543BBA70D94C693F32021E4

Abstract

Our response takes advantage of the wide-ranging commentary to clarify some aspects of our original proposal and augment others. We argue against the generative critics of our coevolutionary program for the language sciences, defend the use of close-to-surface models as minimizing cross-linguistic data distortion, and stress the growing role of stochastic simulations in making generalized historical accounts testable. These methods lead the search for general principles away from idealized representations and towards selective processes. Putting cultural evolution central in understanding language diversity makes learning fundamental in the cognition of language: increasingly powerful models of general learning, paired with channelled caregiver input, seem set to manage language acquisition without recourse to any innate “universal grammar.” Understanding why human language has no clear parallels in the animal world requires a cross-species perspective: crucial ingredients are vocal learning (for which there are clear non-primate parallels) and an intention-attributing cognitive infrastructure that provides a universal base for language evolution. We conclude by situating linguistic diversity within a broader trend towards understanding human cognition through the study of variation in, for example, human genetics, neurocognition, and psycholinguistic processing.

URL:
DOI: 10.1017/S0140525X09990525

Links to Exploration step

ISTEX:2B00B7FA5F889F68E543BBA70D94C693F32021E4

The document in XML format

<record>
<TEI wicri:istexFullTextTei="biblStruct">
<teiHeader>
<fileDesc>
<titleStmt>
<title>With diversity in mind: Freeing the language sciences from Universal Grammar</title>
<author>
<name sortKey="Evans, Nicholas" sort="Evans, Nicholas" uniqKey="Evans N" first="Nicholas" last="Evans">Nicholas Evans</name>
<affiliation>
<mods:affiliation>Department of Linguistics, Research School of Asian and Pacific Studies, Australian National University, ACT 0200, Australia nicholas.evans@anu.edu.au http://rspas.anu.edu.au/people/personal/evann_ling.php</mods:affiliation>
</affiliation>
<affiliation>
<mods:affiliation>E-mail: nicholas.evans@anu.edu.au</mods:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Levinson, Stephen C" sort="Levinson, Stephen C" uniqKey="Levinson S" first="Stephen C." last="Levinson">Stephen C. Levinson</name>
<affiliation>
<mods:affiliation>Max Planck Institute for Psycholinguistics, Wundtlaan 1, NL-6525 XD Nijmegen, The Netherlands; and Radboud University, The Netherlands. stephen.levinson@mpi.nl http://www.mpi.nl/Members/StephenLevinson</mods:affiliation>
</affiliation>
<affiliation>
<mods:affiliation>E-mail: stephen.levinson@mpi.nl</mods:affiliation>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">ISTEX</idno>
<idno type="RBID">ISTEX:2B00B7FA5F889F68E543BBA70D94C693F32021E4</idno>
<date when="2009" year="2009">2009</date>
<idno type="doi">10.1017/S0140525X09990525</idno>
<idno type="url">https://api.istex.fr/document/2B00B7FA5F889F68E543BBA70D94C693F32021E4/fulltext/pdf</idno>
<idno type="wicri:Area/Istex/Corpus">001B16</idno>
<idno type="wicri:explorRef" wicri:stream="Istex" wicri:step="Corpus" wicri:corpus="ISTEX">001B16</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title level="a">With diversity in mind: Freeing the language sciences from Universal Grammar</title>
<author>
<name sortKey="Evans, Nicholas" sort="Evans, Nicholas" uniqKey="Evans N" first="Nicholas" last="Evans">Nicholas Evans</name>
<affiliation>
<mods:affiliation>Department of Linguistics, Research School of Asian and Pacific Studies, Australian National University, ACT 0200, Australia nicholas.evans@anu.edu.au http://rspas.anu.edu.au/people/personal/evann_ling.php</mods:affiliation>
</affiliation>
<affiliation>
<mods:affiliation>E-mail: nicholas.evans@anu.edu.au</mods:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Levinson, Stephen C" sort="Levinson, Stephen C" uniqKey="Levinson S" first="Stephen C." last="Levinson">Stephen C. Levinson</name>
<affiliation>
<mods:affiliation>Max Planck Institute for Psycholinguistics, Wundtlaan 1, NL-6525 XD Nijmegen, The Netherlands; and Radboud University, The Netherlands. stephen.levinson@mpi.nl http://www.mpi.nl/Members/StephenLevinson</mods:affiliation>
</affiliation>
<affiliation>
<mods:affiliation>E-mail: stephen.levinson@mpi.nl</mods:affiliation>
</affiliation>
</author>
</analytic>
<monogr></monogr>
<series>
<title level="j">Behavioral and Brain Sciences</title>
<title level="j" type="abbrev">Behav Brain Sci</title>
<idno type="ISSN">0140-525X</idno>
<idno type="eISSN">1469-1825</idno>
<imprint>
<publisher>Cambridge University Press</publisher>
<pubPlace>New York, USA</pubPlace>
<date type="published" when="2009-10">2009-10</date>
<biblScope unit="volume">32</biblScope>
<biblScope unit="issue">5</biblScope>
<biblScope unit="page" from="472">472</biblScope>
<biblScope unit="page" to="492">492</biblScope>
</imprint>
<idno type="ISSN">0140-525X</idno>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt>
<idno type="ISSN">0140-525X</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass></textClass>
<langUsage>
<language ident="en">en</language>
</langUsage>
</profileDesc>
</teiHeader>
<front>
<div type="abstract">Our response takes advantage of the wide-ranging commentary to clarify some aspects of our original proposal and augment others. We argue against the generative critics of our coevolutionary program for the language sciences, defend the use of close-to-surface models as minimizing cross-linguistic data distortion, and stress the growing role of stochastic simulations in making generalized historical accounts testable. These methods lead the search for general principles away from idealized representations and towards selective processes. Putting cultural evolution central in understanding language diversity makes learning fundamental in the cognition of language: increasingly powerful models of general learning, paired with channelled caregiver input, seem set to manage language acquisition without recourse to any innate “universal grammar.” Understanding why human language has no clear parallels in the animal world requires a cross-species perspective: crucial ingredients are vocal learning (for which there are clear non-primate parallels) and an intention-attributing cognitive infrastructure that provides a universal base for language evolution. We conclude by situating linguistic diversity within a broader trend towards understanding human cognition through the study of variation in, for example, human genetics, neurocognition, and psycholinguistic processing.</div>
</front>
</TEI>
<istex>
<corpusName>cambridge</corpusName>
<author>
<json:item>
<name>Nicholas Evans</name>
<affiliations>
<json:string>Department of Linguistics, Research School of Asian and Pacific Studies, Australian National University, ACT 0200, Australia nicholas.evans@anu.edu.au http://rspas.anu.edu.au/people/personal/evann_ling.php</json:string>
<json:string>E-mail: nicholas.evans@anu.edu.au</json:string>
</affiliations>
</json:item>
<json:item>
<name>Stephen C. Levinson</name>
<affiliations>
<json:string>Max Planck Institute for Psycholinguistics, Wundtlaan 1, NL-6525 XD Nijmegen, The Netherlands; and Radboud University, The Netherlands. stephen.levinson@mpi.nl http://www.mpi.nl/Members/StephenLevinson</json:string>
<json:string>E-mail: stephen.levinson@mpi.nl</json:string>
</affiliations>
</json:item>
</author>
<articleId>
<json:string>99052</json:string>
</articleId>
<arkIstex>ark:/67375/6GQ-458MCHXL-T</arkIstex>
<language>
<json:string>eng</json:string>
</language>
<originalGenre>
<json:string>research-article</json:string>
</originalGenre>
<abstract>Our response takes advantage of the wide-ranging commentary to clarify some aspects of our original proposal and augment others. We argue against the generative critics of our coevolutionary program for the language sciences, defend the use of close-to-surface models as minimizing cross-linguistic data distortion, and stress the growing role of stochastic simulations in making generalized historical accounts testable. These methods lead the search for general principles away from idealized representations and towards selective processes. Putting cultural evolution central in understanding language diversity makes learning fundamental in the cognition of language: increasingly powerful models of general learning, paired with channelled caregiver input, seem set to manage language acquisition without recourse to any innate “universal grammar.” Understanding why human language has no clear parallels in the animal world requires a cross-species perspective: crucial ingredients are vocal learning (for which there are clear non-primate parallels) and an intention-attributing cognitive infrastructure that provides a universal base for language evolution. We conclude by situating linguistic diversity within a broader trend towards understanding human cognition through the study of variation in, for example, human genetics, neurocognition, and psycholinguistic processing.</abstract>
<qualityIndicators>
<score>9.196</score>
<pdfWordCount>63951</pdfWordCount>
<pdfCharCount>400946</pdfCharCount>
<pdfVersion>1.3</pdfVersion>
<pdfPageCount>64</pdfPageCount>
<pdfPageSize>602.986 x 792 pts</pdfPageSize>
<refBibsNative>true</refBibsNative>
<abstractWordCount>183</abstractWordCount>
<abstractCharCount>1385</abstractCharCount>
<keywordCount>0</keywordCount>
</qualityIndicators>
<title>With diversity in mind: Freeing the language sciences from Universal Grammar</title>
<pii>
<json:string>S0140525X09990525</json:string>
</pii>
<genre>
<json:string>research-article</json:string>
</genre>
<host>
<title>Behavioral and Brain Sciences</title>
<language>
<json:string>unknown</json:string>
</language>
<issn>
<json:string>0140-525X</json:string>
</issn>
<eissn>
<json:string>1469-1825</json:string>
</eissn>
<publisherId>
<json:string>BBS</json:string>
</publisherId>
<volume>32</volume>
<issue>5</issue>
<pages>
<first>472</first>
<last>492</last>
<total>23</total>
</pages>
<genre>
<json:string>journal</json:string>
</genre>
</host>
<ark>
<json:string>ark:/67375/6GQ-458MCHXL-T</json:string>
</ark>
<categories>
<wos>
<json:string>1 - social science</json:string>
<json:string>2 - psychology, biological</json:string>
<json:string>1 - science</json:string>
<json:string>2 - neurosciences</json:string>
<json:string>2 - behavioral sciences</json:string>
</wos>
<scienceMetrix>
<json:string>1 - health sciences</json:string>
<json:string>2 - psychology & cognitive sciences</json:string>
<json:string>3 - experimental psychology</json:string>
</scienceMetrix>
<inist>
<json:string>1 - sciences humaines et sociales</json:string>
</inist>
<scopus>
<json:string>1 - Life Sciences</json:string>
<json:string>2 - Neuroscience</json:string>
<json:string>3 - Behavioral Neuroscience</json:string>
<json:string>1 - Life Sciences</json:string>
<json:string>2 - Biochemistry, Genetics and Molecular Biology</json:string>
<json:string>3 - Physiology</json:string>
<json:string>1 - Social Sciences</json:string>
<json:string>2 - Psychology</json:string>
<json:string>3 - Neuropsychology and Physiological Psychology</json:string>
</scopus>
</categories>
<publicationDate>2009</publicationDate>
<copyrightDate>2009</copyrightDate>
<doi>
<json:string>10.1017/S0140525X09990525</json:string>
</doi>
<id>2B00B7FA5F889F68E543BBA70D94C693F32021E4</id>
<score>1</score>
<fulltext>
<json:item>
<extension>pdf</extension>
<original>true</original>
<mimetype>application/pdf</mimetype>
<uri>https://api.istex.fr/document/2B00B7FA5F889F68E543BBA70D94C693F32021E4/fulltext/pdf</uri>
</json:item>
<json:item>
<extension>zip</extension>
<original>false</original>
<mimetype>application/zip</mimetype>
<uri>https://api.istex.fr/document/2B00B7FA5F889F68E543BBA70D94C693F32021E4/fulltext/zip</uri>
</json:item>
<json:item>
<extension>txt</extension>
<original>false</original>
<mimetype>text/plain</mimetype>
<uri>https://api.istex.fr/document/2B00B7FA5F889F68E543BBA70D94C693F32021E4/fulltext/txt</uri>
</json:item>
<istex:fulltextTEI uri="https://api.istex.fr/document/2B00B7FA5F889F68E543BBA70D94C693F32021E4/fulltext/tei">
<teiHeader>
<fileDesc>
<titleStmt>
<title level="a">With diversity in mind: Freeing the language sciences from Universal Grammar</title>
</titleStmt>
<publicationStmt>
<authority>ISTEX</authority>
<publisher scheme="https://publisher-list.data.istex.fr">Cambridge University Press</publisher>
<pubPlace>New York, USA</pubPlace>
<availability>
<licence>
<p>Copyright © Cambridge University Press 2009</p>
</licence>
<p scheme="https://loaded-corpus.data.istex.fr/ark:/67375/XBH-G3RCRD03-V">cambridge</p>
</availability>
<date>2009</date>
</publicationStmt>
<notesStmt>
<note type="research-article" scheme="https://content-type.data.istex.fr/ark:/67375/XTP-1JC4F85T-7">research-article</note>
<note type="journal" scheme="https://publication-type.data.istex.fr/ark:/67375/JMC-0GLKJH51-B">journal</note>
</notesStmt>
<sourceDesc>
<biblStruct type="inbook">
<analytic>
<title level="a">With diversity in mind: Freeing the language sciences from Universal Grammar</title>
<author xml:id="author-0000">
<persName>
<forename type="first">Nicholas</forename>
<surname>Evans</surname>
</persName>
<email>nicholas.evans@anu.edu.au</email>
<affiliation>Department of Linguistics, Research School of Asian and Pacific Studies, Australian National University, ACT 0200, Australia nicholas.evans@anu.edu.au http://rspas.anu.edu.au/people/personal/evann_ling.php</affiliation>
</author>
<author xml:id="author-0001">
<persName>
<forename type="first">Stephen C.</forename>
<surname>Levinson</surname>
</persName>
<email>stephen.levinson@mpi.nl</email>
<affiliation>Max Planck Institute for Psycholinguistics, Wundtlaan 1, NL-6525 XD Nijmegen, The Netherlands; and Radboud University, The Netherlands. stephen.levinson@mpi.nl http://www.mpi.nl/Members/StephenLevinson</affiliation>
</author>
<idno type="istex">2B00B7FA5F889F68E543BBA70D94C693F32021E4</idno>
<idno type="ark">ark:/67375/6GQ-458MCHXL-T</idno>
<idno type="DOI">10.1017/S0140525X09990525</idno>
<idno type="PII">S0140525X09990525</idno>
<idno type="article-id">99052</idno>
<idno type="related-article-ID">S0140525X0999094X</idno>
</analytic>
<monogr>
<title level="j">Behavioral and Brain Sciences</title>
<title level="j" type="abbrev">Behav Brain Sci</title>
<idno type="pISSN">0140-525X</idno>
<idno type="eISSN">1469-1825</idno>
<idno type="publisher-id">BBS</idno>
<imprint>
<publisher>Cambridge University Press</publisher>
<pubPlace>New York, USA</pubPlace>
<date type="published" when="2009-10"></date>
<biblScope unit="volume">32</biblScope>
<biblScope unit="issue">5</biblScope>
<biblScope unit="page" from="472">472</biblScope>
<biblScope unit="page" to="492">492</biblScope>
</imprint>
</monogr>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<creation>
<date>2009</date>
</creation>
<langUsage>
<language ident="en">en</language>
</langUsage>
<abstract style="normal">
<p>Our response takes advantage of the wide-ranging commentary to clarify some aspects of our original proposal and augment others. We argue against the generative critics of our coevolutionary program for the language sciences, defend the use of close-to-surface models as minimizing cross-linguistic data distortion, and stress the growing role of stochastic simulations in making generalized historical accounts testable. These methods lead the search for general principles away from idealized representations and towards selective processes. Putting cultural evolution central in understanding language diversity makes learning fundamental in the cognition of language: increasingly powerful models of general learning, paired with channelled caregiver input, seem set to manage language acquisition without recourse to any innate “universal grammar.” Understanding why human language has no clear parallels in the animal world requires a cross-species perspective: crucial ingredients are vocal learning (for which there are clear non-primate parallels) and an intention-attributing cognitive infrastructure that provides a universal base for language evolution. We conclude by situating linguistic diversity within a broader trend towards understanding human cognition through the study of variation in, for example, human genetics, neurocognition, and psycholinguistic processing.</p>
</abstract>
</profileDesc>
<revisionDesc>
<change when="2009-10">Published</change>
</revisionDesc>
</teiHeader>
</istex:fulltextTEI>
</fulltext>
<metadata>
<istex:metadataXml wicri:clean="corpus cambridge not found" wicri:toSee="no header">
<istex:xmlDeclaration>version="1.0" encoding="US-ASCII"</istex:xmlDeclaration>
<istex:docType PUBLIC="-//NLM//DTD Journal Publishing DTD v2.2 20060430//EN" URI="journalpublishing.dtd" name="istex:docType"></istex:docType>
<istex:document>
<article dtd-version="2.2" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">BBS</journal-id>
<journal-title>Behavioral and Brain Sciences</journal-title>
<abbrev-journal-title>Behav Brain Sci</abbrev-journal-title>
<issn pub-type="ppub">0140-525X</issn>
<issn pub-type="epub">1469-1825</issn>
<publisher>
<publisher-name>Cambridge University Press</publisher-name>
<publisher-loc>New York, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.1017/S0140525X09990525</article-id>
<article-id pub-id-type="pii">S0140525X09990525</article-id>
<article-id pub-id-type="publisher-id">99052</article-id>
<title-group>
<article-title>With diversity in mind: Freeing the language sciences from Universal Grammar</article-title>
<alt-title alt-title-type="left-running">
<italic>Response</italic>
/Evans & Levinson: The myth of language universals</alt-title>
<alt-title alt-title-type="right-running">
<italic>Response</italic>
/Evans & Levinson: The myth of language universals</alt-title>
</title-group>
<contrib-group>
<contrib>
<name>
<surname>Evans</surname>
<given-names>Nicholas</given-names>
</name>
<xref ref-type="aff" rid="aff1">a</xref>
</contrib>
<contrib>
<name>
<surname>Levinson</surname>
<given-names>Stephen C.</given-names>
</name>
<xref ref-type="aff" rid="aff2">b</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<label>
<sup>a</sup>
</label>
<addr-line>Department of Linguistics</addr-line>
,
<institution>Research School of Asian and Pacific Studies, Australian National University</institution>
,
<addr-line>ACT 0200</addr-line>
,
<country>Australia</country>
<email xlink:href="nicholas.evans@anu.edu.au">nicholas.evans@anu.edu.au</email>
<uri xlink:href="http://rspas.anu.edu.au/people/personal/evann_ling.php">http://rspas.anu.edu.au/people/personal/evann_ling.php</uri>
</aff>
<aff id="aff2">
<label>
<sup>b</sup>
</label>
<institution>Max Planck Institute for Psycholinguistics</institution>
,
<addr-line>Wundtlaan 1, NL-6525 XD Nijmegen</addr-line>
,
<country>The Netherlands</country>
; and
<institution>Radboud University</institution>
,
<country>The Netherlands</country>
.
<email xlink:href="stephen.levinson@mpi.nl">stephen.levinson@mpi.nl</email>
<uri xlink:href="http://www.mpi.nl/Members/StephenLevinson">http://www.mpi.nl/Members/StephenLevinson</uri>
</aff>
<pub-date pub-type="ppub">
<month>10</month>
<year>2009</year>
</pub-date>
<volume>32</volume>
<issue>5</issue>
<fpage seq="47">472</fpage>
<lpage>492</lpage>
<permissions>
<copyright-statement>Copyright © Cambridge University Press 2009</copyright-statement>
<copyright-year>2009</copyright-year>
<copyright-holder>Cambridge University Press</copyright-holder>
</permissions>
<related-article related-article-type="commentary-article" journal-id="BBS" journal-id-type="publisher-id" vol="32" issue="5" id="S0140525X0999094X" page="429">
<article-title>The myth of language universals: Language diversity and its importance for cognitive science</article-title>
<name>
<surname>Evans</surname>
<given-names>Nicholas</given-names>
</name>
and
<name>
<surname>Levinson</surname>
<given-names>Stephen C.</given-names>
</name>
Department of Linguistics, Research School of Asian and Pacific Studies, Australian National University, ACT 0200, Australia.
<email xlink:href="nicholas.evans@anu.edu.au">nicholas.evans@anu.edu.au</email>
<uri xlink:href="http://rspas.anu.edu.au/people/personal/evann_ling.php">http://rspas.anu.edu.au/people/personal/evann_ling.php</uri>
; Max Planck Institute for Psycholinguistics, Wundtlaan 1, NL-6525 XD Nijmegen, The Netherlands; and Radboud University, Department of Linguistics, Nijmegen, The Netherlands.
<email xlink:href="stephen.levinson@mpi.nl">stephen.levinson@mpi.nl</email>
<uri xlink:href="http://www.mpi.nl/Members/StephenLevinson">http://www.mpi.nl/Members/StephenLevinson</uri>
</related-article>
<abstract abstract-type="normal">
<title>Abstract</title>
<p>Our response takes advantage of the wide-ranging commentary to clarify some aspects of our original proposal and augment others. We argue against the generative critics of our coevolutionary program for the language sciences, defend the use of close-to-surface models as minimizing cross-linguistic data distortion, and stress the growing role of stochastic simulations in making generalized historical accounts testable. These methods lead the search for general principles away from idealized representations and towards selective processes. Putting cultural evolution central in understanding language diversity makes learning fundamental in the cognition of language: increasingly powerful models of general learning, paired with channelled caregiver input, seem set to manage language acquisition without recourse to any innate “universal grammar.” Understanding why human language has no clear parallels in the animal world requires a cross-species perspective: crucial ingredients are vocal learning (for which there are clear non-primate parallels) and an intention-attributing cognitive infrastructure that provides a universal base for language evolution. We conclude by situating linguistic diversity within a broader trend towards understanding human cognition through the study of variation in, for example, human genetics, neurocognition, and psycholinguistic processing.</p>
</abstract>
<counts>
<page-count count="23"></page-count>
</counts>
<custom-meta-wrap>
<custom-meta>
<meta-name>pdf</meta-name>
<meta-value>S0140525X09990525a.pdf</meta-value>
</custom-meta>
<custom-meta>
<meta-name>dispart</meta-name>
<meta-value>Authors' Response</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec id="sec1" sec-type="intro">
<label>R1.</label>
<title>Introduction</title>
<p>The purpose of our target article was to draw attention to linguistic diversity and its implications for theories of human cognition: Structural diversity at every level is not consonant with a theory of fixed innate language structure, but instead suggests remarkable cognitive plasticity and powerful learning mechanisms. We pointed out that human communication is the
<italic>only</italic>
animal communication system that varies in myriad ways in both form and meaning across the species, and this must be a central fact that should never be lost sight of.</p>
<p>The responses in the commentaries show that opinion in the language sciences, and especially in linguistics, is still sharply divided on “the myth of language universals,” or at least our telling of it. The comments of the typological and functional linguists (
<bold>Croft</bold>
,
<bold>Goldberg</bold>
,
<bold>Haspelmath</bold>
) show that much of our argument is already widely accepted there: “evolutionary linguistics is already here” (Croft). Positive responses from many commentators in experimental and cross-species comparative psychology suggest that researchers in experimental psychology and cross-species studies of communication are ready for the kind of coevolutionary, variability-centred approach we outlined (
<bold>Bavin</bold>
,
<bold>Catania</bold>
,
<bold>McMurray & Wasserman</bold>
,
<bold>Merker</bold>
,
<bold>Tomasello</bold>
, and
<bold>Waterfall & Edelman</bold>
). Generative linguists, by contrast, disagreed sharply with our presentation, laying bare some fundamental differences in how linguistics is conceived as a science.
<xref ref-type="fn" rid="en01">
<sup>1</sup>
</xref>
</p>
<p>We have organized the response as follows.
<list list-type="simple">
<list-item>
<p>Section R2 responds to the critical comments from the generative camp, suggesting that the assumptions behind many of these responses are misplaced.</p>
</list-item>
<list-item>
<p>Section R3 looks at the question of whether we have overstated the range of diversity by ignoring unpopulated regions of the design space.</p>
</list-item>
<list-item>
<p>Section R4 takes the commentaries from the non-generative linguists and the psychological, animal behavior, and computational learning research communities, which were overwhelmingly positive, and indicates how these might be used to round out, or in places correct, our position.</p>
</list-item>
<list-item>
<p>Section R5 sketches where we think these new developments are heading, and their relationship to what else is happening in the cognitive sciences.</p>
</list-item>
</list>
</p>
<p>We set aside the specific data questions till an appendix at the end, where we concede two factual mistakes, clarify disputed facts and generalizations, and examine more specific linguistic points that would bog down the main argument – on nearly all of them, we think the criticisms from our generativist colleagues do not stand up to scrutiny.</p>
</sec>
<sec id="sec2">
<label>R2.</label>
<title>Incompatible evaluation metrics reflect different paradigms</title>
<p>It was never our intention to engage in mud-slinging with our generative colleagues, but as
<bold>Tomasello</bold>
has predicted there was a certain inevitability that familiar sibling quarrels would be rerun. Most of the criticisms from the generative camp reflect deep differences between generative and typological/functionalist approaches in their overall assumptions about many issues. Where do we locate causal explanations? Where do we seek the general unifying laws behind surface diversity – in structure or in process? Do we use only discrete mathematical models (favoring regularized representations), or do we bring in continuous and stochastic models as well (favoring representations sticking closer to surface variety)? Should generalizations purport to directly represent mental reality, or are they modelling the summed information of thousands of different coevolutionary products shaped by multiple selective factors? Should we adopt essentializing categorizations (as “formal universals”), or abandon these as misleading and adopt a strategy that measures surface diversity directly so as not to lose data that is useful for evaluating the fit of models?</p>
<p>Generative and typological/functionalist accounts will give different answers to each of these questions, and it is this difference in overall scientific paradigm that accounts for the seemingly irreconcilable conflict between generativist commentators like
<bold>Freidin</bold>
and
<bold>Pesetsky</bold>
, who see our proposals as so imprecise as to be unfalsifiable, and psychologists like
<bold>Tomasello</bold>
and
<bold>Margoliash & Nusbaum</bold>
, for whom it is the generative approach that has moved away from falsifiability.</p>
<p>To clarify these differences, we try here to give a brief and constructive account of where the real differences lie (as
<bold>Pullum & Scholz</bold>
opine, more could be fruitless). The generativist critique includes the following interlinked charges:
<list list-type="number">
<list-item>
<label>1.</label>
<p>Lack of theory, precise representation, or falsifiability (
<bold>Smolensky & Dupoux</bold>
,
<bold>Freidin</bold>
)</p>
</list-item>
<list-item>
<label>2.</label>
<p>Mistaken ontology, mistaking behavior for cognition and (a point we hold off till sect. R4.1) history for science (
<bold>Smolensky & Dupoux</bold>
)</p>
</list-item>
<list-item>
<label>3.</label>
<p>Lack of abstractness – that we are misled by surface variation into ignoring underlying structural regularities (
<bold>Baker</bold>
,
<bold>Harbour</bold>
)</p>
</list-item>
<list-item>
<label>4.</label>
<p>That taking surface diversity at face value leads away from the quest for general principles (
<bold>Smolensky & Dupoux</bold>
,
<bold>Nevins</bold>
)</p>
</list-item>
<list-item>
<label>5.</label>
<p>That we have neglected the presence of “formal universals” (
<bold>Nevins</bold>
)</p>
</list-item>
<list-item>
<label>6.</label>
<p>That the typologists' preference for using a non-abstract descriptive apparatus is the wrong methodological choice (
<bold>Rizzi</bold>
)</p>
</list-item>
<list-item>
<label>7.</label>
<p>That we have merely presented an under-analyzed
<italic>Wunderkammer</italic>
of variation that can be shown to reduce to well-known phenomena (
<bold>Pesetsky</bold>
).</p>
</list-item>
</list>
</p>
<p>We now take up these issues one at a time. A further criticism, that we may have overstated the range of diversity by ignoring the fact that languages all lie within a bounded corner of the possibility space (
<bold>Pinker & Jackendoff</bold>
,
<bold>Tallerman</bold>
) is dealt with separately in section R3.</p>
<sec id="sec2-1">
<label>R.2.1.</label>
<title>What kind of theory?</title>
<p>
<bold>Smolensky & Dupoux</bold>
and
<bold>Freidin</bold>
complain that we did not offer a fully articulated theory with precise predictions about sentential structure. But that was not what we set out to do. Our goal was to survey our current understanding of language variation, explain its import for the cognitive sciences, and outline a fertile area for future research. We sketched the kind of biological models into which this variation neatly seems to fit and the ones that invite future development in a number of directions. A lot of these materials and ideas have not been sufficiently taken into account, we felt, by researchers in the cognitive sciences. We were gently suggesting that the time has come for a paradigm change, and at the end of this response we will say a little more.</p>
<p>Nevertheless, at the end of the target article we did sketch directions for future research (see also sect. R5 in this response). Commentators outside the generative camp (e.g.,
<bold>Waterfall & Edelman</bold>
,
<bold>Christiansen & Chater</bold>
) in many cases saw no difficulty in deriving a range of predictions or consequences, and indeed saw the target article as “mov[ing] the study of language in the direction of the research methods of the experimental sciences and away from enclosed personal belief systems” (
<bold>Margoliash & Nusbaum</bold>
).</p>
<p>The radically different assessments of empirical grounding here reflect (we think) a narrow view of theory and evidence on the part of some of our critics. Within the language sciences there is a wide variety of theory – theories about language change (historical linguistics), language usage (pragmatics), microvariation within individual languages (sociolinguistics), language production, acquisition and comprehension (psycholinguistics), language variation (typology), and language structure (the traditional heart of linguistics), to name just a few. Generative theory is just one version of a theory of linguistic structure and representation, and it is marked by a lack of external explanatory variables, making no reference to function, use, or psychological or neural implementation. It has delivered important insights into linguistic complexity, but has now run into severely diminishing returns. It is time to look at the larger context and develop theories that are more responsive to “external” constraints, be they anatomical and neural, cognitive, functional, cultural, or historical. Here we think an evolutionary framework has a great deal to offer in the multiple directions we sketched.</p>
<p>We pointed out the central fact that the human communication system is characterized by a diversity in form and meaning that has no parallel in the animal kingdom. Generative theory has never come fully to terms with this, and a theory of universal grammar that isn't answerable to linguistic variation consequently has distinctly limited appeal.</p>
</sec>
<sec id="sec2-2">
<label>R.2.2.</label>
<title>Cognition, behavior, and representation</title>
<p>Various Chomskyan dichotomies (competence vs. performance, i-language vs. e-language,
<bold>Smolensky & Dupoux</bold>
's cog-universals vs. des-universals) have been used to drive a wedge between cognition and behavior. There are distinct dangers in this.</p>
<p>First, most cognitive scientists will agree that cognition exists to service perception and behavior. Second, the evidence for cognition remains behavioral and perceptual (even when we can look at the functioning brain in vivo, we look at its response to an event), and most cognitive scientists will want all theories measured ultimately in terms of predictions over brain events and behavior or response (as the very title of this journal suggests; cf.
<bold>Margoliash & Nusbaum</bold>
). Third, many cognitive scientists view favorably the new “embodiment” perspectives which blur the line between representation and process.</p>
<p>Chomsky, in his initial work on formal grammars, suggested that the descriptive apparatus chosen to model language should be just sufficient and not more powerful than is required – in that way, some match to cognition may be achieved. From then on, in the generative tradition there has been a systematic conflation between the language of description and what is attributed to the language learner and user: “the brains of all speakers represent a shared set of grammatical categories” (
<bold>Berent</bold>
), and “formal universals in phonology are constituted by the analytic elements that human minds employ in constructing representations of sound structure” (
<bold>Nevins</bold>
).</p>
<p>Many generativist approaches – particularly parametric formulations – consequently attribute cognitive reality to conditionals of the form “if structural decision X, then also structural decision Y” or “learning X is universally easier than learning Y” (essentially
<bold>Nevins</bold>
' Example [1]). No language typologist would maintain that conditional regularities of this type are to be found in speakers' heads. Yet this is precisely what is claimed in the OT (Optimality Theory) framework advocated by
<bold>Smolensky & Dupoux</bold>
:
<disp-quote>
<p>OT … is inherently typological: the grammar of one language inevitably incorporates claims about the grammars of all languages. This joining of the individual and the universal, which OT accomplishes through ranking permutation, is probably the most important insight of the theory. (McCarthy
<xref ref-type="bibr" rid="ref42">2002</xref>
, p. 1)</p>
</disp-quote>
</p>
<p>To make this work, an infinite set of possible sentences is first generated and then filtered by (among other things) comparisons of this type. Instead of putting the filtering where it belongs, in cultural system evolution across generations, OT effectively burdens each individual mind with a précis of the functional history of all known human languages, and loads the entire optimization process onto on-line grammatical computation. This is not just cognitively unrealistic – it is computationally intractable (Idsardi
<xref ref-type="bibr" rid="ref32">2006</xref>
).</p>
<p>This conflation of the metalanguage with the object of description is a peculiar trick of the generative tradition. By going down this path, it has opened up a huge gap between theory and the behavioral data that would verify it. The complex representational structures look undermotivated, and covert processes proliferate where alternative models deftly avoid them (see the discussion of Subjacency and covert movement in sect. R6.8).</p>
<p>A biologist does not assume that a snail maintains an internalized representation of the mathematical equations that describe the helical growth of its shell. Even for the internal characterization of a mental faculty, the strategy is odd: computer scientists interested in characterizing the properties of programming languages use a more general auxiliary language to describe them, as in Scott-Strachey denotational semantics. Once explanatory theories hook external factors (e.g., psycholinguistic or evolutionary factors) into the account, this conflation of cognition and metalanguage must be dropped.</p>
<p>
<bold>Smolensky & Dupoux</bold>
's aphorism “Generative grammar merits a central place in cognitive science because its topic is cognition and its method is science,” then, will not find universal approval: other branches of linguistics are much more in tune with psychological reality as reflected in language acquisition, production, and comprehension. Nor has generative grammar of the Chomskyan variety been particularly successful as an explicit theory of linguistic representation. Many other representational formats, such as HPSG and LFG, have had greater uptake in computational circles (see, e.g., Butt et al.
<xref ref-type="bibr" rid="ref9">2006</xref>
; Reuer
<xref ref-type="bibr" rid="ref49">2004</xref>
). LFG, for example, adopts a parallel constraint-based architecture that includes dependency as well as constituency relations. This allows for the direct representation of crucial types of variability discussed in sect. 5 of our target article, while avoiding the need for movement rules or large numbers of empty nodes (see sect. R6.8 for further discussion of how this works for subjacency). These formats, which represent different outgrowths from the same generative roots, show that precise, testable, computationally tractable models of language can be developed that reflect cross-linguistic diversity much more directly in their architecture.</p>
</sec>
<sec id="sec2-3">
<label>R.2.3.</label>
<title>Abstractness and universal generalizations</title>
<p>A number of commentators (
<bold>Baker</bold>
,
<bold>Harbour</bold>
,
<bold>Nevins</bold>
,
<bold>Pesetsky</bold>
) felt that we were unwilling to entertain the sorts of abstract analyses which allow us to find unity in diversity. But we are simply pointing out that the proposals on the table haven't worked. Abstractness has a cost: the more unverifiable unobservables, the greater the explanatory payoff we expect in return. Judging the point where explanatory superstructure becomes epicyclic and unproductive may be tough, and the generative and non-generative camps clearly have different thresholds here. But the increasingly abstruse theoretical apparatus is like a spiralling loan that risks never being paid off by the theory's meagre empirical income (cf. Edelman & Christiansen
<xref ref-type="bibr" rid="ref15">2003</xref>
). Even the attempt to deal with the growing evidence of variability through the theory of parameters – projecting out diversity via a limited number of “switches” pre-provided in Universal Grammar (UG) – has empirically collapsed (Newmeyer 2004, p. 545), a point largely undisputed by our commentators (although
<bold>Rizzi</bold>
continues to use the notion – see the discussion of Subjacency in sect. R6.8).</p>
<p>All sciences search for underlying regularities – that's the game, and there is no branch of linguistics (least of all historical linguistics, with its laws of sound change) that is not a player. For this reason
<bold>Harbour</bold>
's commentary misses the target – of course some middle-level generalizations about the semantics of grammatical number are valid in any framework (although his account of the plural seems not to generalize beyond three participants, and there are additional problems that we discuss in sect. R6.4). The art is to find the highest-level generalization that still has empirical “bite.”</p>
</sec>
<sec id="sec2-4">
<label>R.2.4.</label>
<title>Recognizing structural diversity is not incompatible with seeking general laws</title>
<p>The criticisms by
<bold>Nevins, Pesetsky</bold>
, and
<bold>Smolensky & Dupoux</bold>
– that we are not interested in seeking deeper laws behind the surface variation in linguistic structure – reveal a failure to understand the typological/functional approach. In a coevolutionary model the underlying regularities in the cross-linguistic landscape are sought in the vocal-auditory, cognitive, sociolinguistic, functional, and acquisitional selectors which favor the development of some structures over others. The goal is to seek a constrained set of motivated selectors (each testable) that filter what structures can be learned, processed, and transmitted. The stochastic nature of the selection process, and the interaction and competition between multiple selectors, accounts for the characteristic balance we find, of recurrent but not universal patterns with marked diversity in the outliers.</p>
<p>Phonological structures, for example, will be favored to the extent that they can be easily said, easily heard, and easily learned.
<xref ref-type="fn" rid="en02">
<sup>2</sup>
</xref>
But these targets regularly conflict, as when streamlined articulation weakens perceptual contrasts or creates formal alternations that are harder to learn. In fact it has been a key insight of optimality theory (OT) that many competing factors need to be juggled, but that not all are equally potent and most can be “non-fatally violated.” The different weightings of these “constraints” generate a kaleidoscope of language-specific configurations, and modelling their interaction has been a strong appeal of the OT program. But the constraints identified by OT are more fruitfully treated as the sorts of scalar processing effects sketched in
<bold>Haspelmath</bold>
's commentary. The typological sweep, like OT phonology, aims at a comprehensive documentation of all such constraints and their interactions, finding languages in which individual effects can best be isolated or recombined, with laboratory phonology studying why each effect occurs.</p>
<p>The line of attack that “languages share structural similarities often masked by one of their differences” (
<bold>Pesetsky</bold>
) thus misses the point of why it is useful to confront diversity head on. Like generative theory, the program we have outlined seeks to discover the general behind the particular. But it differs in where we seek the general laws. For our generativist critics, generality is to be found at the level of structural representation; for us, at the level of process. Our claim, in Darwinian mode, is that the unity of evolutionary mechanisms can best be discerned by reckoning with the full diversity of evolutionary products and processes.</p>
</sec>
<sec id="sec2-5">
<label>R.2.5.</label>
<title>Non-abstract representations preserve information</title>
<p>
<bold>Rizzi</bold>
suggests that the typologist's strategy of using an “extremely impoverished, non-abstract descriptive apparatus” that takes diversity at face value in representing phenomena will have less success than the generative program in establishing universal patterns. Yet, as the burden of explanation for cross-linguistic patterning is moved out of the prewired mind and into the evolution of individual language systems under selection from the sorts of factors outlined earlier, the most appropriate mathematical models employ stochastic and continuous methods rather than the discrete methods that have characterized the generative tradition (Pierrehumbert et al. 2000). And once we employ these methods, there are positive benefits in “directly measuring the variation, instead of reducing it” (Bickel
<xref ref-type="bibr" rid="ref6">2009</xref>
): any other strategy risks degrading the information on which the methods are based.</p>
<p>Take the question of how perceptual discriminability and articulatory ease interact in the domain of vowel dispersion over the formant space to favor the emergence of some vowel systems over others. The classic study by Liljencrants and Lindblom (
<xref ref-type="bibr" rid="ref41">1972</xref>
) simulated the evolution of vowel systems over time under these twin selectional pressures and compared the results to the distribution of attested vowel inventories. The insights yielded by their model would not have been possible if the descriptions of vowel systems had been in terms of discrete binary features such as [front] and [round] rather than in terms of position in a continuous three-dimensional space based on formant frequencies.</p>
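The logic of such a simulation can be conveyed in a few lines. The sketch below is our own toy illustration, not Liljencrants and Lindblom's actual procedure: it places a fixed number of vowels at random in a two-dimensional F1–F2 formant space and applies random perturbations, keeping only those that increase the minimum pairwise distance, a crude proxy for perceptual distinctiveness (real models use auditory mel or Bark scales and also penalize articulatory effort):

```python
import math
import random

def dispersion(vowels):
    """Minimum pairwise Euclidean distance between vowels in (F1, F2) space."""
    return min(math.dist(a, b)
               for i, a in enumerate(vowels)
               for b in vowels[i + 1:])

def evolve_vowel_system(n_vowels, steps=5000, seed=0):
    """Hill-climb: keep random perturbations only if they improve dispersion."""
    rng = random.Random(seed)
    # Rough formant ranges: F1 250-850 Hz, F2 600-2500 Hz
    vowels = [(rng.uniform(250, 850), rng.uniform(600, 2500))
              for _ in range(n_vowels)]
    for _ in range(steps):
        i = rng.randrange(n_vowels)
        old = vowels[i]
        before = dispersion(vowels)
        f1 = min(850.0, max(250.0, old[0] + rng.gauss(0, 30)))
        f2 = min(2500.0, max(600.0, old[1] + rng.gauss(0, 30)))
        vowels[i] = (f1, f2)
        if dispersion(vowels) < before:
            vowels[i] = old  # reject changes that reduce perceptual contrast
    return vowels
```

With three vowels, such a system tends to drift toward the extremes of the space, approximating the cross-linguistically favored /i a u/ triangle – an outcome that a discrete feature description could not even express, let alone predict.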
<p>Staying close to the surface thus avoids the essentializing fallacy critiqued by
<bold>Goldberg</bold>
and
<bold>Croft</bold>
, while retaining the maximum information for matching against stochastic models of how general evolutionary processes interact to produce a scatter of different structural outcomes across the language sample.</p>
</sec>
<sec id="sec2-6">
<label>R.2.6.</label>
<title>Neglect of “formal universals”</title>
<p>We are criticized by
<bold>Nevins</bold>
for neglecting “formal universals” – “the analytic elements that human minds employ in constructing representations of sound structure … the available data structures (e.g., binary features, metrical grids, autosegmental tiers) and the possible operations on them that can be used in constructing a grammar of a language.” (See also our discussion in sect. R6.8 of Subjacency, as raised by
<bold>Smolensky & Dupoux</bold>
,
<bold>Freidin</bold>
, and
<bold>Rizzi</bold>
.)</p>
<p>Data structures like these have undoubted value in constructing formal representations of phonological phenomena. But, first, it does not follow that they are the actual representations that humans learn and use. As
<bold>Tomasello</bold>
and
<bold>Bavin</bold>
argue, increasingly powerful general pattern learning mechanisms suggest that many of the relevant phenomena can be managed without needing the representations that
<bold>Nevins</bold>
advocates. Second, even if such structures prove to have psychological reality, it does not follow that we are natively endowed with them. Take the general issue of discrete combinatoriality – the fact that languages recombine discrete units like consonants and vowels – which is relevant both to binary features (like ±consonantal) and, in many models, the separation of consonantal and vocalic elements onto distinct autosegmental tiers.
<xref ref-type="fn" rid="en03">
<sup>3</sup>
</xref>
Zuidema and De Boer (
<xref ref-type="bibr" rid="ref57">2009</xref>
) have used evolutionary game theory simulations to investigate the hypothesis that combinatorial phonology results from optimizing signal systems for perceptual distinctiveness. Selection for acoustic distinctiveness, defined in terms of the probability of confusion, leads along a path of increasing fitness from unstructured, holistic signals to structured signals that can be analyzed as combinatorial. Some very general assumptions – temporal structuring of signals and selection for acoustic distinctiveness – lead over time to the emergence of combinatorial signals from holistic origins.</p>
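The shape of such a simulation is easy to convey, though the following is our own toy sketch rather than Zuidema and De Boer's actual model. A population of signal repertoires (each signal a short trajectory of acoustic values in [0, 1]) evolves under selection for mutual distinctiveness; because the acoustic space is bounded, optimized repertoires tend to reuse a small set of extreme values across positions, a first step toward combinatorial structure:

```python
import math
import random

def min_dist(repertoire):
    """Fitness: minimum pairwise distance between signals (confusability proxy)."""
    return min(math.dist(a, b)
               for i, a in enumerate(repertoire)
               for b in repertoire[i + 1:])

def evolve_repertoires(pop_size=20, n_signals=8, length=3, gens=300, seed=1):
    """Elitist evolutionary loop: the fitter half survives and spawns mutants."""
    rng = random.Random(seed)
    pop = [[[rng.random() for _ in range(length)] for _ in range(n_signals)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=min_dist, reverse=True)
        survivors = pop[:pop_size // 2]
        mutants = [[[min(1.0, max(0.0, v + rng.gauss(0, 0.03))) for v in sig]
                    for sig in parent]
                   for parent in survivors]
        pop = survivors + mutants
    return max(pop, key=min_dist)
```

In the evolved repertoire the acoustic values tend to pile up near the edges of the space, so that signals come to share and recombine a small inventory of extreme "segments": selection for distinctiveness alone, with no innate feature set, pushes holistic signals toward discrete combinatoriality.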
<p>Should linguists use binary features and autosegmental tiers in the grammars and phonological descriptions they write? Sure, whenever they are useful and elegant. Do we need them to draw on a single, universal feature set to account for the mental representations that speakers have? Probably not, judging by the direction in which the psycholinguistic learning literature is headed. Do we need them to account for why languages all exhibit discrete combinatoriality? No – this can emerge through the sorts of processes that Zuidema and De Boer have modelled. Intriguingly, an empirical parallel has been identified in one new sign language: Meir et al. (in press) and Sandler et al. (
<xref ref-type="bibr" rid="ref51">2009</xref>
) show that duality of patterning has emerged only gradually over three generations of one Bedouin sign language variety.</p>
</sec>
<sec id="sec2-7">
<label>R.2.7.</label>
<title>An underanalyzed Wunderkammer of variation</title>
<p>A number of commentators charge us with producing a
<italic>Wunderkammer</italic>
of exotica (
<bold>Pesetsky</bold>
), intended more to dazzle than to illuminate. Pesetsky and
<bold>Tallerman</bold>
suggest that if properly analyzed these exotica will turn out just to be ordinary, universal-conforming languages. Both take up the issue of constituency, and argue that recent research finds it subtly present in languages previously claimed to lack it. A clarification is in order. There are two potential issues:
<list list-type="number">
<list-item>
<label>a.</label>
<p>Is constituency universal in the sense that all languages exhibit it somewhere in their systems, if even marginally?</p>
</list-item>
<list-item>
<label>b.</label>
<p>Is constituency universal in the sense that all languages use it as the main organizational principle of sentence structure and the main way of signalling grammatical relations?</p>
</list-item>
</list>
</p>
<p>Our target was (b) – different languages use different mixes, as has been well-modelled by approaches like LFG; but our commentators tend to target (a).</p>
<p>
<bold>Pesetsky</bold>
points out that Tlingit may after all have an initial slot into which constituents can be systematically shifted (we would need to know exactly what can go there, and whether that is actually predicted by a constituency analysis). But he is wrong in presenting Warlpiri as the “free word order language par excellence.” It is well known that Warlpiri places its auxiliary after the first constituent, and that when words are grouped together into a contiguous NP only the last word needs to carry case, instead of the usual pattern of inflecting every word. Neither of these properties, however, is found in Jiwarli (Austin & Bresnan 1996), which is why we chose it as our example.</p>
<p>The point about free word order languages, whether or not they have small islands of constituency, is that they
<italic>cannot be parsed by a constituency-based algorithm</italic>
as in most NLP (natural language processing) today, because they do not use constituency as the systematic organizing principle of sentence structure. If constituency is not the universal architecture for sentence structure, then the entire generative apparatus of c-command, bounding nodes, subjacency, and so forth collapses, since all are defined in terms of constituency. In this way
<bold>Tallerman</bold>
is wrong in thinking that parsing free word order is just like parsing English discontinuous constructions – the latter are allowed by rule, which sets up precise expectations of what comes next in what order.</p>
<p>Incidentally, the reader should note the argumentation of these rejoinders: that we, Evans & Levinson (E&L), have cherry-picked exotic facts about language A, but look, language B yields to the normal universal analysis, so there's no reason to take A seriously. Since absolute universals can be falsified by a single counterexample, it is a logical fallacy to defend a universal by adducing facts from some other language which happens not to violate it.</p>
<p>The seven general charges we have discussed capture, we think, most of the sources of disagreement.
<bold>Freidin</bold>
's commentary in particular indicates the deep rift in contemporary linguistics between Chomskyans and the rest, which ultimately rests on different judgements about the interlocking of theory and evidence. This is regrettable, as generative grammar has served to open up the “deep unconscious” of language as it were, showing how languages are organized with far greater complexity than had hitherto been imagined. While Chomskyans have presumed that these complexities must be innate, we have argued that there are two blind watchmakers: cultural evolution acting over deep time, and genetic infrastructure, which for the most part, of course, will not be specific to language.</p>
<p>Finally, let us note that Chomsky's own position makes it clear that the generative enterprise simply has a different target than the program we are trying to promote, namely, (in our case) working out the implications of language diversity for theories of cognition and human evolution. The following recent quote makes this clear:
<disp-quote>
<p>Complexity, variety, effects of historical accident, and so on, are overwhelmingly restricted to morphology and phonology, the mapping to the sensorimotor interface. That's why these are virtually the only topics investigated in traditional linguistics, or that enter into language teaching. They are idiosyncrasies, so are noticed, and have to be learned. If so, then it appears that language evolved, and is designed, primarily as an instrument of thought. Emergence of unbounded Merge in human evolutionary history provides what has been called a “language of thought,”
<italic>an internal generative system that constructs thoughts of arbitrary richness and complexity, exploiting conceptual resources that are already available or may develop with the availability of structured expressions</italic>
. (Chomsky
<xref ref-type="bibr" rid="ref10">2007</xref>
, p. 22; our emphasis)</p>
</disp-quote>
</p>
<p>On this view, UG primarily constrains the “language of thought,” not the details of its external expression. The same conclusion was stoically reached by Newmeyer (2004, p. 545): “Typological generalizations are therefore phenomena whose explanation is not the task of grammatical theory. If such a conclusion is correct, then the explanatory domain of Universal Grammar is considerably smaller than has been assumed in much work in the Principles-and-Parameters approach” and Chomsky (
<xref ref-type="bibr" rid="ref10">2007</xref>
, p. 18) seems in part to concur: “Diversity of language provides an upper bound on what may be attributed to UG.”</p>
<p>These then are simply different enterprises – Chomsky is concerned with the nature of recursive thought capacities, whereas linguistic typology and the non-generative linguists are concerned with what external language behavior indicates about the nature of cognition and its evolution. We have argued that the latter program has more to offer cognitive science at this juncture in intellectual history. Perhaps a mark of this is that our cross-linguistic enterprise is actually close to Chomsky's new position in some respects, locating recursion not as a universal property of (linguistic) syntax, but as a universal property of language use (pragmatics, or mind) – a fact, though, that emerges from empirical cross-linguistic work.</p>
</sec>
</sec>
<sec id="sec3">
<label>R3.</label>
<title>How much of the design space is populated?</title>
<p>
<bold>Pinker & Jackendoff</bold>
point out, no doubt correctly, that the possible design space for human languages is much greater than the space actually explored by existing languages. Two basic questions arise: (1) What exactly are the dimensions of the possible design space, of which just one corner is actually occupied? (2) What exactly does this sequestration in a small corner imply?</p>
<p>Before we get too excited by (1), we should consider (2).
<bold>Pinker & Jackendoff</bold>
imply that languages are locked into the corner by intrinsic, innate constraints, and that's why we don't find languages with really outlandish properties. But there is a fundamental fact they have overlooked. The earliest modern human remains date back to about 200,000 BP, and outside Africa date from only 100,000 years or so ago. If that is the date of the great diaspora, there has been relatively little time for diversification. Let us explain.</p>
<p>We have to presume that most likely all the languages we have now are descended by cultural evolution from a single ancestral tongue (it would take an event of total spoken language loss to be otherwise – not impossible, but requiring a highly unlikely scenario, such as an isolated lineage descended from a deaf couple). Now consider the following surprising fact. The structural properties of language change on a near-glacial time scale. In an ongoing study using Bayesian phylogenetics, Dunn et al. (in preparation) have found that, taken individually, a structural feature within a single large language family like Austronesian changes on average just once every 50,000 years or so.
<xref ref-type="fn" rid="en04">
<sup>4</sup>
</xref>
What that implies is that
<italic>all the languages we now sample from are within structural spitting distance of the ancestral tongue!</italic>
It is quite surprising in this light that typologists have been able to catalogue so much linguistic variation. Once again, a coevolutionary perspective is an essential corrective to the enterprise.</p>
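The back-of-the-envelope arithmetic here can be made explicit. In the sketch below (our illustration, with the rate taken from the Dunn et al. figure), each structural feature changes as a Poisson process at a rate of once per 50,000 years; over a 100,000-year window this gives an expected two change events per feature, and a non-trivial fraction of features that have never changed at all:

```python
import math
import random

RATE = 1 / 50_000   # changes per feature per year (Dunn et al. estimate)
YEARS = 100_000     # rough time depth of the modern human diaspora

def expected_changes(rate=RATE, years=YEARS):
    """Mean number of change events per structural feature over the period."""
    return rate * years

def p_no_change(rate=RATE, years=YEARS):
    """Poisson probability that a feature never changes: e^(-rate*years)."""
    return math.exp(-rate * years)

def simulate_drift(n_features=20, rate=RATE, years=YEARS, seed=0):
    """Flip binary features once per change event along a single lineage."""
    rng = random.Random(seed)
    descendant = []
    for _ in range(n_features):
        flips = sum(1 for _ in range(years) if rng.random() < rate)
        descendant.append(flips % 2)  # 0 = same as the ancestral state
    return descendant
```

Only a couple of change events per feature separate any modern language from the ancestral state (and low-arity features can revert), so the world's languages are far from the independent experiments that naive sampling would assume.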
<p>So whether we need a lot of further explanation for the fact that languages seem to be cultivating the same garden (
<bold>Tallerman</bold>
), to the degree that this can be shown, depends crucially on the extent to which you think the languages of the world are independent experiments. Francis Galton, who stressed the need for genealogical independence in statistical sampling, would urge caution!</p>
<p>Let us turn now to the properties of the design space.
<bold>Pinker & Jackendoff</bold>
point out that we set aside a rich set of functional universals on the grounds that they are definitional of language (a move we borrowed directly from Greenberg). Of course it is not trivial that these seem shared by all human groups (although very little empirical work has actually been done to establish this – but see, e.g., Stivers et al.
<xref ref-type="bibr" rid="ref54">2009</xref>
). We think that there is a clear biological infrastructure for language, which is distinct from the structural properties of language. This consists of two key elements: the vocal apparatus and the capacity for vocal learning, on the one hand (both biological properties unique in our immediate biological family, the Hominidae), and a rich set of pragmatic universals (communicative intention recognition prime among them), on the other. This is the platform from which languages arise by cultural evolution, and yes, it limits the design space, like our general cognitive and learning capacities (
<bold>Christiansen & Chater</bold>
). We emphasized that those interested in the evolution of the biological preconditions for language have been looking in the wrong place: Instead of looking at the input-output system (as Philip Lieberman has been urging for years; see, e.g., Lieberman
<xref ref-type="bibr" rid="ref40">2006</xref>
), or the pragmatics of communicative exchange, they've been focussed on the syntax and combinatorics, the least determined part of the system, as demonstrated by linguistic typology.</p>
<p>A functional perspective has been a long-running undercurrent in typological and descriptive linguistics, as
<bold>Croft</bold>
and
<bold>Goldberg</bold>
remind us. Goldberg suggests that the design space is highly constrained by systems motivations; for example, pressures to keep forms distinct while staying within the normal sound patterns of a language. These pressures provide explanations for the internal coherence of language structure, a perspective that is indeed necessary to explain why languages are for the most part not a heap of flotsam and jetsam accumulated during cultural evolution but rather beautifully machined systems, with innovations constantly being adjusted to their functions.</p>
<p>Returning to the question of how saturated or otherwise the design space is,
<bold>Pinker & Jackendoff</bold>
maintain it is easy to think of highly improbable but possible language types, and they suggest a few. Quite a few of these simply fail on the functional front – they are unproductive like their Cadda, or limited in expressiveness like their Bacca, and groups confined to speaking such languages would rapidly lose out to groups with more expressive systems. Daffa, the quantificational-logic language, lacks any form of deictics like “I,” “you,” “this,” “now,” or “here”: The presence of some deictics is certainly a functional universal of human language and follows from the emergence of human language from interactional, socially situated transactions.</p>
<p>Interestingly, though, some natural languages do have properties that partake of
<bold>Pinker & Jackendoff</bold>
's thought experiments. For example, their imaginary Cadda, a language of one word holophrases, lacks double articulation. The three-generation sign language of Al Sayyid is also said to lack double articulation (Meir et al., in press; Sandler et al.
<xref ref-type="bibr" rid="ref51">2009</xref>
), showing that this has to arise by cultural evolution: it is not given by instinct.</p>
<p>The musical language Gahha, likewise, isn't too far off attested reality. The West Papuan language Iau (Bateman
<xref ref-type="bibr" rid="ref2">1986a</xref>
;
<xref ref-type="bibr" rid="ref3">1986b</xref>
;
<xref ref-type="bibr" rid="ref4">1990a</xref>
;
<xref ref-type="bibr" rid="ref5">1990b</xref>
) has eight phonemic tones (including melodic contours), close to the number of phonemic segments, and uses them both for lexical distinctions and for grammatical distinctions including aspect, mood, and speech-act distinctions; other tone languages use pitch to indicate modality or case (e.g., Maasai).</p>
<p>Nor is the “rational” Fagga too far “outside the envelope.” Sure, it would require a level of semantic factorization down to a number of combinable semantic components not larger than the number of phonemes, but some semantic theories posit a few score “semantic primitives” in terms of which all meanings can be stated (e.g., Goddard & Wierzbicka
<xref ref-type="bibr" rid="ref21">2002</xref>
), and Demiin, the Lardil initiation language, maps the entire lexicon down to around 150 elements (Hale
<xref ref-type="bibr" rid="ref22">1982</xref>
). Combine Demiin semantics with !Xóõ phonology (159 consonant phonemes on some analyses), pair one semantic element to each phoneme, and Fagga might just squeak in.
<xref ref-type="fn" rid="en05">
<sup>5</sup>
</xref>
Whether or not it then actually existed would depend on whether a possible evolutionary route past the “historical filters” could be found – in other words, whether an evolutionary pathway exists that could reach this highly economical mapping of meaning elements onto phonological segments.</p>
<p>Finally, it is salutary to recollect that it is only relatively recently that we have come to recognize sign languages as fully developed languages with expressive power equal to that of spoken languages. These languages, with their easy access to iconicity and analog spatial coding, break out of the design space restricted by the strictly linear coding of the vocal-auditory channel. The typology of these languages is still in development, and there are plenty of surprises yet to come (see Meir et al., in press; Zeshan 2006a; 2006b).</p>
</sec>
<sec id="sec4">
<label>R4.</label>
<title>Language variation and the future directions of cognitive science</title>
<sec id="sec4-1">
<label>R.4.1.</label>
<title>Is history bunk? Linguistics as a science of change</title>
<p>
<disp-quote>
<p>
<italic>History is more or less bunk. It's tradition. We don't want tradition. We want to live in the present and the only history that is worth a tinker's dam is the history we make today.</italic>
</p>
<attrib>— Henry Ford (Interview in
<italic>Chicago Tribune</italic>
, May 25, 1916)</attrib>
</disp-quote>
</p>
<p>
<bold>Nevins</bold>
’ dismissal of the coevolutionary approach we are advocating as “hand-waving at diachronic contingencies” hints at another kind of dissatisfaction with the general program we outlined in the target article: the suspicion that we advocate an old-fashioned historical and cultural approach, which will return linguistics wholly to the humanities. The antipathy to history is based on the view that (a) it is the study of particularities, whereas we should be in the business of generalizing, and that (b) it cannot be predictive, whereas any empirical science should make falsifiable predictions.</p>
<p>But the study of evolution is centrally about history, the study of the match between organisms and environment over time, and few would doubt its scientific credentials. And modern linguistics began as a historical discipline, one that rapidly felt able to announce laws of historical change, while recent sociolinguistics has been able to catch language change in the making.</p>
<p>A fundamental shift is that modern computational methods have revolutionized the possibility of studying change in systems, using increasingly complex and realistic simulations. Within the study of evolution, computational cladistics exploits this to the full, using, for example, Bayesian inference and Markov chain Monte Carlo methods to run millions of simulations in search of the model that predicts back the data with the greatest likelihood. We can make history today, as Henry Ford thought we should.</p>
<p>In the coda of the target article (sect. 8) we sketched a set of future directions for the language sciences based on evolutionary ideas, and these new methods put those directions within our grasp right now. Take the idea stated in thesis (3), that recurrent clustering of solutions will occur in grammars of non-closely related languages – such a claim can be tested by simulations. Equally tractable is the idea that changes cascade (thesis [4]), so that a few crucial ones may land a language in a gully of future developments. Thesis (5), about coevolution between brain, vocal organs, and language, has already begun to be intensively explored by simulation (Christiansen & Chater 2008; Christiansen et al. 2009). Thesis (7) suggests that we should investigate how the full range of attested language systems could have arisen – pie in the sky without computational simulation, but now thinkable. For example, we could follow Bickerton (1981) and start with a simple Creole-like language, described by a set of formal features or characters, and use the rates and parameters of character change derived from recent work on the Bayesian phylogenetics of language families to simulate cultural evolution over more than 100,000 years. Do we derive patterns of diversity like those we now see, or would we need to model special historical circumstances such as massive hybridization?</p>
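The last of these ideas can be made concrete with a toy forward simulation. The sketch below is purely illustrative: the number of characters, the per-generation change rate, and the periodic lineage-doubling are invented placeholders, not rates estimated from actual Bayesian phylogenetic work.

```python
import random

def evolve(features, rate, rng):
    # Each typological character independently flips state with probability `rate`
    return [f ^ (rng.random() < rate) for f in features]

def simulate(n_chars=20, rate=0.002, steps=2000, split_every=400, seed=1):
    rng = random.Random(seed)
    ancestor = [0] * n_chars              # a maximally simple, Creole-like starting state
    lineages = [ancestor]
    for t in range(1, steps + 1):
        lineages = [evolve(lin, rate, rng) for lin in lineages]
        if t % split_every == 0:          # periodic lineage splitting: a crude
            lineages += [list(lin) for lin in lineages]  # stand-in for diversification
    return lineages

def mean_pairwise_distance(lineages):
    # Mean normalized Hamming distance: a simple proxy for typological diversity
    pairs = [(a, b) for i, a in enumerate(lineages) for b in lineages[i + 1:]]
    return sum(sum(x != y for x, y in zip(a, b)) / len(a) for a, b in pairs) / len(pairs)

langs = simulate()
print(len(langs), round(mean_pairwise_distance(langs), 3))
```

Comparing the diversity measure produced by such runs against the observed cross-linguistic distribution, under rates estimated from real language families, is the kind of test that thesis (7) envisages.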
<p>
<bold>Smolensky & Dupoux</bold>
ignore the recent synthesis of biological and cultural evolution. Thus they assert “language is more a biological than a cultural construct.” We would go further: “language is one hundred percent a biological phenomenon.” It is absurd to imagine that humans by means of culture have escaped the biosphere – we are just a species with a very highly developed “extended phenotype” or “niche construction” (Laland et al.
<xref ref-type="bibr" rid="ref38">1999</xref>
), using culture to create conditions favorable to our survival. The twin-track approach to human evolution that we sketched (derivatively from, e.g., Boyd & Richerson 1985; Durham 1991) tries to explicate this, unifying perspectives on history and phylogeny as the science of likely state changes in a population. There is immense room for future theoretical and modelling work here: without it we are not going to understand how we as a species evolved with the particular cognitive capacities we have.</p>
</sec>
<sec id="sec4-2">
<label>R.4.2.</label>
<title>Learning and development</title>
<p>A number of commentators stress how two further avenues of research will help to situate our understanding of human cognition in a biological context: human development, and comparative psychology across species. For reasons of space, and reflecting the limits of our own expertise, we underplayed the crucial role of learning and cognitive development that is presupposed by the linguistic variation we outlined. These commentators offer a valuable corrective, summarizing the human and cross-species literature. They show how much more powerful the learning mechanisms we can now draw on are than the basic associationist models available in the 1950s, when Chomsky argued that the weakness of such models forced us to postulate rich innate endowments for language learning. Indeed, the combined arguments put forth by the commentators go some way towards providing a solution to a problem we left unanswered at the end of section 7 of the target article: accounting for how language learning is possible in the face of the levels of diversity we describe.</p>
<p>
<bold>Bavin</bold>
does a good job of reminding readers what the basic issues are here, and especially the central debate over the domain-specificity of language learning.
<bold>Tomasello</bold>
observes that the Chomskyan argument about the unlearnability of language structure crucially relies on the assumption of simple associative learning: once we take into account the rich context of communication, with shared attention and intention interpretation, not to mention capacities for analogy and statistical learning, the argument fails.
<bold>Catania</bold>
also refers to work on other species showing that category discrimination can be triggered right across the board by a single new stimulus. Catania,
<bold>Christiansen & Chater</bold>
, and
<bold>Merker</bold>
all stress the funnelling effects of the learner bottleneck via “C-induction.” In Merker's words: “cultural transmission delivers the restricted search space needed to enable language learning, not by constraining the form language takes on an innate basis, but by ensuring that the form in which language is presented to the learner is learnable.” Catania suggests we explicitly incorporate a “third track” – acquisition – into our coevolutionary model, but we would prefer to maintain it as one (albeit powerful) set of selectors on linguistic structure alongside the others we outline in our article.</p>
<p>A number of commentators dwelt on Chomsky's “poverty of the stimulus” argument for rich innate language capacities.
<bold>Bavin</bold>
points out that the complex sentential syntax that motivates the argument is learnt so late that the child has wide experience of language on which to build. Perhaps the neatest refutation is provided by
<bold>Waterfall & Edelman</bold>
, who note a crucial property of the linguistic input to children: namely, repetition with minor variation, which draws attention to the structural properties of strings, exhibiting for the infant the “transformations” of Zellig Harris. They show how learning algorithms can effectively use this information to bootstrap from unsegmented text to grammatical analysis.</p>
<p>
<bold>McMurray & Wasserman</bold>
correctly point out that our position radically moves the goal posts for language learning, suggesting that not only are a slew of specialized learning strategies employed to learn a language (and these commentators provide very useful bibliographic leads here), but
<italic>which</italic>
of these strategies is deployed may depend on the language being learnt. We don't necessarily learn Yélî Dnye with its 90 phonemes, flexible phrase order, and widespread verb suppletion using the same strategies we use for English: As McMurray & Wasserman write, “humans … assemble language with a variety of learning mechanisms and sources of information, this assembly being guided by the particularities of the language they are learning.” Instead of talking about the passive acquisition of language, we should be talking about the active construction of many different skills. This perspective buries the idea of a single language acquisition device (LAD).</p>
<p>
<bold>Christiansen & Chater</bold>
, as well as
<bold>Catania</bold>
, emphasize that learning in development is the crucial filter through which languages have to pass. Languages have to be good to think with (to modify an adage of Levi-Strauss), otherwise they won't make it. Christiansen & Chater have described (both in their 2008
<italic>BBS</italic>
article [see
<italic>BBS</italic>
31(5)] and in Christiansen et al.
<xref ref-type="bibr" rid="ref11">2009</xref>
) interesting modelling that shows that the learning filter must be largely language-independent, and thus that properties of learning are unlikely to have evolved specifically for language. This is a new kind of evidence against the position taken by
<bold>Pinker & Jackendoff</bold>
that language-specific learning principles must be involved in the acquisition of language.</p>
<p>Finally, we would like to draw attention to one other crucial aspect of development, namely, the way in which the environment is known to modulate developmental timing in the underlying biology of organisms, so that phenotypic variation can be achieved from the same genotype (“phenotypic plasticity”), and conversely, phenotypic identity can be obtained from variant genotypes (“developmental buffering”). In the conclusion to our target article we drew attention to the extraordinary achievement that is culture – generating phenotypic difference where there is no genetic difference, and phenotypic identity where there is genetic difference. These issues have been much explored in the biological literature on epigenesis and development (see West-Eberhard [
<xref ref-type="bibr" rid="ref56">2003</xref>
] for a fascinating overview).</p>
</sec>
<sec id="sec4-3">
<label>R.4.3.</label>
<title>The comparative perspective across species</title>
<p>Our other major omission, as some commentators noticed, is the lack of reference to the comparative psychology of other species.
<bold>Margoliash & Nusbaum</bold>
appeal to linguists and others interested in the evolution of language to “cast off the remaining intellectual shackles of linguistic speciesism” and take the findings of animal research more into account. They usefully remind us of the importance of the relationship between perceptual and motor skills.
<bold>Merker</bold>
notes how findings about complex learned birdsong can explain how a prelinguistic human adaptation for emancipated song could provide a mechanism for sustaining and elaborating string transmission, even if this was timed before the full emergence of social cognition: it can be driven by the need to impress by elaborate vocal display even when not yet used to communicate meaning. Darwin (
<xref ref-type="bibr" rid="ref12">1871</xref>
) had, of course, imagined that language evolved from song (see Fisher & Scharff
<xref ref-type="bibr" rid="ref18">2009</xref>
; Fitch
<xref ref-type="bibr" rid="ref19">2006</xref>
, for an update).</p>
<p>
<bold>Penn, Holyoak, & Povinelli</bold>
(
<bold>Penn et al.</bold>
) point out that our demonstration of the variability in language, and the implication that there is no simple innate basis for it, has interesting implications for a central issue in comparative psychology: what exactly is the Rubicon which divides us from apes? If the crucial ingredient was a chance language gene or the genetic substrate for UG, it might be possible to argue that language alone is responsible for the sea-change in our cognition. But if there is no such magic bullet, then languages must be learnt by principles of general cognition, and the Rubicon must be constituted by more fundamental and more general differences in cognition.</p>
<p>
<bold>Penn et al.</bold>
err, though, when they try to extend the argument to downplay Tomasello's (2008) thesis that the crucial divide is the special assemblage of abilities that make up the pragmatic infrastructure for human language. Tomasello's assemblage of specialized social cognition is precisely what we need to explain the genesis of language diversity – it provides a general platform both for language learning and for the elaboration of distinct systems. Still, bringing their point together with those by
<bold>Margoliash & Nusbaum</bold>
and
<bold>Merker</bold>
is a useful reminder that we need to account both for the emergence of
<italic>patterned form</italic>
(where cross-species studies of sophisticated vocalizers must take on greater importance) and of
<italic>productive meaning</italic>
(where social cognition is likely to remain the main driver).</p>
<p>
<bold>Penn et al.</bold>
see in our display of language variation more evidence for their identification of a major discontinuity between apes and humans in the capacity for relational thought. If this capacity is not introduced by a single new evolved trait, human language, then the gulf is a feature of general cognition. But we note two caveats here: First, in our very nearest cousins (chimps and bonobos), there are pale shadows of relational thinking (Haun & Call
<xref ref-type="bibr" rid="ref23">2009</xref>
). Second, no one doubts the importance of language in delivering ready-made relational concepts (Gentner
<xref ref-type="bibr" rid="ref20">2003</xref>
). Beyond that, we probably agree about the facts, but might value them differently: Is 10% continuity with chimps a telling bit of continuity, or is 90% discontinuity a hopeless Rubicon?</p>
</sec>
</sec>
<sec id="sec5">
<label>R5.</label>
<title>Situating language and cognition in the biology of variation</title>
<p>Science moves in new directions blown by winds of different kinds – Kuhnian collapses, new technologies, new integrative insights, newly developing fields, funding biases, even boredom with old paradigms. We think it is pretty clear that for a mix of these reasons, the cognitive sciences are about to undergo a major upheaval. Classical cognitive science was based on a mechanistic analogy with a serial computational device, where serial algebraic algorithms could represent models of the mind. A simplifying assumption was made at the outset: we need only characterize one invariant system. That is, the human mind is essentially an invariant processing device, processing different content to be sure, but running the same basic algorithms regardless of its instantiations in different individuals with different experiences, different environments, and different languages (cf.
<bold>Smolensky & Dupoux</bold>
's “a universal principle is a property true of all minds”).</p>
<p>This view has taken a number of knocks in the last twenty years; for example, from the success of parallel computational models and the rise of the brain sciences. The brain sciences were at first harnessed to the classical enterprise, with invariance sought beneath individual variation in brain structure and function through selecting only right-handed or male subjects, pooling data, and normalizing brains. But cognitive neuroscience has increasingly broken free, and now the range of individual biological variation is a subject of interest in its own right.</p>
<p>Pushing this development is genetics. It is now feasible to correlate brain structure and function with scans across half a million single nucleotide polymorphisms (SNPs) or genetic markers. We already know detailed facts about, for example, the alleles that favor better long-term memory (Papassotiropoulos et al.
<xref ref-type="bibr" rid="ref46">2006</xref>
), and we are well on the way to knowing something about the genetic bases of language (Fisher & Marcus 2006; Vernes et al. 2008). On the processing side, we know that about 8% of individuals have right-lateralized language, that individuals differ markedly in the degree of language lateralization, and that on specific tasks about 10% of individuals may not show activation of the classic language areas at all (Müller
<xref ref-type="bibr" rid="ref44">2009</xref>
). (True, most individuals will have circuitry special to language, as
<bold>Pinker & Jackendoff</bold>
remark, but that may be only because using language bundles specific mental tasks, and because adults have built the circuitry in extended development.) We even have preliminary evidence that gene pools with certain biases in allele distribution are more likely to harbour languages of specific sorts (Dediu & Ladd 2007). We are not dealing, then, with an invariant machine at all, but with a biological system whose evolution has relied on keeping variance in the gene pool.</p>
<p>This research is going to revolutionize what we know about the mind and brain and how it works. By putting variation central, as the fuel of evolution, it will recast the language sciences. Some aspects of the language sciences are pre-adapted to the sea-change – sociolinguistics, dialectology, historical linguistics, and typology – provided they can take the new mathematical methods on board. But we can look forward to the new psycholinguistics, centrally concerned with variation in human performance in the language domain both within and across language groups, and the new neurocognition of language, which will explore both the varying demands that different languages put on the neural circuitry and the way in which superficial phenotypic standardization is achieved by distinct underlying processing strategies in different individuals.</p>
<p>In this context, renewed interest in the variation in human experience and expertise, in the cultural contexts of learning, and the diversity in our highest learned skill – language – is inevitable. For the cognitive and language sciences to engage with these developments, a first step is to take on board the lessons of those linguistic approaches that place variation and process at centre stage. Then the very diversity of languages becomes no longer an embarrassment but a serious scientific resource. That is the message we have been trying to convey.</p>
</sec>
<sec id="sec6">
<label>R6.</label>
<title>Appendix: Disputed data and generalizations</title>
<sec id="sec6-1">
<label>R.6.1.</label>
<title>Kayardild nominal tense</title>
<p>The occurrence of tense on Kayardild nominals was cited by us as a counterexample to Pinker and Bloom's (1990) claim that languages will not code tense on nominals.
<bold>Baker</bold>
's commentary does not dispute this, but then tries to use it to establish an orthogonal issue, namely, his verb-object constraint (see sect. R.6.10). While it is true that in Kayardild, tense appears on objects rather than subjects, it is not hard to find other languages, such as Pitta-Pitta (Blake
<xref ref-type="bibr" rid="ref7">1979</xref>
), where it is the subject rather than the object that codes for tense – so the general phenomenon gives no succor to Baker's hoped-for universal. Needless to say, all this only reinforces the fact that tense can occur on nominals.</p>
</sec>
<sec id="sec6-2">
<label>R.6.2.</label>
<title>Positionals and ideophones</title>
<p>We noted in the target article that not only are the “big four” word classes (noun, verb, adjective, adverb) not wholly universal, but there are plenty of other word classes out there, including positionals and ideophones. We used the example of Mayan positionals.
<bold>Pesetsky</bold>
is right that Mayan positionals are classically defined as a root class, not a stem class, but the facts are actually more complex (see, e.g., Haviland
<xref ref-type="bibr" rid="ref25">1994</xref>
). Positionals have their own unique distribution at the stem level too, occurring, for example, in a special class of mensural classifiers (de Léon 1988), body-part constructions (Haviland
<xref ref-type="bibr" rid="ref24">1988</xref>
, p. 92) and color-plus-position constructions (Haviland, submitted). In any case, many languages from around the world (such as Yélî Dnye; Levinson 2000) have positionals as a special word class with their own distinctive distributions. (See Ameka and Levinson [2007] for detailed examples and a typology.)</p>
<p>
<bold>Pesetsky</bold>
similarly tries to undermine the status of ideophones/expressives as a word class (the terms are more or less synonymous, but come from different linguistic descriptive traditions). He correctly notes that Osada (1992) does not discuss their syntax in Mundari, and this reflects a general neglect of their syntactic characteristics in linguistic descriptions, apart from simplistic characterizations of them as “syntactically unintegrated.” However, a careful discussion of the syntax of the functionally similar class of
<italic>expressives</italic>
in another Austroasiatic language, Semelai, can be found in Kruspe (
<xref ref-type="bibr" rid="ref37">2004</xref>
): their syntactic distribution closely parallels that of direct speech complements. Likewise in Southern Sotho (Molotsi
<xref ref-type="bibr" rid="ref43">1993</xref>
), ideophones pattern like complements of “say,” with the further property that they can be passivized, so that “John snatched the woman's purse” is literally “John said snatch woman's purse,” which can be passivized as “snatch was said woman's purse.” In short, ideophones and expressives have a syntax, if sometimes limited.</p>
</sec>
<sec id="sec6-3">
<label>R.6.3.</label>
<title>Straits Salish noun versus verb distinction</title>
<p>We pointed out that it was still unclear whether in fact there is a universal noun/verb distinction. We mentioned the Salishan language Straits Salish as an example of a language plausibly claimed to lack a noun/verb distinction. Instead of presenting counteranalyses of the Straits Salish data,
<bold>Tallerman</bold>
cites data from Nuuchahnulth (Nootka), from another language family, with no demonstration that the arguments can be transferred to Straits Salish. A crucial difference between the languages is that names can be predicative in Straits Salish but not in Nootka. Tallerman's major arguments for the existence of a noun/verb distinction in Nuuchahnulth were already given in Jacobsen (
<xref ref-type="bibr" rid="ref33">1979</xref>
) and Schachter (
<xref ref-type="bibr" rid="ref53">1985</xref>
), and Jelinek (1995) takes care to show that they don't apply to Straits Salish, which is why we used Salish rather than Nootka as an example. We agree with her, though, that further investigation of the Salish case is needed (a point also articulated in Evans & Osada 2005); hence our statement that no definitive consensus has been reached.</p>
</sec>
<sec id="sec6-4">
<label>R.6.4.</label>
<title>Jemez/Kiowa number</title>
<p>
<bold>Harbour</bold>
reproaches us for attributing the “unexpected number” facts to Jemez rather than Kiowa; in fact, the languages are related and both exhibit similar phenomena (Mithun 1999, p. 81, and personal communication). We thank Harbour for picking up the factual errors he points out, but for our part would like to correct his mischaracterization of this case as our “prime example” of “something we would never think of” – it was one of many, and the rest still stand. More importantly, further cross-linguistic data disputes his claim that “singular ‘we’ arises because Winnebago uses only [± augmented].” The use of “because” here illustrates the fallacy of inferring cause from single cases. Harbour's formulation predicts that if a language uses a more elaborated grammatical number system than just [± augmented] it should not treat “1+2” as singular. Yet there are many languages which have a three-way number system and which nonetheless treat 1+2 in the same series as the singulars, like Bininj Gun-wok (Evans 2003a).</p>
</sec>
<sec id="sec6-5">
<label>R.6.5.</label>
<title>Arrernte syllable structure</title>
<p>
<bold>Nevins</bold>
, and (briefly)
<bold>Berent</bold>
, take issue with our citing Arrernte as an example of a language that defies the “Universal CV preference” by taking VC as the underlying syllable type. To contextualize their riposte, it is worth quoting Hyman (
<xref ref-type="bibr" rid="ref31">2008</xref>
, p. 13):
<disp-quote>
<p>In each of the above cases, there is no “knock-out argument.” Anyone determined to maintain [these] universals can continue to do so, the worst consequence being an indeterminate or more awkward analysis…. Architectural universals have this property: it all depends on your model and on what complications you are willing to live with.</p>
</disp-quote>
</p>
<p>
<bold>Nevins</bold>
' purported counter-analysis is of this type. To make it work is not just a matter of allowing onset-sensitive morae, not a problem in itself, but also of leaving the coda out of weight considerations, which is more problematic. Moreover, he only considers some of the phenomena that Breen and Pensalfini (1999) cite – such as the fact that the language game known as “Rabbit Talk” picks out exactly the VC syllable to move to the end of the word – and ignores the arguments they give for postulating an initial vowel in words which start with a C when pronounced in isolation; namely, that this vowel appears when the word is not pronounced breath-group initially, and that postulating it simplifies other morphonological processes. A further argument in favor of the VC analysis (see Evans
<xref ref-type="bibr" rid="ref16">1995b</xref>
) is that although there is considerable variation in how words are pronounced in isolation (e.g., “sits” can be pronounced [anəmə], [anəm], [nəmmə], or [nəm]), the number of syllables remains constant under the VC syllable analysis (at 2 in this instance), whereas under other analyses the syllable count varies, even with the moraic adjustments that Nevins proposes. In short, proposing VC syllables lines up beautifully with a whole range of rules, whereas adopting the alternative, while workable, is crabbed by inelegancies.</p>
<p>A deeper problem than mere inelegance in forcing a language like Arrernte into a procrustean CV bed is that it draws attention away from explaining what forces have shaped the unusual Arrernte structure. There is growing evidence from phonetic work by Butcher (
<xref ref-type="bibr" rid="ref8">2006</xref>
) that the Arrernte VC syllable represents the phonologization of a whole syndrome of phonetic and phonological effects at work in Australian languages, linking a number of phenomena: (a) the unusual proliferation of distinctive heterorganic clusters intervocalically (e.g., nk vs. ŋk vs. ɲk vs. ɳk); (b) the large set of place contrasts for oral and nasal stops, including contrasts like alveolar versus postalveolar, that are most effectively cued by the leading rather than the following vowel; (c) the neutralization of the apico-alveolar versus apico-postalveolar contrast word-initially; and (d) the widespread pre-stopping of intervocalic nasals and laterals.</p>
<p>The joint effect of all these features is to concentrate the maximum amount of contrasting information in intervocalic position, and to make the leading vowel crucial for signalling the place of following consonants through F2 and F3 formant transitions. In other words, it is VC rather than CV units (or, more accurately, the continuous phonetic signals that correspond to them) which are the most informative, in terms of cueing the greater number of contrasts. This now allows us to give an insightful account of why VC syllables emerge as phonological units in some Australian languages. We would not be led to this explanation if we used too much abstract representational machinery to conjure away the existence of an aberrant pattern.</p>
</sec>
<sec id="sec6-6">
<label>R.6.6.</label>
<title>Finite state grammars and cotton-top tamarins</title>
<p>
<bold>Pullum & Scholz</bold>
pull us up for propagating a misinterpretation of the findings in Fitch and Hauser (2004), by stating that cotton-top tamarins have a general ability to learn finite state languages. We stand corrected, and urge the reader to heed Pullum & Scholz's clarification that Fitch and Hauser's findings are restricted to the much smaller subset known as SL (strictly local) languages.</p>
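To see why the distinction matters: a strictly local language is fixed entirely by a finite set of permitted substrings of bounded length, so membership can be checked with a sliding window and no memory at all. A minimal sketch of the 2-local case (the toy grammar over {a, b} is our own invention, not Fitch and Hauser's actual stimuli):

```python
def make_sl2_recognizer(allowed_bigrams):
    # A strictly 2-local language is fully defined by its permitted adjacent
    # pairs, including word-boundary markers '>' (start) and '<' (end).
    def accepts(string):
        padded = '>' + string + '<'
        return all(padded[i:i + 2] in allowed_bigrams for i in range(len(padded) - 1))
    return accepts

# Toy grammar: strings over {a, b} in which 'a' never immediately follows 'b'
grammar = {'>a', '>b', 'aa', 'ab', 'bb', 'a<', 'b<', '><'}
accepts = make_sl2_recognizer(grammar)
print(accepts('aab'))   # True: every adjacent pair is licensed
print(accepts('aba'))   # False: contains the unlicensed pair 'ba'
```

Anything recognizable this way is far weaker than general finite-state power, which is why the restriction of the tamarin findings to SL languages matters.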
<p>The investigation of recursive and augmentative structures in animal cognition is currently a minor industry in cognitive science. If this is meant to shed light on the human language capacity, it is arguably quite misguided. Indefinite recursion, or discrete infinity as Chomsky prefers to call it, is not an actual property of human language – no human is capable of indefinite centre-embedding, for example. Only in the light of a radical distinction between competence and performance does this minor industry make any sense at all, and that little sense is undermined by the impossibility of testing animals directly for indefinite recursion.</p>
</sec>
<sec id="sec6-7">
<label>R.6.7.</label>
<title>Cinque's generalization about Greenberg's Universal #20</title>
<p>Cinque's generalization, which specifies a strict ordering in noun phrases where the noun comes last, is raised by
<bold>Rizzi</bold>
as an example of how implicational universals can be made to follow from parameterized rules. However, Dryer (
<xref ref-type="bibr" rid="ref13">2009</xref>
), drawing on a larger cross-linguistic sample, shows that you get a better fit with the data if Cinque's formal categories (like Adjective) are replaced by semantic categories (like “modifier denoting size, color, etc.”). Cinque's parameterization just gives a discrete and approximate characterization of statistical trends reflecting the interaction of many functional selectors.</p>
</sec>
<sec id="sec6-8">
<label>R.6.8.</label>
<title>Subjacency and “invisible Wh-movement”</title>
<p>A number of commentators (
<bold>Smolensky & Dupoux, Freidin, Rizzi</bold>
) appealed to the Chomskyan notion of “Subjacency” as a convincing example of a highly abstract principle or rule-constraint which is manifested directly in languages like English. The idea in a nutshell is that movement of constituents is constrained so that they may not cross more than one “bounding node” in the syntactic tree (in English, bounding nodes are an NP, i.e., a noun phrase, or a complementizer phrase headed by
<italic>that</italic>
). Hence you can say “What does John believe that Mary saw __?” but not “*What does John believe the rumor that Mary saw __?”.</p>
<p>Now consider
<bold>Rizzi</bold>
’s point that many languages, including Chinese, do not move their Wh-words (so-called in situ
<italic>Wh</italic>
) – they would stay in the corresponding slots indicated in the sentences just given – but appear to exhibit semantic interpretations that might constitute a parallel phenomenon. The apparent lack of Wh-movement in Chinese, which at first seems an embarrassment to the theory, is claimed however to mask
<italic>covert movement</italic>
at an underlying level, close to semantic interpretation: consequently the range of construals of a Chinese Wh-question is argued to be limited by the very same abstract constraint postulated for languages with overt movement (see examples in Rizzi's commentary). For generativists, this may seem like a double scoop: Not only is the constraint of an abstract enough kind that children would find it hard to learn in English, but it even holds in Chinese where it is, in effect, invisible, so could not possibly be learnt! Moreover, it is a completely arbitrary and unmotivated constraint, so there is no apparent way for the child to infer its existence. Therefore, it must be part of UG, a quirk of our innate language organ.</p>
<p>But this in fact is not at all a convincing example to the other camp. First, to make it work in languages with and without overt “movement,” it has to be so particularized (“parameterized”) for each language that, as we noted in the target article, the child might as well learn the whole thing (Newmeyer 2005). Second, there are perfectly good alternative models that do not use movement: Wh-words are simply generated in the right place from the start, using methods other than movement to get the correct logical interpretations. Within LFG, a combination of the FOCUS discourse function and prosodic structure can get in situ Wh interpretation with no covert movement required (Mycock
<xref ref-type="bibr" rid="ref45">2006</xref>
). Through methods like these, LFG, HPSG, and Role and Reference Grammar have all developed ways of modelling both the English syntactic constraints and the Chinese interpretation constraints without any covert operations or unlearnable constraints.</p>
<p>Van Valin (
<xref ref-type="bibr" rid="ref55">1998</xref>
) offers one of these rival explanations.
<xref ref-type="fn" rid="en06">
<sup>6</sup>
</xref>
He notes that for entirely general purposes one needs to have a notion of “focus domain” – roughly the unit that can be focussed on as new information in a sentence. A chunk like
<italic>Mary did X</italic>
is such a unit, but
<italic>the rumor that Mary did X</italic>
is not, because it marks the information as already presumed. So it makes no sense to question part of it. Focus domains have a precise structural characterization, and information structure of this kind explains both the English and the Chinese facts without positing covert entities or unmotivated rule constraints. Van Valin shows that focus domains are easily learned by children from the range of possible elliptical answers to yes/no questions. This explanation needs minimal equipment (a definition of focus domain) and no magic or UG.</p>
<p>Take your pick between the two explanations – an unmotivated, unlearnable, hidden constraint implying innate complex architecture, or a general design for communication requiring nothing you wouldn't need for other explanatory purposes. As C.-R. Huang (
<xref ref-type="bibr" rid="ref30">1993</xref>
) notes after discussing the Mandarin data, “there is no concrete evidence for an abstract movement account … invoking Ockham's razor would exclude movements at an abstract level.”</p>
</sec>
<sec id="sec6-9">
<label>R.6.9.</label>
<title>C-command</title>
<p>
<bold>Rizzi</bold>
claims that “no language allows coreference between a pronoun and a NP when the pronoun c-commands the NP” (*
<italic>He said that John was sick</italic>
; *
<italic>each other saw the men</italic>
). We pointed out that in languages (like Jiwarli) which lack constituency as the main organizing principle of sentence structure, notions like c-command cannot be defined (c-command is defined in terms of a particular kind of position higher in a syntactic constituency tree). But let us interpret this relation loosely and charitably, in terms of some general notion of domination or control. Then the observation would have very wide validity, but it would still be only a strong tendency. Counterexamples include Abkhaz reciprocals (Hewitt
<xref ref-type="bibr" rid="ref28">1979</xref>
) where the verbal affix corresponding to “each other” occupies the subject rather than the object slot, and Guugu Yimidhirr pronominalization, where it is possible to have a pronoun in the higher clause coreferential with a full NP in the lower clause (Levinson
<xref ref-type="bibr" rid="ref39">1987</xref>
).</p>
<p>Once again, then, we are dealing with a widespread but not universal pattern. The typological/functional paradigm explains it as emerging from a more general tendency in discourse (not just syntax): reference to entities proceeds with increasing generality, which is why “She came in. Barbara sat down” is not a way of expressing “Barbara came in. She sat down” (see Levinson [2000] for a detailed Gricean account). Many languages have grammaticalized the results of this more general tendency, producing grammatical rules which can then be described by c-command (if you want to use that formalism) or by other formalisms. Seeking the most general explanation for cross-linguistic patterning here points us to general pragmatic principles (“use the least informative form compatible with ensuring successful reference given the current state of common ground”), rather than to a specific syntactic constraint which applies only in a subset (even if a majority) of the world's languages. Many strong tendencies across languages appear to have a pragmatic or functional base, undermining a presumption of innate syntax.</p>
</sec>
<sec id="sec6-10">
<label>R.6.10.</label>
<title>The “Verb-Object Constraint”</title>
<p>
<bold>Baker</bold>
offers his “Verb-Object Constraint (VOC)” as a proposal for a “true linguistic universal” of this high-level kind – the generalization that the verb “combines” with the theme/patient before a nominal that expresses the agent/cause (“combines” is not defined, so we take it loosely). But this, too, rapidly runs afoul of the cross-linguistic facts. Note that his formulation equivocates between whether the constraint is stated in terms of semantic roles such as agent and patient, or grammatical relations such as subject and object; some of the problems below pertain to one of these, some to the other, some to both:
<list list-type="number">
<list-item>
<label>1.</label>
<p>Many languages don't have a clear notion of subject and object (see remarks in our target article). If we avoid this problem by stating the universal in terms of thematic roles (theme, patient, agent, experiencer), then we'll find such anomalies as languages which effectively idiomatize the subject-verb combination, only combining secondarily with the patient, employing idioms like “headache strikes me/the girl” or “fever burns him” (Evans
<xref ref-type="bibr" rid="ref17">2004</xref>
; Pawley et al.
<xref ref-type="bibr" rid="ref47">2000</xref>
).</p>
</list-item>
<list-item>
<label>2.</label>
<p>Although polysynthetic languages like Mohawk usually incorporate objects rather than subjects into the verb, there are some that do incorporate transitive subjects/agents (not just objects as
<bold>Baker</bold>
's generalization would predict), most famously the Munda language Sora (Ramamurti
<xref ref-type="bibr" rid="ref48">1931</xref>
; cf. Anderson
<xref ref-type="bibr" rid="ref1">2007</xref>
).</p>
</list-item>
<list-item>
<label>3.</label>
<p>There are twice as many VSO languages as VOS languages, 14% versus 7%, respectively, in a worldwide sample by Dryer (
<xref ref-type="bibr" rid="ref13">2009</xref>
), but only VOS languages seem likely to facilitate a “combination” of verb and object.</p>
</list-item>
<list-item>
<label>4.</label>
<p>Languages with ergative syntax group the object of transitives and the subject of intransitives as one type of entity, around which the syntax is organized (
<bold>Baker</bold>
notes this as a potential problem, but doesn't offer the solution).</p>
</list-item>
</list>
</p>
<p>Taken together, these problems make the VOC just one more observation that is certainly a statistical tendency, but which it is misleading to elevate to “universal” status.</p>
</sec>
</sec>
</body>
<back>
<ack id="ack">
<title>ACKNOWLEDGMENTS</title>
<p>We thank the following people for discussion, comments, and ideas in preparing this response: Mary Beckman, Balthasar Bickel, Penelope Brown, Andy Butcher, Grev Corbett, Bill Croft, Nick Enfield, Adele Goldberg, Martin Haspelmath, Yan Huang, Larry Hyman, Rachel Nordlinger, Masha Polinsky, Arie Verhagen, and Robert Van Valin.</p>
</ack>
<fn-group>
<title>NOTES</title>
<fn id="en01" symbol="1.">
<label>1.</label>
<p>We use the term
<italic>generative linguists</italic>
to refer to linguists working specifically within frameworks deriving from the various theories of Chomsky. The term also has a wider sense, referring to a larger body of researchers working in fully explicit formal models of language such as LFG, HPSG, and their derivatives. These alternative theoretical developments have been much less wedded to the Chomskyan notion of Universal Grammar. LFG, in particular, has explicitly developed a much more flexible multidimensional architecture allowing for both constituency and dependency relations as well as the direct representation of prosodic units.</p>
</fn>
<fn id="en02" symbol="2.">
<label>2.</label>
<p>Of course these need to be relativized to modality: facts about the position of the larynx or the stability of some vowel formants across varying vocal tract configurations are only relevant to sound, whereas constraints on the production of hand or arm gestures are only relevant to manual sign. There will be some parallels, but the degree to which “sonority” is the same phenomenon in both, as
<bold>Berent</bold>
suggests, is still controversial (Sandler 2009; Sandler & Lillo-Martin
<xref ref-type="bibr" rid="ref52">2006</xref>
, p. 245).</p>
</fn>
<fn id="en03" symbol="3.">
<label>3.</label>
<p>Hockett (
<xref ref-type="bibr" rid="ref29">1960</xref>
) correctly identified this as part of the “duality of patterning” (together with combinatorial semantics) necessary if language is to be unlimited in its productivity.</p>
</fn>
<fn id="en04" symbol="4.">
<label>4.</label>
<p>Lest this finding invite incredulity, given that the language family is assumed to be less than 6,000 years old, this figure is worked out by summing independent path-lengths in many branches of the family tree and looking for the total numbers of changes from an ancestral language. The number should be taken with a pinch of salt but is probably in the right general ballpark.</p>
</fn>
<fn id="en05" symbol="5.">
<label>5.</label>
<p>Abui, on František Kratochvíl's (
<xref ref-type="bibr" rid="ref36">2007</xref>
) analysis, comes rather close.</p>
</fn>
<fn id="en06" symbol="6.">
<label>6.</label>
<p>For other kinds of explanation in terms of processing costs, see Kluender (
<xref ref-type="bibr" rid="ref34">1992</xref>
;
<xref ref-type="bibr" rid="ref35">1998</xref>
), Hawkins (
<xref ref-type="bibr" rid="ref27">1999</xref>
), and Sag et al. (
<xref ref-type="bibr" rid="ref50">2007</xref>
).</p>
</fn>
</fn-group>
<ref-list>
<title>References</title>
<ref>
<citation citation-type="other" id="ref1">
<name>
<surname>Anderson</surname>
<given-names>G. D. S</given-names>
</name>
. (
<year>2007</year>
)
<italic>The Munda verb. Typological perspectives.</italic>
Mouton de Gruyter.</citation>
</ref>
<ref>
<citation citation-type="journal" id="ref2">
<name>
<surname>Bateman</surname>
<given-names>J</given-names>
</name>
. (
<year>1986</year>
a)
<article-title>Tone morphemes and aspect in Iau</article-title>
.
<source>Nusa</source>
<volume>26</volume>
:
<fpage>1</fpage>
<lpage>50</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="journal" id="ref3">
<name>
<surname>Bateman</surname>
<given-names>J</given-names>
</name>
. (
<year>1986</year>
b)
<article-title>Tone morphemes and status in Iau</article-title>
.
<source>Nusa</source>
<volume>26</volume>
:
<fpage>51</fpage>
<lpage>76</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="journal" id="ref4">
<name>
<surname>Bateman</surname>
<given-names>J</given-names>
</name>
. (
<year>1990</year>
a)
<article-title>Iau segmental and tone phonology</article-title>
.
<source>Nusa</source>
<volume>32</volume>
:
<fpage>29</fpage>
<lpage>42</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="journal" id="ref5">
<name>
<surname>Bateman</surname>
<given-names>J</given-names>
</name>
. (
<year>1990</year>
b)
<article-title>Pragmatic functions of the tone morphemes and illocutionary force particles in Iau</article-title>
.
<source>Nusa</source>
<volume>32</volume>
:
<fpage>1</fpage>
<lpage>28</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="other" id="ref6">
<name>
<surname>Bickel</surname>
<given-names>B</given-names>
</name>
. (
<year>2009</year>
) Typological patterns and hidden diversity. Plenary Talk, 8th Association for Linguistic Typology Conference,
<publisher-loc>Berkeley, CA</publisher-loc>
, July 24, 2009.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref7">
<name>
<surname>Blake</surname>
<given-names>B. J</given-names>
</name>
. (
<year>1979</year>
) Pitta-Pitta. In:
<source>Handbook of Australian languages, vol. 1</source>
, ed.
<name>
<surname>Dixon</surname>
<given-names>R. M. W.</given-names>
</name>
&
<name>
<surname>Blake</surname>
<given-names>B. J.</given-names>
</name>
, pp.
<fpage>182</fpage>
<lpage>242</lpage>
.
<publisher-name>Australian National University (ANU) Press</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref8">
<name>
<surname>Butcher</surname>
<given-names>A</given-names>
</name>
. (
<year>2006</year>
) Australian Aboriginal languages: Consonant-salient phonologies and the “place-of-articulation imperative”. In:
<source>Speech production: Models, phonetics processes and techniques</source>
, ed.
<name>
<surname>Harrington</surname>
<given-names>J. M.</given-names>
</name>
&
<name>
<surname>Tabain</surname>
<given-names>M.</given-names>
</name>
, pp.
<fpage>187</fpage>
<lpage>210</lpage>
.
<publisher-name>Psychology Press</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref9">
<name>
<surname>Butt</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Dalrymple</surname>
<given-names>M.</given-names>
</name>
&
<name>
<surname>Holloway King</surname>
<given-names>T.</given-names>
</name>
, eds. (
<year>2006</year>
)
<source>Intelligent linguistic architectures: Variations on themes by Ronald M. Kaplan</source>
.
<publisher-name>CSLI</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" id="ref10">
<name>
<surname>Chomsky</surname>
<given-names>N</given-names>
</name>
. (
<year>2007</year>
)
<article-title>Of minds and language</article-title>
.
<source>Biolinguistics</source>
<volume>1</volume>
:
<fpage>9</fpage>
–27.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref11">
<name>
<surname>Christiansen</surname>
<given-names>M. H.</given-names>
</name>
,
<name>
<surname>Collins</surname>
<given-names>C.</given-names>
</name>
&
<name>
<surname>Edelman</surname>
<given-names>S.</given-names>
</name>
, eds. (
<year>2009</year>
)
<source>Language universals</source>
.
<publisher-name>Oxford University Press</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref12">
<name>
<surname>Darwin</surname>
<given-names>C</given-names>
</name>
. (
<year>1871</year>
)
<source>The descent of man and selection in relation to sex</source>
.
<publisher-name>John Murray</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="other" id="ref13">
<name>
<surname>Dryer</surname>
<given-names>M</given-names>
</name>
. (
<year>2009</year>
) On the order of demonstrative, numeral, adjective and noun: An alternative to Cinque. Public Lecture,
<publisher-name>Max Planck Institute for Evolutionary Anthropology, Department of Linguistics, and University of Leipzig, Institute of Linguistics</publisher-name>
, May 19, 2009.</citation>
</ref>
<ref>
<citation citation-type="other" id="ref14">
<name>
<surname>Dunn</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Greenhill</surname>
<given-names>S. J.</given-names>
</name>
,
<name>
<surname>Levinson</surname>
<given-names>S. C.</given-names>
</name>
&
<name>
<surname>Gray</surname>
<given-names>R. D</given-names>
</name>
. (in preparation) Phylogenetic trees reveal lineage specific trends in the evolved structure of language.</citation>
</ref>
<ref>
<citation citation-type="journal" id="ref15">
<name>
<surname>Edelman</surname>
<given-names>S.</given-names>
</name>
&
<name>
<surname>Christiansen</surname>
<given-names>M. H</given-names>
</name>
. (
<year>2003</year>
)
<article-title>How seriously should we take minimalist syntax?</article-title>
<source>Trends in Cognitive Sciences</source>
<volume>7.2</volume>
:
<fpage>60</fpage>
<lpage>61</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref16">
<name>
<surname>Evans</surname>
<given-names>N</given-names>
</name>
. (
<year>1995</year>
b) Current Issues in Australian phonology. In:
<source>Handbook of phonological theory</source>
, ed.
<name>
<surname>Goldsmith</surname>
<given-names>J.</given-names>
</name>
, pp.
<fpage>723</fpage>
–61.
<publisher-name>Blackwell</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref17">
<name>
<surname>Evans</surname>
<given-names>N</given-names>
</name>
. (
<year>2004</year>
) Experiencer objects in Iwaidjan languages. In:
<source>Non-nominative subjects, vol. 1</source>
, ed.
<name>
<surname>Peri</surname>
<given-names>B.</given-names>
</name>
&
<name>
<surname>Karumuri Venkata</surname>
<given-names>S.</given-names>
</name>
, pp.
<fpage>169</fpage>
–92.
<publisher-name>John Benjamins</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" id="ref18">
<name>
<surname>Fisher</surname>
<given-names>S.</given-names>
</name>
&
<name>
<surname>Scharff</surname>
<given-names>C</given-names>
</name>
. (
<year>2009</year>
)
<article-title>FOXP2 as a molecular window into speech and language</article-title>
.
<source>Trends in Genetics</source>
<volume>25</volume>
(
<issue>4</issue>
):
<fpage>166</fpage>
–77.</citation>
</ref>
<ref>
<citation citation-type="journal" id="ref19">
<name>
<surname>Fitch</surname>
<given-names>W. T</given-names>
</name>
. (
<year>2006</year>
)
<article-title>The biology and evolution of music: A comparative perspective</article-title>
.
<source>Cognition</source>
<volume>100</volume>
(
<issue>1</issue>
):
<fpage>173</fpage>
<lpage>215</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref20">
<name>
<surname>Gentner</surname>
<given-names>D</given-names>
</name>
. (
<year>2003</year>
) Why we are so smart. In:
<source>Language in mind</source>
, ed.
<name>
<surname>Gentner</surname>
<given-names>D.</given-names>
</name>
&
<name>
<surname>Goldin-Meadow</surname>
<given-names>S.</given-names>
</name>
, pp.
<fpage>195</fpage>
<lpage>236</lpage>
.
<publisher-name>MIT Press</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref21">
<name>
<surname>Goddard</surname>
<given-names>C.</given-names>
</name>
&
<name>
<surname>Wierzbicka</surname>
<given-names>A.</given-names>
</name>
, eds. (
<year>2002</year>
)
<source>Meaning and universal grammar – theory and empirical findings. 2 volumes</source>
.
<publisher-name>John Benjamins</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref22">
<name>
<surname>Hale</surname>
<given-names>K. L</given-names>
</name>
. (
<year>1982</year>
) The logic of Damin kinship terminology. In:
<source>Languages of kinship in Aboriginal Australia</source>
, ed.
<name>
<surname>Heath</surname>
<given-names>J.</given-names>
</name>
,
<name>
<surname>Merlan</surname>
<given-names>F.</given-names>
</name>
&
<name>
<surname>Rumsey</surname>
<given-names>A.</given-names>
</name>
, pp.
<fpage>31</fpage>
<lpage>37</lpage>
.
<publisher-name>Oceania Linguistic Monographs</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" id="ref23">
<name>
<surname>Haun</surname>
<given-names>D.</given-names>
</name>
&
<name>
<surname>Call</surname>
<given-names>J</given-names>
</name>
. (
<year>2009</year>
)
<article-title>Great apes' capacities to recognize relational similarity</article-title>
.
<source>Cognition</source>
<volume>110</volume>
:
<fpage>147</fpage>
–59.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref24">
<name>
<surname>Haviland</surname>
<given-names>J. B</given-names>
</name>
. (
<year>1988</year>
) “It's my own invention: A comparative grammatical sketch of colonial Tzotzil” and grammatical annotations. In:
<source>The great Tzotzil dictionary of Santo Domingo Zinacantán, with grammatical analysis and historical commentary</source>
, ed.
<name>
<surname>Laughlin</surname>
<given-names>R. M.</given-names>
</name>
&
<name>
<surname>Haviland</surname>
<given-names>J. B.</given-names>
</name>
, pp.
<fpage>79</fpage>
<lpage>121</lpage>
. (
<italic>Smithsonian Contributions to Anthropology</italic>
, No. 31).
<publisher-name>Smithsonian Institution Press</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" id="ref25">
<name>
<surname>Haviland</surname>
<given-names>J. B</given-names>
</name>
. (
<year>1994</year>
)
<article-title>“Te xa setel xulem” [The buzzards were circling]: Categories of verbal roots in (Zinacantec) Tzotzil</article-title>
.
<source>Linguistics</source>
<volume>32</volume>
:
<fpage>691</fpage>
<lpage>741</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="other" id="ref26">
<name>
<surname>Haviland</surname>
<given-names>J. B</given-names>
</name>
. (submitted) “White-blossomed on bended knee”: Linguistic mediations of nature and culture. Book chapter for
<italic>Festschrift for Terry Kaufman</italic>
, ed.
<name>
<surname>Zavala</surname>
<given-names>R. M.</given-names>
</name>
&
<name>
<surname>Smith-Stark</surname>
<given-names>T.</given-names>
</name>
. Available at:
<uri xlink:href="http://anthro.ucsd.edu/~jhaviland/Publications/BLOSSOMEdit.pfd">http://anthro.ucsd.edu/~jhaviland/Publications/BLOSSOMEdit.pfd</uri>
.</citation>
</ref>
<ref>
<citation citation-type="journal" id="ref27">
<name>
<surname>Hawkins</surname>
<given-names>J. A</given-names>
</name>
. (
<year>1999</year>
)
<article-title>Processing complexity and filler-gap dependencies across grammars</article-title>
.
<source>Language</source>
<volume>75</volume>
:
<fpage>244</fpage>
–85.</citation>
</ref>
<ref>
<citation citation-type="journal" id="ref28">
<name>
<surname>Hewitt</surname>
<given-names>B. G</given-names>
</name>
. (
<year>1979</year>
)
<article-title>Aspects of verbal affixation in Abkhaz (Abžui dialect)</article-title>
.
<source>Transactions of the Philological Society</source>
<volume>77</volume>
(
<issue>1</issue>
):
<fpage>211</fpage>
–38.</citation>
</ref>
<ref>
<citation citation-type="journal" id="ref29">
<name>
<surname>Hockett</surname>
<given-names>C. F</given-names>
</name>
. (
<year>1960</year>
)
<article-title>The origin of speech</article-title>
.
<source>Scientific American</source>
<volume>203</volume>
:
<fpage>89</fpage>
<lpage>96</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref30">
<name>
<surname>Huang</surname>
<given-names>C.-R</given-names>
</name>
. (
<year>1993</year>
) Reverse long-distance dependency and functional uncertainty: The interpretation of Mandarin questions. In:
<source>Language, information, and computing</source>
, ed.
<name>
<surname>Lee</surname>
<given-names>C.</given-names>
</name>
&
<name>
<surname>Kang</surname>
<given-names>B. M.</given-names>
</name>
, pp.
<fpage>111</fpage>
–20.
<publisher-name>Thaehaksa</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" id="ref31">
<name>
<surname>Hyman</surname>
<given-names>L</given-names>
</name>
. (
<year>2008</year>
)
<article-title>Universals in phonology?</article-title>
<source>The Linguistic Review</source>
<volume>25</volume>
:
<fpage>83</fpage>
<lpage>187</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="journal" id="ref32">
<name>
<surname>Idsardi</surname>
<given-names>W. J</given-names>
</name>
. (
<year>2006</year>
)
<article-title>A simple proof that optimality theory is computationally intractable</article-title>
.
<source>Linguistic Inquiry</source>
<volume>37</volume>
:
<fpage>271</fpage>
–75.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref33">
<name>
<surname>Jacobsen</surname>
<given-names>W. H</given-names>
</name>
. (
<year>1979</year>
) Noun and verb in Nootkan. In:
<source>The Victoria Conference on northwestern languages, Victoria, British Columbia, November 4/5, 1976</source>
, ed.
<name>
<surname>Efrat</surname>
<given-names>B. S.</given-names>
</name>
, pp.
<fpage>83</fpage>
<lpage>155</lpage>
.
<publisher-name>British Columbia Provincial Museum</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref34">
<name>
<surname>Kluender</surname>
<given-names>R</given-names>
</name>
. (
<year>1992</year>
) Deriving island constraints from principles of predication. In:
<source>Island constraints</source>
, ed.
<name>
<surname>Goodluck</surname>
<given-names>H.</given-names>
</name>
&
<name>
<surname>Rochmont</surname>
<given-names>M.</given-names>
</name>
, pp.
<fpage>223</fpage>
–58.
<publisher-name>Kluwer</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref35">
<name>
<surname>Kluender</surname>
<given-names>R</given-names>
</name>
. (
<year>1998</year>
) On the distinction between strong and weak islands: A processing perspective. In:
<source>Syntax and semantics</source>
, ed.
<name>
<surname>Culicover</surname>
<given-names>P.</given-names>
</name>
&
<name>
<surname>McNally</surname>
<given-names>L.</given-names>
</name>
, pp.
<fpage>241</fpage>
–79.
<publisher-name>Academic Press</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="other" id="ref36">
<name>
<surname>Kratochvíl</surname>
<given-names>F</given-names>
</name>
. (
<year>2007</year>
) A grammar of Abui. Doctoral dissertation,
<publisher-name>Leiden University</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref37">
<name>
<surname>Kruspe</surname>
<given-names>N</given-names>
</name>
. (
<year>2004</year>
)
<source>A grammar of Semelai</source>
.
<publisher-name>Cambridge University Press</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" id="ref38">
<name>
<surname>Laland</surname>
<given-names>K. N.</given-names>
</name>
,
<name>
<surname>Odling-Smee</surname>
<given-names>J.</given-names>
</name>
&
<name>
<surname>Feldman</surname>
<given-names>M. W</given-names>
</name>
. (
<year>1999</year>
)
<article-title>Evolutionary consequences of niche construction and their implications for ecology</article-title>
.
<source>Proceedings of the National Academy of Sciences USA</source>
<volume>96</volume>
:
<fpage>10242</fpage>
–47.</citation>
</ref>
<ref>
<citation citation-type="journal" id="ref39">
<name>
<surname>Levinson</surname>
<given-names>S. C</given-names>
</name>
. (
<year>1987</year>
)
<article-title>Pragmatics and the grammar of anaphora</article-title>
.
<source>Journal of Linguistics</source>
<volume>23</volume>
:
<fpage>379</fpage>
<lpage>434</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref40">
<name>
<surname>Lieberman</surname>
<given-names>P</given-names>
</name>
. (
<year>2006</year>
)
<source>Toward an evolutionary biology of language</source>
.
<publisher-name>Belknap/Harvard</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" id="ref41">
<name>
<surname>Liljencrants</surname>
<given-names>J.</given-names>
</name>
&
<name>
<surname>Lindblom</surname>
<given-names>B</given-names>
</name>
. (
<year>1972</year>
)
<article-title>Numerical simulations of vowel quality systems: The role of perceptual contrast</article-title>
.
<source>Language</source>
<volume>48</volume>
:
<fpage>839</fpage>
–62.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref42">
<name>
<surname>McCarthy</surname>
<given-names>J. J</given-names>
</name>
. (
<year>2002</year>
)
<source>A thematic guide to optimality theory (Research Surveys in Linguistics)</source>
.
<publisher-name>Cambridge University Press</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="thesis" id="ref43">
<name>
<surname>Molotsi</surname>
<given-names>K. J</given-names>
</name>
. (
<year>1993</year>
) The characteristics of Southern Sotho ideophones. Master's thesis,
<publisher-name>University of Stellenbosch</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref44">
<name>
<surname>Müller</surname>
<given-names>R.-A</given-names>
</name>
. (
<year>2009</year>
) Language universals in the brain: How linguistic are they? In:
<source>Language universals</source>
, ed.
<name>
<surname>Christiansen</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Collins</surname>
<given-names>C.</given-names>
</name>
&
<name>
<surname>Edelman</surname>
<given-names>S.</given-names>
</name>
, pp.
<fpage>224</fpage>
–52.
<publisher-name>Oxford University Press</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="other" id="ref45">
<name>
<surname>Mycock</surname>
<given-names>L</given-names>
</name>
. (
<year>2006</year>
) The typology of constituent questions: A lexical-functional grammar analysis of “Wh”-questions. Unpublished Dissertation.
<publisher-name>University of Manchester</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" id="ref46">
<name>
<surname>Papassotiropoulos</surname>
<given-names>A.</given-names>
</name>
,
<name>
<surname>Stephan</surname>
<given-names>D. A.</given-names>
</name>
,
<name>
<surname>Huentelman</surname>
<given-names>M. J.</given-names>
</name>
,
<name>
<surname>Hoerndli</surname>
<given-names>F. J.</given-names>
</name>
,
<name>
<surname>Craig</surname>
<given-names>D. W.</given-names>
</name>
,
<name>
<surname>Pearson</surname>
<given-names>J. V.</given-names>
</name>
,
<name>
<surname>Huynh</surname>
<given-names>K.-D.</given-names>
</name>
,
<name>
<surname>Brunner</surname>
<given-names>F.</given-names>
</name>
,
<name>
<surname>Corneveaux</surname>
<given-names>J.</given-names>
</name>
,
<name>
<surname>Osborne</surname>
<given-names>D.</given-names>
</name>
,
<name>
<surname>Wollmer</surname>
<given-names>M. A.</given-names>
</name>
,
<name>
<surname>Aerni</surname>
<given-names>A.</given-names>
</name>
,
<name>
<surname>Coluccia</surname>
<given-names>D.</given-names>
</name>
,
<name>
<surname>Hänggi</surname>
<given-names>J.</given-names>
</name>
,
<name>
<surname>Mondadori</surname>
<given-names>C. R. A.</given-names>
</name>
,
<name>
<surname>Buchmann</surname>
<given-names>A.</given-names>
</name>
,
<name>
<surname>Reiman</surname>
<given-names>E. M.</given-names>
</name>
,
<name>
<surname>Caselli</surname>
<given-names>R. J.</given-names>
</name>
,
<name>
<surname>Henke</surname>
<given-names>K.</given-names>
</name>
&
<name>
<surname>de Quervain</surname>
<given-names>D. J.-F</given-names>
</name>
. (
<year>2006</year>
)
<article-title>Common Kibra alleles are associated with human memory performance</article-title>
.
<source>Science</source>
<volume>314</volume>
(
<issue>5798</issue>
):
<fpage>475</fpage>
–78.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref47">
<name>
<surname>Pawley</surname>
<given-names>A.</given-names>
</name>
,
<name>
<surname>Gi</surname>
<given-names>S. P.</given-names>
</name>
,
<name>
<surname>Majnep</surname>
<given-names>I. S.</given-names>
</name>
&
<name>
<surname>Kias</surname>
<given-names>J</given-names>
</name>
. (
<year>2000</year>
) Hunger acts on me: The grammar and semantics of bodily and mental process expressions in Kalam. In:
<source>Grammatical analysis: Morphology, syntax and semantics: Studies in honor of Stan Starosta</source>
, ed.
<name>
<surname>De Guzman</surname>
<given-names>V. P.</given-names>
</name>
&
<name>
<surname>Bender</surname>
<given-names>B. W.</given-names>
</name>
, pp.
<fpage>153</fpage>
–85.
<publisher-name>University of Hawaii Press</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref48">
<name>
<surname>Ramamurti</surname>
<given-names>G. V</given-names>
</name>
. (
<year>1931</year>
)
<source>A manual of the So:ra (or Savara) language</source>
.
<publisher-name>Government Press</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" id="ref49">
<name>
<surname>Reuer</surname>
<given-names>V</given-names>
</name>
. (
<year>2004</year>
)
<article-title>Book review of Falk, Yehuda N.,
<italic>Lexical-functional grammar – an introduction to parallel constraint-based syntax. Lecture Notes No. 126 (CSLI-LN).</italic>
Center for the Study of Language and Information, Stanford, 2001, xv+237 pages</article-title>
.
<source>Machine Translation</source>
<volume>18.4</volume>
:
<fpage>359</fpage>
–64.</citation>
</ref>
<ref>
<citation citation-type="journal" id="ref50">
<name>
<surname>Sag</surname>
<given-names>I.</given-names>
</name>
,
<name>
<surname>Hofmeister</surname>
<given-names>P.</given-names>
</name>
&
<name>
<surname>Snider</surname>
<given-names>N</given-names>
</name>
. (
<year>2007</year>
)
<article-title>Processing complexity in subjacency violations: The complex noun phrase constraint</article-title>
.
<source>Chicago Linguistics Society</source>
<volume>43</volume>
(
<issue>1</issue>
):
<fpage>219</fpage>
–29.</citation>
</ref>
<ref>
<citation citation-type="thesis" id="ref51">
<name>
<surname>Sandler</surname>
<given-names>W.</given-names>
</name>
,
<name>
<surname>Aronoff</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Meir</surname>
<given-names>I.</given-names>
</name>
&
<name>
<surname>Padden</surname>
<given-names>C</given-names>
</name>
. (
<year>2009</year>
) The gradual emergence of phonological form in a new language. Unpublished manuscript,
<publisher-name>University of Haifa, State University of New York at Stony Brook, and University of California</publisher-name>
,
<publisher-loc>San Diego</publisher-loc>
.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref52">
<name>
<surname>Sandler</surname>
<given-names>W.</given-names>
</name>
&
<name>
<surname>Lillo-Martin</surname>
<given-names>D. C</given-names>
</name>
. (
<year>2006</year>
)
<source>Sign language and linguistic universals</source>
.
<publisher-name>Cambridge University Press</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref53">
<name>
<surname>Schachter</surname>
<given-names>P</given-names>
</name>
. (
<year>1985</year>
) Parts-of-speech systems. In:
<source>Language typology and syntactic description, vol. 1: Clause structure</source>
, ed.
<name>
<surname>Shopen</surname>
<given-names>T.</given-names>
</name>
, pp.
<fpage>3</fpage>
<lpage>61</lpage>
.
<publisher-name>Cambridge University Press</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" id="ref54">
<name>
<surname>Stivers</surname>
<given-names>T.</given-names>
</name>
,
<name>
<surname>Enfield</surname>
<given-names>N.</given-names>
</name>
,
<name>
<surname>Brown</surname>
<given-names>P.</given-names>
</name>
,
<name>
<surname>Englert</surname>
<given-names>C.</given-names>
</name>
,
<name>
<surname>Hayashi</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Heinemann</surname>
<given-names>T.</given-names>
</name>
,
<name>
<surname>Hoymann</surname>
<given-names>G.</given-names>
</name>
,
<name>
<surname>Rossano</surname>
<given-names>F.</given-names>
</name>
,
<name>
<surname>de Ruiter</surname>
<given-names>J. P.</given-names>
</name>
,
<name>
<surname>Yoon</surname>
<given-names>K.-E.</given-names>
</name>
&
<name>
<surname>Levinson</surname>
<given-names>S. C</given-names>
</name>
. (
<year>2009</year>
)
<article-title>Universals and cultural variation in turn taking in conversation</article-title>
.
<source>Proceedings of the National Academy of Sciences USA</source>
<volume>106</volume>
(
<issue>26</issue>
):
<fpage>10587</fpage>
–92.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref55">
<name>
<surname>Van Valin</surname>
<given-names>R. D</given-names>
</name>
. (
<year>1998</year>
) The acquisition of WH-questions and the mechanisms of language acquisition. In:
<source>The new psychology of language: Cognitive and functional approaches to language structure</source>
, ed.
<name>
<surname>Tomasello</surname>
<given-names>M.</given-names>
</name>
, pp.
<fpage>221</fpage>
–49.
<publisher-name>Erlbaum</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" id="ref56">
<name>
<surname>West-Eberhard</surname>
<given-names>M. J</given-names>
</name>
. (
<year>2003</year>
)
<source>Developmental plasticity and evolution</source>
.
<publisher-name>Oxford University Press</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" id="ref57">
<name>
<surname>Zuidema</surname>
<given-names>W.</given-names>
</name>
&
<name>
<surname>De Boer</surname>
<given-names>B</given-names>
</name>
. (
<year>2009</year>
)
<article-title>The evolution of combinatorial phonology</article-title>
.
<source>Journal of Phonetics</source>
<volume>37</volume>
(
<issue>2</issue>
):
<fpage>125</fpage>
–44.</citation>
</ref>
</ref-list>
</back>
</article>
</istex:document>
</istex:metadataXml>
<mods version="3.6">
<titleInfo>
<title>With diversity in mind: Freeing the language sciences from Universal Grammar</title>
</titleInfo>
<titleInfo type="alternative">
<title>Response/Evans & Levinson: The myth of language universals</title>
</titleInfo>
<titleInfo type="alternative" contentType="CDATA">
<title>With diversity in mind: Freeing the language sciences from Universal Grammar</title>
</titleInfo>
<name type="personal">
<namePart type="given">Nicholas</namePart>
<namePart type="family">Evans</namePart>
<affiliation>Department of Linguistics, Research School of Asian and Pacific Studies, Australian National University, ACT 0200, Australia nicholas.evans@anu.edu.au http://rspas.anu.edu.au/people/personal/evann_ling.php</affiliation>
<affiliation>E-mail: nicholas.evans@anu.edu.au</affiliation>
<role>
<roleTerm type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Stephen C.</namePart>
<namePart type="family">Levinson</namePart>
<affiliation>Max Planck Institute for Psycholinguistics, Wundtlaan 1, NL-6525 XD Nijmegen, The Netherlands; and Radboud University, The Netherlands. stephen.levinson@mpi.nl http://www.mpi.nl/Members/StephenLevinson</affiliation>
<affiliation>E-mail: stephen.levinson@mpi.nl</affiliation>
<role>
<roleTerm type="text">author</roleTerm>
</role>
</name>
<typeOfResource>text</typeOfResource>
<genre type="research-article" displayLabel="research-article" authority="ISTEX" authorityURI="https://content-type.data.istex.fr" valueURI="https://content-type.data.istex.fr/ark:/67375/XTP-1JC4F85T-7">research-article</genre>
<originInfo>
<publisher>Cambridge University Press</publisher>
<place>
<placeTerm type="text">New York, USA</placeTerm>
</place>
<dateIssued encoding="w3cdtf">2009-10</dateIssued>
<copyrightDate encoding="w3cdtf">2009</copyrightDate>
</originInfo>
<language>
<languageTerm type="code" authority="iso639-2b">eng</languageTerm>
<languageTerm type="code" authority="rfc3066">en</languageTerm>
</language>
<abstract type="normal">Our response takes advantage of the wide-ranging commentary to clarify some aspects of our original proposal and augment others. We argue against the generative critics of our coevolutionary program for the language sciences, defend the use of close-to-surface models as minimizing cross-linguistic data distortion, and stress the growing role of stochastic simulations in making generalized historical accounts testable. These methods lead the search for general principles away from idealized representations and towards selective processes. Putting cultural evolution central in understanding language diversity makes learning fundamental in the cognition of language: increasingly powerful models of general learning, paired with channelled caregiver input, seem set to manage language acquisition without recourse to any innate “universal grammar.” Understanding why human language has no clear parallels in the animal world requires a cross-species perspective: crucial ingredients are vocal learning (for which there are clear non-primate parallels) and an intention-attributing cognitive infrastructure that provides a universal base for language evolution. We conclude by situating linguistic diversity within a broader trend towards understanding human cognition through the study of variation in, for example, human genetics, neurocognition, and psycholinguistic processing.</abstract>
<relatedItem type="host">
<titleInfo>
<title>Behavioral and Brain Sciences</title>
</titleInfo>
<titleInfo type="abbreviated">
<title>Behav Brain Sci</title>
</titleInfo>
<genre type="journal" authority="ISTEX" authorityURI="https://publication-type.data.istex.fr" valueURI="https://publication-type.data.istex.fr/ark:/67375/JMC-0GLKJH51-B">journal</genre>
<identifier type="ISSN">0140-525X</identifier>
<identifier type="eISSN">1469-1825</identifier>
<identifier type="PublisherID">BBS</identifier>
<part>
<date>2009</date>
<detail type="volume">
<caption>vol.</caption>
<number>32</number>
</detail>
<detail type="issue">
<caption>no.</caption>
<number>5</number>
</detail>
<extent unit="pages">
<start>472</start>
<end>492</end>
<total>23</total>
</extent>
</part>
</relatedItem>
<relatedItem type="reviewOf">
<identifier type="commentary-article"></identifier>
<part>
<detail type="volume">
<caption>vol.</caption>
<number>32</number>
</detail>
<extent unit="pages">
<start>429</start>
</extent>
</part>
</relatedItem>
<identifier type="istex">2B00B7FA5F889F68E543BBA70D94C693F32021E4</identifier>
<identifier type="ark">ark:/67375/6GQ-458MCHXL-T</identifier>
<identifier type="DOI">10.1017/S0140525X09990525</identifier>
<identifier type="PII">S0140525X09990525</identifier>
<identifier type="ArticleID">99052</identifier>
<identifier type="related-article-ID">S0140525X0999094X</identifier>
<accessCondition type="use and reproduction" contentType="copyright">Copyright © Cambridge University Press 2009</accessCondition>
<recordInfo>
<recordContentSource authority="ISTEX" authorityURI="https://loaded-corpus.data.istex.fr" valueURI="https://loaded-corpus.data.istex.fr/ark:/67375/XBH-G3RCRD03-V">cambridge</recordContentSource>
<recordOrigin>Copyright © Cambridge University Press 2009</recordOrigin>
</recordInfo>
</mods>
<json:item>
<extension>json</extension>
<original>false</original>
<mimetype>application/json</mimetype>
<uri>https://api.istex.fr/document/2B00B7FA5F889F68E543BBA70D94C693F32021E4/metadata/json</uri>
</json:item>
</metadata>
<annexes>
<json:item>
<extension>gif</extension>
<original>true</original>
<mimetype>image/gif</mimetype>
<uri>https://api.istex.fr/document/2B00B7FA5F889F68E543BBA70D94C693F32021E4/annexes/gif</uri>
</json:item>
</annexes>
<serie></serie>
</istex>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Musique/explor/MusiqueCeltiqueV1/Data/Istex/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001B16 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Istex/Corpus/biblio.hfd -nk 001B16 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Wicri/Musique
   |area=    MusiqueCeltiqueV1
   |flux=    Istex
   |étape=   Corpus
   |type=    RBID
   |clé=     ISTEX:2B00B7FA5F889F68E543BBA70D94C693F32021E4
   |texte=   With diversity in mind: Freeing the language sciences from Universal Grammar
}}

Wicri

This area was generated with Dilib version V0.6.38.
Data generation: Sat May 29 22:04:25 2021. Site generation: Sat May 29 22:08:31 2021