How Language Evolves

Friday, February 20, 2015

Abstracts

The emergence of language has been called the most recent major transition in the evolution of life on earth. It gives our species the remarkable ability to accurately communicate entirely novel meanings using sequences of behaviors that are themselves entirely novel. Language enables this feat by virtue of a small set of structural design features that are rare or absent elsewhere in nature. How can we study the evolution of these design features of human language, and hence understand our species-defining characteristic? In this talk, I will show how recent research in evolutionary linguistics has turned to the experimental laboratory to answer this question. By realizing that cultural as well as biological evolution has a central role to play in the origins of language, we have unlocked a method that allows us to observe the evolutionary emergence of language structure in miniature cultures that we create in the lab.
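
To make the idea concrete, the sketch below simulates a miniature transmission chain of the kind such iterated-learning experiments are built around: each simulated learner acquires a meaning-to-signal mapping from a limited sample of the previous generation's output and then produces the language that the next learner sees. This is a minimal illustration only, not the software used in this research; the four-meaning world, the four-letter signals, and the learner's built-in bias to split each signal into a shape part and a color part are all invented for the example.

    import random
    from collections import Counter

    random.seed(1)

    SHAPES = ["circle", "square"]
    COLORS = ["red", "blue"]
    MEANINGS = [(s, c) for s in SHAPES for c in COLORS]

    def random_signal():
        """An arbitrary four-letter string, standing in for a holistic signal."""
        return "".join(random.choice("abcdefg") for _ in range(4))

    def learn(observed):
        """A learner with a built-in (assumed) compositional bias: credit the
        first half of each signal to the shape, the second half to the color."""
        shape_votes, color_votes = {}, {}
        for (shape, color), signal in observed.items():
            shape_votes.setdefault(shape, Counter())[signal[:2]] += 1
            color_votes.setdefault(color, Counter())[signal[2:]] += 1
        return shape_votes, color_votes

    def produce(shape_votes, color_votes, meaning):
        """Compose a signal from learned parts, improvising any unseen part."""
        shape, color = meaning
        first = (shape_votes[shape].most_common(1)[0][0]
                 if shape in shape_votes else random_signal()[:2])
        second = (color_votes[color].most_common(1)[0][0]
                  if color in color_votes else random_signal()[2:])
        return first + second

    # Generation 0: an unstructured, holistic language.
    language = {m: random_signal() for m in MEANINGS}

    for generation in range(10):
        # Transmission bottleneck: each learner sees only 3 of the 4 meanings.
        seen = dict(random.sample(sorted(language.items()), 3))
        shape_votes, color_votes = learn(seen)
        language = {m: produce(shape_votes, color_votes, m) for m in MEANINGS}

    print(language)

After a few generations, the bottleneck and the learner's bias push the initially arbitrary signals toward a compositional system in which each shape and each color contributes a recurring part: structure emerging from cultural transmission in miniature.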

Contact languages represent some of the ways that new languages can be created, as they systematically combine elements from more than one existing language, resulting in novel linguistic systems. When multiple sources provide input to a rapidly emerging new system, elements are likely to be reanalyzed, and new structural categories may be created that differ from those in the source languages. For example, Light Warlpiri, a newly emerged mixed language spoken in northern Australia, combines Warlpiri nominal structure with verbal structure from varieties of English and/or Kriol (an English-lexified creole), but with a radically restructured verbal auxiliary system. In this talk, I will give examples of restructuring in contact languages, including Light Warlpiri.

The emergence of a new sign language in Nicaragua provides a real-world opportunity to discover the relationship between individual language development and language creation. Nicaraguan Sign Language (NSL) is a young, urban sign language that emerged from within a community of deaf children initially brought together in an educational setting in the 1970s. Members of different age cohorts today represent a living “fossil record” of the language as it developed over the following four decades. In this talk, I will trace the development of basic sentence structure and vocabulary in NSL, in order to uncover the effect of language acquisition processes on language emergence and convergence across age cohorts. Evolutionary principles must apply not only to the development of humans as language learners, but also to the development of languages as systems that change and adapt over generations.

About 20 new and young sign languages from around the world have been reported in the research literature. They have drawn interest because they provide a unique opportunity, not available in spoken languages, to study the spontaneous emergence of language within one to three generations. My research lab studies sign languages ranging from those that are new to those that are more established, with records of use dating back 200 or more years and primary users now numbering in the hundreds of thousands. In this talk, I focus on the emergence of words and lexical categories in new sign languages. Using naming experiments with groups of non-signing gesturers and signers of new languages, we show that all groups consistently distinguish between names and actions across semantic categories such as tools, natural objects, and animates, and that these preferences in gesture become amplified and differentiated in new sign languages and then, more prominently, in established ones. We show that emerging lexical distinctions are both cognitive and communicative in nature. They constitute categories common across languages because they reflect the shared ways that humans interact with the world, involving self, other, and mediating tools. Our goal is to explain the fundamental expressive capabilities of humans that become realized in the myriad different languages of the world.

In human languages, spoken and signed, words or signs are products of combinatorial systems that combine meaningless smaller units in different ways to yield different words or signs with different meanings. In spoken languages those smaller units are the sounds of speech (phonemes). In sign languages, they are handshapes, movements, and the places on the body where signs are made. The question is how the combinatorial systems that combine them evolve.
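
The arithmetic behind this design feature is easy to demonstrate. The following sketch (with an invented toy inventory, not the actual feature set of any sign language) shows how a handful of meaningless units yields a much larger space of possible signs:

    from itertools import product

    # Toy inventories; real sign languages have larger, language-specific sets.
    handshapes = ["flat-B", "fist-A", "index-1", "curved-C"]
    movements = ["tap", "circle", "arc", "twist"]
    locations = ["forehead", "chin", "chest", "neutral-space"]

    # A sign is a combination of meaningless units; different combinations
    # yield different signs that can carry different meanings.
    possible_signs = list(product(handshapes, movements, locations))
    print(len(possible_signs))  # 4 * 4 * 4 = 64 forms from just 12 units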

Word-internal combinatoriality evolved in spoken languages too long ago to be traced, but in sign languages that evolution is much more recent. We argue that signs originated as holistic gestures and show how the life cycle of one sign in American Sign Language (ASL) mirrors the evolution from iconic gestures to products of a combinatorial system, tracing the intervening stages in some detail.

We show that grouping the smaller units into classes based on similarities in how they are produced limits how much signs can vary from one signer to another. Formerly iconic gestures have evolved into signs governed by formal constraints that can obliterate a sign’s original iconic basis.

We briefly present evidence that chimpanzees exposed to ASL for years learned only a small number of holistic gestures, not the combinatorial system learned by signers of ASL. This is explained if the combinatorial abilities needed to learn the vocabulary of a human language evolved in humans after the human and chimpanzee lineages diverged.

We conclude by suggesting that the evolutionary path proposed here for signs of iconic origin could provide an appropriate working model of the parallel evolution of non-iconic signs in sign languages and of the spoken words of spoken languages.

A language can be thought of as a mapping between sound (or, in sign languages, gesture) and meaning (or concepts). In a developed language such as English, this mapping makes use of syntactic structure, in which words are categorized by part of speech and combined into phrases such as Noun Phrases. Furthermore, a phrase can be made up of smaller phrases, so the structure can be highly hierarchical.
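
As an illustration of this hierarchy (a toy representation with invented labels, not a formal proposal), phrase structure is naturally expressed as a recursive data type in which a phrase's children may themselves be phrases:

    from dataclasses import dataclass

    @dataclass
    class Word:
        pos: str    # part of speech, e.g. "Det", "N", "V"
        form: str

    @dataclass
    class Phrase:
        label: str      # phrase category, e.g. "NP", "VP", "S"
        children: list  # Words and/or Phrases: phrases nest inside phrases

    # "the dog chased the cat": Noun Phrases inside a Verb Phrase inside a sentence.
    np1 = Phrase("NP", [Word("Det", "the"), Word("N", "dog")])
    np2 = Phrase("NP", [Word("Det", "the"), Word("N", "cat")])
    s = Phrase("S", [np1, Phrase("VP", [Word("V", "chased"), np2])])

    def terminals(node):
        """Flatten the hierarchy back into the spoken word string."""
        if isinstance(node, Word):
            return [node.form]
        return [w for child in node.children for w in terminals(child)]

    print(" ".join(terminals(s)))  # the dog chased the cat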

This talk will explore forms of language with a much more limited organization: linear grammar. Such languages (largely) lack the familiar manifestations of syntactic structure, but they still manage to map between sound and meaning. Languages with linear grammar include early stages of child language, stages in adults’ acquisition of second languages, pidgins, “home signs” (the sign systems invented by deaf children with no sign language input), and “village signs” used in isolated communities with hereditary deafness. Linear grammars are also sufficient to describe (most aspects of) some “full” languages such as Riau Indonesian and Pirahã. Moreover, it appears that linear grammar is utilized by speakers of “developed” languages, mostly below the radar, but revealing itself under conditions of stress or brain damage. Finally, linear grammar is a plausible stepping-stone in the evolution of the language faculty – an intermediate stage between primate call systems and modern human language.
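
By contrast with the hierarchical sketch above, a linear grammar can be pictured as interpretation by word order and semantic heuristics alone, with no nested constituents. The toy interpreter below (a minimal sketch of the general idea, with an invented lexicon and rules, not a formalization from the talk) assigns the agent role to the first entity named and the patient role to an entity following the action word:

    # A linear grammar maps form to meaning from word order alone: no nested
    # phrases, just position-based heuristics such as "the first participant
    # named is the agent". Lexicon and rules invented for illustration.
    LEXICON = {
        "dog": "entity", "cat": "entity",
        "chase": "action", "eat": "action",
    }

    def interpret(utterance):
        """Assign semantic roles by linear position."""
        roles = {}
        for word in utterance.split():
            kind = LEXICON.get(word)
            if kind == "action" and "action" not in roles:
                roles["action"] = word
            elif kind == "entity":
                # Entities before the action word are agents, after it patients.
                key = "patient" if "action" in roles else "agent"
                roles.setdefault(key, word)
        return roles

    print(interpret("dog chase cat"))
    # {'agent': 'dog', 'action': 'chase', 'patient': 'cat'}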


The human language system evolved against the backdrop of other, evolutionarily older systems. How does the language system fit with the rest of our mind and brain? Does it rely on specialized mechanisms, or does it instead make use of machinery that we use to perform other complex tasks? Using data from brain imaging investigations and studies of patients with brain damage, I will argue that a set of brain regions in the adult human brain is specialized for high-level language processing. When probed with functional MRI, these regions – in the frontal and temporal lobes of the left hemisphere – respond robustly during language comprehension and production, but show little or no response when we engage in arithmetic processing, hold information in working memory, inhibit irrelevant information, listen to music, or perceive meaningful non-linguistic representations. Consistent with these functionally selective responses, damage to the language system leaves most non-linguistic cognitive abilities largely intact. I argue that this fronto-temporal network emerges over the course of development as we acquire language knowledge. In the adult brain, this system stores our linguistic knowledge representations and uses these representations to interpret and generate new utterances. Ongoing work aims to characterize i) the precise computations that the regions of the language system perform, as well as ii) this system’s interactions with other large-scale brain networks, needed to achieve uniquely human cognition.
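
The localization logic can be sketched in a few lines (toy numbers and a simplified threshold only; actual analyses involve full GLMs, subject-specific anatomy, and corrections for multiple comparisons): a voxel counts as language-responsive when its response to sentences reliably exceeds its response to a control condition such as lists of nonwords.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Toy per-voxel response estimates over 20 runs: 100 voxels, of which
    # the first 10 are built to respond more strongly to sentences.
    n_runs, n_voxels = 20, 100
    sentences = rng.normal(0.0, 1.0, (n_runs, n_voxels))
    sentences[:, :10] += 1.5
    nonwords = rng.normal(0.0, 1.0, (n_runs, n_voxels))

    # Language-responsive voxels: sentences > nonword lists, paired t-test.
    t, p = stats.ttest_rel(sentences, nonwords, axis=0)
    language_voxels = np.flatnonzero((t > 0) & (p < 0.01))
    print(language_voxels)  # mostly indices 0-9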

Sign languages are similar to spoken languages in fundamental ways: their linguistic structure is similar, they are produced and comprehended in similar ways, and the language regions of the brain’s left hemisphere are responsible for language in both modalities. However, in a series of studies we have found that these statements hold true only if the child experiences language in the environment from birth. Because deaf children cannot hear the language spoken around them, and cannot see sign language when it is absent from the environment, they often experience language for the first time well past infancy, at ages when their hearing peers have already mastered language.

Using this unique situation, we have discovered that linguistic stimulation during early life is necessary for the human language capacity to develop fully. The longer the child matures without language, the more atypical linguistic functioning and brain language processing become in adulthood. Thus, the universal human ability to learn language and the ability of the traditional language regions of the brain to process language crucially depend upon the timing of linguistic experience in early human development.


A unique and defining trait of human behavior is our ability to communicate through speech. Our laboratory is interested in determining the basic mechanisms that underlie our ability to perceive and produce speech. While much of this processing has been localized to the perisylvian cortex, including Broca’s and Wernicke’s areas, the fundamental organizational principles of the neural circuits within these areas are completely unknown.

To address this, our laboratory applies a variety of experimental approaches to examine both local circuitry and global network dynamics spanning multiple cortical and sub-cortical regions with unparalleled spatial and temporal resolution in humans.

Our central goal is to provide a mechanistic account of the major properties of speech behavior in normal speakers and in those with language disorders. Our ongoing research is not only deepening our understanding of speech and its disorders but is also leading directly to safer mapping methods for preserving language function during brain surgery.