This article appeared in the June 2001 issue of Scientific American and was reprinted in 2002.

 

SIGN LANGUAGE in the BRAIN

by Gregory Hickok, Ursula Bellugi and Edward S. Klima

How does the human brain process language?
New studies of deaf signers hint at an answer

ONE OF THE GREAT MYSTERIES of the human brain is how it understands and produces language. Until recently, most of the research on this subject had been based on the study of spoken languages: English, French, German and the like. Starting in the mid-19th century, scientists made large strides in identifying the regions of the brain involved in speech. For example, in 1861 French neurologist Paul Broca discovered that patients who could understand spoken language but had difficulty speaking tended to have damage to a part of the brain's left hemisphere that became known as Broca's area. And in 1874 German physician Carl Wernicke found that patients with fluent speech but severe comprehension problems typically had damage to another part of the left hemisphere, which was dubbed Wernicke's area.

Similar damage to the brain's right hemisphere only very rarely results in such language disruptions, which are called aphasias. Instead right hemisphere damage is more often associated with severe visual-spatial problems, such as the inability to copy a simple line drawing. For these reasons, the left hemisphere is often branded the verbal hemisphere and the right hemisphere the spatial hemisphere. Although this dichotomy is an oversimplification, it does capture some of the main clinical differences between individuals with damage to the left side of the brain and those with damage to the right. But many puzzles remain. One that has been particularly hard to crack is why language sets up shop where it does. The locations of Wernicke's and Broca's areas seem to make sense: Wernicke's area, involved in speech comprehension, is located near the auditory cortex, the part of the brain that receives signals from the ears. Broca's area, involved in speech production, is located next to the part of the motor cortex that controls the muscles of the mouth and lips [see illustration in article]. But is the brain's organization for language truly based on the functions of hearing and speaking?

One way to explore this question is to study a language that uses different sensory and motor channels. Reading and writing, of course, employ vision for comprehension and hand movements for expression, but for most people these activities depend, at least in part, on brain systems involved in the use of a spoken language. The sign languages of the deaf, however, precisely fit the bill. Over the past two decades, we have examined groups of deaf signers who have suffered damage to either the right or the left hemisphere of their brains, mostly as a result of strokes. By evaluating their proficiency at understanding and producing signs, we set out to determine whether the brain regions that interpret and generate sign language are the same ones involved in spoken language. The surprising results have illuminated the workings of the human brain and may help neurologists treat the ills of their deaf patients.

The Signs of Language

MANY PEOPLE MISTAKENLY BELIEVE that sign language is just a loose collection of pantomime-like gestures thrown together willy-nilly to allow rudimentary communication. But in truth, sign languages are highly structured linguistic systems with all the grammatical complexity of spoken languages. Just as English and Italian have elaborate rules for forming words and sentences, sign languages have rules for individual signs and signed sentences. Contrary to another common misconception, there is no universal sign language. Deaf people in different countries use very different sign languages. In fact, a deaf signer who acquires a second sign language as an adult will actually sign with a foreign accent! Moreover, sign languages are not simply manual versions of the spoken languages that are used in their surrounding communities. American Sign Language (ASL) and British Sign Language, for example, are mutually incomprehensible.

Sign and spoken languages share the abstract properties of language but differ radically in their outward form. Spoken languages are encoded in acoustic-temporal changes -- variations in sound over time. Sign languages, however, rely on visual-spatial changes to signal linguistic contrasts [see box in article]. How does this difference in form affect the neural organization of language? One might hypothesize that sign language would be supported by systems in the brain's right hemisphere because signs are visual-spatial signals. Accordingly, one could contend that the sign-language analogue of Wernicke's area in deaf signers would be near the brain regions associated with visual processing and that the analogue of Broca's area would be near the motor cortex controlling hand and arm movements. When we began to test this hypothesis in the 1980s, two fundamental questions needed to be answered: Did deaf signers with brain damage have sign-language deficits? And if so, did the deficits resemble either Wernicke's aphasia (comprehension problems and error-prone speech) or Broca's aphasia (good comprehension but difficulty in producing fluent speech)? The answer to both questions was a resounding yes. One of the first patients studied by our group signed fluently, using all the proper grammatical markers of ASL, but the message conveyed by his signing was often incoherent. An English gloss of one of his utterances reads:

And there's one (way down at the end) [unintelligible].
The man walked over to see the (disconnected), an extension of the (earth) room.
It's there for the man (can live) a roof and light with shades to (keep pulling down).

The patient's disorganized signing and apparent lack of comprehension of others' signs were very similar to the symptoms of hearing patients with Wernicke's aphasia. Another deaf patient we studied early in the research program had extreme difficulty producing signs. She had to struggle to shape and orient her hands to perform the proper movement for virtually every sign she attempted. Most of her utterances were limited to isolated signs. This was not merely a motor control problem: when asked to copy line drawings of objects such as an elephant or a flower, she did so accurately. Also, in contrast to her severe sign-language production problems, her comprehension of sign language was excellent. This profile of language abilities parallels the symptoms of Broca's aphasia.

But where was the brain damage that caused these sign aphasias? The answer was surprising. Both patients had lesions in their left hemispheres. And the lesions were located just about where you'd expect to find them in hearing patients with similar problems. The deaf signer with comprehension difficulties had damage that included Wernicke's area, whereas the patient who had trouble making signs had damage that involved Broca's area.

These observations showed that the left hemisphere plays a crucial role in supporting sign language. But what about the right hemisphere? One would think that damage to the right hemisphere, which appears to be critically involved in many visual-spatial functions, would have a devastating effect on sign-language ability as well. But this assumption is apparently wrong. Signers with damage to the right hemisphere were fluent and accurate in their production of signs, used normal grammar and comprehended signs with ease. This held true even in patients whose nonlinguistic visual-spatial abilities had been severely compromised by their brain damage. One signer with damage to the right hemisphere, for example, could not create or copy recognizable drawings and failed to notice objects in the left part of his visual field (a condition known as hemispatial neglect). Yet he could communicate very efficiently in sign language.

Subsequent research using larger groups of deaf signers confirmed the early case studies. A study published by our team in 1996 compared the sign-language abilities of 13 left hemisphere-damaged (LHD) signers with those of 10 right hemisphere-damaged (RHD) signers. As a group, the LHD signers performed poorly across a wide range of sign-language measures: They had trouble comprehending isolated signs and signed sentences and were likely to have problems with fluency as well. They also had difficulty with picture-naming tasks and frequently made paraphasic errors -- slips of the hand -- in which they inadvertently substituted one sign for another or one component of a sign, such as hand shape, for another. In contrast, the RHD signers performed well on all these tasks. The study also showed that difficulties with sign-language fluency were not caused by more general problems in controlling voluntary hand or arm movements: patients who had trouble making signs were often capable of producing nonmeaningful hand and arm gestures.

We obtained similar results in another study, this one focusing on sign-language comprehension in 19 lifelong signers with brain lesions, 11 with damage to the left hemisphere and eight with damage to the right. The LHD group performed significantly worse than the RHD group on three tests that evaluated their understanding of single signs, simple sentences and complex sentences. The most impaired signers were those with damage to the brain's left temporal lobe, where Wernicke's area is located.

Taken together, these findings suggest that the brain's left hemisphere is dominant for sign language, just as it is for speech. The organization of the brain for language does not appear to be particularly affected by the way in which language is perceived and produced.

The Story Gets Complicated

AS WE NOTED at the beginning of this article, the assumed left-right dichotomy of the brain -- with verbal abilities concentrated in the left hemisphere and visual-spatial abilities clustered in the right -- is an oversimplification. Research over the past few decades has shown that most cognitive abilities can be divided into multiple processing steps. At some levels, brain activity may be lateralized (taking place primarily in one hemisphere), whereas at others the activity may be bilateral (occurring in both).

Language ability, for instance, has many components. A hearing person must be able to perceive and produce individual speech sounds and the words they make up; otherwise, one could not distinguish "cup" from "pup." In addition, one must be able to recognize morphological additions ("walking" vs. "walked"), syntactic constructions ("the dog chased the cat" vs. "the dog was chased by the cat"), and melodic intonations ("the White House" vs. "the white house"). Finally, to conduct an extended discourse one must be able to establish and maintain a coherent connection between characters and events over the course of many sentences.

Of all these aspects of linguistic ability, the production of language is the one most sharply restricted to the brain's left hemisphere. Damage to the left hemisphere often interferes with the ability to select and assemble appropriate sounds and words when speaking. Right hemisphere damage rarely does. One exception to the left hemisphere's monopoly on language production is the creation of a coherent discourse. Patients with right hemisphere damage may be able to construct words and sentences quite well, but they frequently ramble from one subject to the next with only a loose thread of a connection between topics.

The perception and comprehension of language appear to be less confined to the left hemisphere than language production is. Both hemispheres are capable of distinguishing individual speech sounds, and the right hemisphere seems to have a role in the comprehension of extended discourse. But deciphering the meaning of words and sentences seems to take place primarily in the left hemisphere. This may explain why language was originally considered to be the exclusive province of the left hemisphere: the most common tests for aphasia evaluated the comprehension and production of words and sentences, not longer discourses.

Nonlinguistic spatial abilities can also be broken down into components with differing patterns of lateralization. Although the most severe impairments of spatial abilities occur more commonly following damage to the right hemisphere (both in deaf and hearing populations), researchers have observed some visual-spatial deficits in LHD hearing people. The symptoms typically involve difficulties in perceiving or reproducing the local-level features of a visual stimulus -- such as the details in a drawing -- even though the LHD patients can correctly identify or reproduce the drawing's overall configuration. RHD hearing people tend to show the opposite pattern. Thus, it has been suggested that the left hemisphere is important for local-level spatial perception and manipulation, whereas the right hemisphere is important for global-level processes.
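
To make the local/global distinction concrete, here is a minimal sketch of the kind of hierarchical figure used in such experiments: a large global letter assembled from many copies of a small local letter (the example discussed later in this article is a "D" made of small "y"s). The grid and function below are our own illustrative construction, not materials from the studies described here.

    # Illustrative sketch of a hierarchical figure: a global letter "D"
    # built from many local letter "y"s. The bitmap is our own invention;
    # real experiments use standardized drawings, not code.
    GLOBAL_D = [
        "XXXX ",
        "X   X",
        "X   X",
        "X   X",
        "X   X",
        "X   X",
        "XXXX ",
    ]

    def render_hierarchical_figure(global_shape, local_letter="y"):
        """Draw the global shape using copies of the local letter."""
        return "\n".join(
            "".join(local_letter if cell == "X" else " " for cell in row)
            for row in global_shape
        )

    print(render_hierarchical_figure(GLOBAL_D))

A viewer attending to the global level reports the "D"; one attending to the local level reports the "y"s, which is roughly the dissociation the drawing-copy tasks are designed to reveal.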

This more sophisticated picture of the brain raises an interesting question: Is the division of visual-spatial abilities between the two hemispheres -- local level in the left, global level in the right -- related to the division of sign-language abilities? Individual signs and signed sentences can be thought of as pieces of the language, whereas an extended discourse can represent how those pieces are put together. Perhaps the left hemisphere is dominant for producing and comprehending signs and signed sentences because those processes are dependent on local-level spatial abilities. And perhaps the right hemisphere is dominant for establishing and maintaining a coherent discourse in sign language because those processes are dependent on global-level spatial abilities.

We set out to test this hypothesis. Our research confirmed that many RHD signers have trouble with extended discourse: their narratives are full of tangential utterances and even confabulations -- just the kind of difficulties that hearing RHD patients often have. But some RHD signers face another type of problem. Discourse in sign language has a unique spatial organization: when telling a story with many characters, the signer identifies each one using a different location. The space in front of the signer becomes a sort of virtual stage on which each character has his or her own spot. Our studies found that some RHD signers were able to stay with a topic in their discourse but failed to maintain a consistent spatial framework for the characters in their narratives.

Is either of these types of discourse problems in RHD deaf signers causally connected to deficits in their nonlinguistic spatial abilities? It would appear not. We studied one RHD signer whose spatial abilities were severely impaired yet who had no trouble signing a coherent story. Another RHD patient had only mild visual-spatial problems yet could not sustain a proper spatial framework for the characters in the narrative. Clearly, the cognitive systems in the right hemisphere that support nonlinguistic spatial abilities are different from the ones that support extended discourse.
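
The "virtual stage" described above can be pictured as a simple mapping from story characters to locations in signing space. The sketch below is purely illustrative (the class, method and locus names are ours, not part of any ASL research tool): each character is assigned a locus when first introduced, and a coherent narrative reuses that locus on later references, which is just the consistency that some RHD signers fail to maintain.

    # Hypothetical sketch of the "virtual stage" in signed discourse:
    # each character in a story gets a fixed locus in signing space,
    # and later references to that character should reuse the same locus.
    class SigningSpace:
        def __init__(self):
            self.loci = {}                       # character -> assigned locus
            self.free = ["left", "right", "upper left", "upper right"]

        def introduce(self, character):
            """Give a newly introduced character its own spot on the stage."""
            if character not in self.loci:
                self.loci[character] = self.free.pop(0)
            return self.loci[character]

        def is_consistent(self, character, locus_used):
            """A later reference is consistent only if it reuses the character's locus."""
            return self.loci.get(character) == locus_used

    stage = SigningSpace()
    stage.introduce("dog")                       # e.g., placed to the signer's left
    stage.introduce("cat")                       # e.g., placed to the signer's right
    print(stage.is_consistent("dog", "left"))    # True: the framework holds
    print(stage.is_consistent("dog", "right"))   # False: the kind of inconsistency described above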

What about deaf signers with damage to the left hemisphere? Are their sign-language aphasias caused by impairments in local-level spatial abilities? To address this issue, we asked a group of deaf signers to reproduce line drawings and hierarchical figures, which have recognizable local and global features. (An example would be the letter "D" fashioned out of a constellation of small "y"s.) Just like hearing patients with left hemisphere damage, the LHD deaf subjects tended to reproduce the global configuration of the drawings correctly but often left out some of the details. (The RHD deaf subjects exhibited the reverse pattern, drawing pictures with lots of detail but a disorganized whole.) We found no correlation between the severity of the local-level spatial deficits in the LHD subjects and the severity of their sign-language aphasias. Contrary to all expectations, the sign-language abilities of lifelong deaf signers appear to be independent of their nonlinguistic spatial skills.

It is possible that we have missed some fine distinctions in the organization of the brain for language in hearing patients and signers. Studies of patients with brain lesions are limited in their precision: to ascertain exactly which parts of the brain are involved in sign language, researchers would need to examine dozens of deaf signers with lesions in just the right places, and it would take decades to find them all. But the introduction of noninvasive brain imaging techniques -- functional magnetic resonance imaging (fMRI) and positron-emission tomography (PET) -- has given scientists new tools for probing the neural roots of language.

Researchers have employed these techniques to investigate the role of Broca's area in speech and sign production. Imaging results have shown that Broca's area is indeed activated in hearing patients when they are speaking and in deaf patients when they are signing. Brain imaging has also confirmed that the regions that play a role in sign-language comprehension are much the same as those involved in the understanding of spoken language. In one recent study, researchers used fMRI methods to observe the brain activity of lifelong deaf signers who were watching videotapes of sentences in ASL. The investigators found regions of activity in several parts of the left temporal lobe, including parts of Wernicke's area, and in several regions of the left frontal lobe, including Broca's area.

The study also found regions of activity in the right temporal lobe and right frontal lobe. This result has led some researchers to suggest that sign-language comprehension may be more bilaterally organized than spoken-language comprehension. But bilateral activity has also been detected in studies of hearing subjects listening to speech. More research is needed to clarify the role of the right hemisphere in sign-language processing. In any case, the studies of brain lesions make it clear that if differences exist between spoken and sign language, they are likely to be subtle and language specific.

Lessons from Sign Language

SIGN LANGUAGE involves both linguistic and visual-spatial processing -- two abilities that are supported by largely distinct neural systems in hearing individuals. But contrary to all expectations, the neural organization of sign language has more in common with that of spoken language than it does with the brain organization for visual-spatial processing. Why should this be the case?

The answer suggested by our line of research, as well as the work of others, is that the brain is a highly modular organ, with each module organized around a particular computational task. According to this view, the processing of visual-spatial information is not confined to a single region of the brain. Instead different neural modules process visual inputs in different ways. For example, visual inputs that carry linguistic information would be translated into a format optimized for linguistic processing, allowing the brain to access the meanings of signs, extract grammatical relations, and so on. But visual stimuli that carry a different kind of information -- such as the features and contours of a drawing -- would be translated into a format that is optimized for, say, carrying out motor commands to reproduce that drawing. The computational demands of these two kinds of processing tasks are very different, and thus different neural systems are involved.

Viewed in this way, it is not so surprising that comprehending and producing sign language appear to be completely independent of visual-spatial abilities such as copying a drawing. Although they both involve visual inputs and manual outputs, the tasks are different in fundamental ways. Consequently, we would expect them to share brain systems to some extent at the peripheral levels of processing -- for instance, at the primary visual cortex that receives signals from the optic nerve -- but to diverge in more central, higher-level brain systems.

The situation with spoken and sign languages is just the opposite. These two systems differ radically in their inputs and outputs but appear to involve very similar linguistic computations. We therefore expect that spoken and sign languages will share a great deal of neural territory at the more central, higher-level brain systems but diverge at the more peripheral levels of processing. At the sensory end, for example, the peripheral processing of speech occurs in the auditory cortices in both hemispheres, whereas the initial processing of signs takes place in the visual cortex. But after the first stages of processing, the signals appear to be routed to central linguistic systems that have a common neural organization in speakers and signers.

These findings may prove useful to neurologists treating deaf signers who have suffered strokes. The prognosis for the recovery of the signers' language abilities will most likely be similar to that of hearing patients with the same brain damage. Furthermore, when neurosurgeons remove brain tumors from deaf signers, they must take the same precautions to avoid damaging the language centers as they do with hearing patients.

A major challenge for future research will be to determine where the peripheral processing stages leave off and the central stages begin (or even if there is such a sharp boundary between the two). More study is also needed to understand the nature of the computations carried out at the various levels of linguistic processing. The similarities and differences between spoken and sign languages are ideally suited to answering these questions.

 


MORE TO EXPLORE
The Signs of Language. Edward S. Klima and Ursula Bellugi. Harvard University Press, 1979. Reprinted 1988.
What the Hands Reveal about the Brain. H. Poizner, Edward S. Klima and Ursula Bellugi. MIT Press, 1987. Reprinted by Bradford Books, 1990.
The Neural Organization of Language: Evidence from Sign Language Aphasia. G. Hickok, U. Bellugi and E. S. Klima in Trends in Cognitive Sciences, Vol. 2, No. 4, pages 129-136; April 1998.
Language, Cognition and the Brain. Karen Emmorey. Lawrence Erlbaum Associates, 2001.
The Signs of Aphasia. G. Hickok and U. Bellugi in Handbook of Neuropsychology, Vol. 3. Second edition. Edited by R. S. Berndt. Elsevier, 2001.

 

THE AUTHORS
GREGORY HICKOK, URSULA BELLUGI and EDWARD S. KLIMA have worked together on sign language aphasia for a decade. Hickok is associate professor in the department of cognitive sciences at the University of California, Irvine, and director of the Laboratory for Cognitive Brain Research, where he studies the functional anatomy of language. Bellugi is director of the Salk Institute's Laboratory for Cognitive Neuroscience in La Jolla, Calif., where she conducts research on language and its biological foundations. Much of her research is in collaboration with Klima, who is professor emeritus at the University of California, San Diego, and a co-director of Salk.