Read My Slips, Science Magazine, Sept. 21, 2007

Read My Slips: Speech Errors Show How Language Is Processed

Researchers are analyzing spoonerisms and other slips of the tongue to help understand how humans–and even apes–can comprehend and use language

Kanzi, a 27-year-old bonobo, knows the difference between a blackberry and a hot dog. But sometimes, when researchers asked him to touch the abstract visual symbol, called a lexigram, that means blackberry, he touched the lexigram for hot dog, blueberries, or cherries instead.

Kanzi’s errors weren’t random mistakes, nor an indication of apes’ language limitations, says Heidi Lyn, a comparative cognitive scientist at the University of St. Andrews in Fife, U.K. Rather, they show the complex way in which his mind had organized the lexigrams. For example, if Kanzi made a mistake when asked for “blackberry,” he was more likely than chance to choose a lexigram for another fruit, much as you or I might say “red” instead of “black,” says Lyn, whose paper on Kanzi’s mistakes was published online in Animal Cognition in April and will appear in print later this year or early next.

Analyzing errors for insight into the covert mental processes of animals is a new direction for a technique that language scientists have used for 40 years to study language processing in humans. For all its power, human language remains something of a scientific mystery. Researchers are still struggling to understand exactly how humans hear, comprehend, and produce words and sentences. Slips of the tongue, or linguistic mistakes made inadvertently by speakers who do know the correct form, offer potent clues about language processing in the brain. Speech error research is currently on the upswing with new methods and theories and increased attention to groups such as children and users of sign language–and, now, animals. “We have a long way to go before we understand how to put the multiple pieces of language systems together in the seamless way that we experience it,” says psycholinguist Merrill Garrett of the University of Arizona, Tucson, who has studied slips of the tongue since the 1970s. “Error profiles that arise during spontaneous conversation are going to be an important part of the agenda.”

Barn doors and darn bores

Early in the 20th century, collecting speech errors was chiefly a hobby, especially for people who found Freud’s emotional explanations lacking. (Psychoanalysis had no way to account for the diverse, often mundane slips of the tongue that people make.) In the 1960s, Noam Chomsky sparked a wave of grammatical theorizing that transformed speech errors into theoretical gold. Linguist Victoria Fromkin, among others, argued in the late 1960s that speech errors showed that abstract mental units of sounds and words were also concrete symbols in speakers’ minds.

Using speech errors as scientific data posed some problems: Waiting for speakers to make an error required an inordinate amount of time, and some questioned the reliability of what listeners heard. But the field got a boost in the 1970s when researchers created ways to elicit many (but not all) types of speech errors in the lab. One method involved giving people word pairs like “duck bill,” “dart board,” and “dust bin,” then asking them to say “barn door.” About 10% of the time, subjects said “darn bore.” By eliciting speech errors, researchers can control for higher frequency sounds (in English, “s” is more frequent than “k”) and words (“latrine” is more frequent than “tureen”). Words used more frequently are less likely to be involved in speech errors. For example, more errors occur with content words (“cat,” “hat”) than grammatical words (“the,” “in”), because grammatical words are used more frequently. The effect of frequency also implies that what one usually talks about affects how one slips.
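
To make the frequency point concrete: raw error counts only become evidence once they are set against how often a word is used in the first place. The short Python sketch below is purely illustrative, with made-up counts and frequencies (not drawn from any published corpus), and is not the researchers' analysis code; it simply shows the bookkeeping behind "controlling for frequency."

```python
# Illustrative only: hypothetical error counts and word frequencies,
# not data from any published speech-error corpus.

errors_involving = {   # how many collected slips involve each word (hypothetical)
    "cat": 14, "hat": 11, "latrine": 9, "tureen": 6,
    "the": 3, "in": 2,
}
freq_per_million = {   # how often each word is used (hypothetical, per million words)
    "cat": 60, "hat": 40, "latrine": 2, "tureen": 0.5,
    "the": 60000, "in": 20000,
}

# Normalize: slips per million uses, so frequent words aren't judged error-prone
# merely because they come up more often.
for word, n_errors in errors_involving.items():
    rate = n_errors / freq_per_million[word]
    print(f"{word:8s} slips per million uses: {rate:.4f}")
```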

Lyn was the first to apply the study of errors to bonobos. Kanzi and a female bonobo, Panbanisha, who now live at the Great Ape Trust in Des Moines, Iowa, can comprehend instructions and descriptions in spoken English, and they can respond by using 384 lexigrams, which they touch on a keyboard. From 1990 to 2001, researchers tested the bonobos thousands of times, showing them a photo or lexigram or saying an English word. The bonobos then had to select the matching lexigram. The apes chose correctly 12,157 times and made 1497 incorrect choices, although no one thought to consider the errors as data until now.

Lyn found that Kanzi and Panbanisha have arranged hundreds of lexigrams in their minds in a complex, hierarchical manner based mainly on their meaning. She coded the relations between all 1497 sample-error pairs along seven dimensions, including whether the lexigrams looked alike, had English words that sounded alike, or referred to objects in the same category. She found that the errors were not random but patterned. If the target lexigram stood for “blackberry,” the erroneous choice was more likely than chance to sound like blackberry, to be edible, to be a fruit, or to be physically similar to it. Errors were also more likely when the chosen lexigram was related to the target in more than one way. For example, “cherries” are both edible and a fruit, and the word sounds like the correct one, “blackberries.” All this indicated to Lyn that mental representations of the lexigrams must be stored not as simple one-to-one associations but in more complex arrangements. This suggests that, given the chance, bonobos and other apes can acquire systems of meaning far closer to our own than anyone had thought, and that some aspects of language acquisition are not unique to humans. “We begin to see that the biological or species variable is far less important than we thought,” says Susan Savage-Rumbaugh of the Great Ape Trust.
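
The core of Lyn's analysis is a comparison between the observed target-error pairs and what random confusion would produce. The Python sketch below illustrates that logic only; the lexigram set, the category labels, and the single "shares a category" test are invented stand-ins for Lyn's actual seven-dimension coding and statistics.

```python
import random

# Toy stand-in for the lexigram inventory: name -> set of coded categories.
# Entirely hypothetical; Lyn coded real sample-error pairs along seven dimensions.
LEXIGRAMS = {
    "blackberry": {"edible", "fruit"},
    "blueberries": {"edible", "fruit"},
    "cherries": {"edible", "fruit"},
    "hot dog": {"edible"},
    "ball": {"toy"},
    "keyboard": {"tool"},
}

def shares_category(target, error):
    """True if the erroneous choice shares at least one coded category with the target."""
    return bool(LEXIGRAMS[target] & LEXIGRAMS[error])

# Hypothetical target/error pairs from matching trials.
observed_pairs = [
    ("blackberry", "cherries"),
    ("blackberry", "hot dog"),
    ("ball", "keyboard"),
    ("blackberry", "blueberries"),
]
observed_rate = sum(shares_category(t, e) for t, e in observed_pairs) / len(observed_pairs)

# Chance baseline: what if the errors were drawn at random from the other lexigrams?
rng = random.Random(0)
names = list(LEXIGRAMS)
n_sims = 10_000
chance_hits = 0
for _ in range(n_sims):
    target, _ = rng.choice(observed_pairs)                        # same targets as observed
    random_error = rng.choice([n for n in names if n != target])  # but a random wrong choice
    chance_hits += shares_category(target, random_error)
chance_rate = chance_hits / n_sims

print(f"observed shared-category rate: {observed_rate:.2f}")
print(f"chance shared-category rate:   {chance_rate:.2f}")
```

In these toy numbers the observed rate comes out well above the chance baseline, which is the shape of the result Lyn reports for the real data.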

Out of the mouths of babes

Lyn’s analysis is not the first to study errors in creatures that haven’t mastered all the complexities of human speech: For about 20 years, researchers have also used speech errors to study language acquisition in children. Kids do say the darnedest things, but by definition, the true errors are the ones they make with linguistic levels and units they know, explains linguist Jeri Jaeger of the University at Buffalo in New York state, who in 2005 published a book that capped 20 years of collecting kids’ slips, many of them from her three children. It was the first study of the same children’s speech errors over a long period, allowing her to match their errors with their stages of language development. Jaeger’s collection is “unique,” says linguist Annette Hohenberger of Middle East Technical University in Ankara, Turkey, and shows how slips change over time.

Distinguishing true slips took a linguist’s ear and a mother’s patience. Jaeger’s youngest daughter’s exclamation that “She already showed me tomorrow!” wasn’t a true slip, because she didn’t yet know the meaning of “yesterday.” On the other hand, at 16 months, her eldest daughter said “one two three, one two three, one tuwee,” a fusion of “two” and “three.” That was a true slip, because she knew the two words were distinct and had regularly pronounced them correctly. It underscores Jaeger’s point that children make slips only with what they know.

Analysis of such speech errors can provide a novel perspective on how children acquire language. Linguists have debated, for instance, whether children need syntactic knowledge to speak in two-word clumps. Jaeger says no. Her data show that when children begin to combine words, at about age 2, they don’t blend phrases or confuse intonations. Such slips require a mature knowledge of syntax. Not until children speak in sentences of three or more words do syntactic errors appear, such as “sit down this immediately!” (a blend of “sit down this minute” and “sit down immediately”).

It’s long been known that children make more speech errors than adults, but it wasn’t known how or if aging affected error rates. In 2006, Janet Vousden and Elizabeth Maylor at the University of Warwick in the U.K. published the first study tracking speech errors across the life span and reported no significant increase in total errors between young and older adults. However, compared to children, adults made proportionately more errors in which a sound segment was anticipated (frive frantic fat frogs) rather than perseverated (five frantic fat fogs).

That fits with a widely used model of speech errors developed in the 1980s by cognitive scientist Gary Dell of the University of Illinois, Urbana-Champaign. Most linguists think that words and sounds are stored in a kind of network in the brain, connected by variables such as how they sound, their parts of speech, and their meaning. Dell proposed that when sounds or words stored in such a network are selected, this also strengthens or “activates” neighboring words or sounds, which may then be mistakenly selected as the right ones. In his model, people forced to speak quickly make more errors not because they have more opportunities to do so but because the activation of neighboring units has less time to fade. Dell also proposes that practice tends to activate present and future units more than past ones. As a result, the more practice a speaker has, the higher the proportion of anticipatory errors, although overall errors decrease. “Whatever makes you more error-prone makes your errors more perseveratory,” explains Dell. Caroline Palmer, a psychologist at McGill University in Montreal, Canada, has found the same effect (among others) in piano performances.
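
A toy spreading-activation simulation helps make the mechanism concrete. The Python sketch below is not Dell's published model; the units, the plan, and the parameters (spread, decay, noise) are invented for illustration. It shows only the core claim of the paragraph above: selecting a unit also activates its competitors, and when speech is fast that residual activation has less time to fade, so competitors intrude more often.

```python
import random

# Illustrative only: a miniature spreading-activation network, not Dell's actual model.
UNITS = ["f", "fr", "d"]          # candidate syllable onsets
PLAN = ["f", "fr", "f", "fr"]     # intended onsets, e.g. "five frantic fat frogs"

def error_rate(plan, decay, spread=0.3, noise=0.2, trials=2000, seed=42):
    """Fraction of slots where a competitor, not the intended onset, wins selection."""
    rng = random.Random(seed)
    errors = total = 0
    for _ in range(trials):
        activation = {u: 0.0 for u in UNITS}
        for target in plan:
            activation[target] += 1.0                  # the intended unit gets a boost
            for u in UNITS:                            # selection also spreads activation
                if u != target:                        # to competing units
                    activation[u] += spread
            noisy = {u: a + rng.gauss(0, noise) for u, a in activation.items()}
            if max(noisy, key=noisy.get) != target:    # the most active unit is produced
                errors += 1
            total += 1
            # Residual activation decays before the next slot; `decay` is the fraction
            # that survives, so fast speech = high `decay` (less time for fading).
            activation = {u: a * decay for u, a in activation.items()}
    return errors / total

# Careful speech: residual activation mostly fades between slots.
print(f"slow speech error rate: {error_rate(PLAN, decay=0.2):.3f}")
# Hurried speech: residual activation lingers and intrudes more often.
print(f"fast speech error rate: {error_rate(PLAN, decay=0.9):.3f}")
```

Under these made-up settings the hurried-speech condition produces markedly more intrusions, which is the qualitative pattern the model is meant to capture.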

Language need not be spoken, and linguists have long been interested in whether speech and sign are processed the same way. Hohenberger, together with the German linguists Daniela Happ and Helen Leuninger at the University of Frankfurt, used a newer method for eliciting slips from German speakers and signers of Deutsche Gebärdensprache (DGS, or German Sign Language), in the first slip study of signers in a language other than American Sign Language. In a series of papers, the most recent published in 2007, they asked speakers and signers to narrate a series of pictures under various stress conditions, such as putting pictures out of order.

They found that all types of slips found in spoken German are also present in DGS, although in different frequencies. The slips also occur with the same basic units. This indicates that signs and words are both stored in the brain as clusters of primary elements that can be flexibly recombined, and it underscores that humans possess a single language faculty regardless of how they deploy it, says Hohenberger.

But there are some differences. Both signers and speakers, for instance, catch and repair utterances that include mistakes. Because signing is produced more slowly, however, signers catch more errors involving exchanges of individual signing elements, such as hand shape or the location of a sign.

Because of this, Hohenberger speculates that slips of the hand may next contribute to an emerging question in slip-of-the-tongue research. Based on ultrasound studies of speakers’ tongues as they make sound exchanges (better known as spoonerisms, such as “jeef berky” instead of “beef jerky”), phonetician Marianne Pouplier of the University of Munich, Germany, has suggested in several recent papers that speakers don’t substitute one whole sound segment for another as was previously thought. Rather, they attempt to pronounce the two sounds at the same time. This way of thinking about speech errors–as a collision of motor commands rather than as substitutions of mental symbols–might be more reliably investigated in slips of the hand, Hohenberger says, because researchers can capture the slower hand movements more clearly than tongue movements.

Although error studies offer intriguing data, their implications are not always clear. Take the bonobo findings. The apes confused relatively few target-error pairs that were both nouns or both verbs, implying that they don’t take note of parts of speech. “This result argues against the claims made elsewhere that Kanzi has spontaneously developed an elementary grammar,” says primatologist Robert Seyfarth of the University of Pennsylvania.

But Lyn says the error results don’t directly address the question of grammar and don’t contradict earlier findings in which bonobos appeared to prefer certain semantic sequences. Instead, she says, “the results support the idea that [apes’] representation of semantic information is much more complex than has been shown to date.”

Still, the study of bonobo errors does rebut two frequent criticisms of ape language research: that the apes have simply been trained to respond, and that researchers may inadvertently shape the bonobos’ responses. Errors can’t be trained, nor can patterns of errors be deliberately produced. And if researchers were subtly guiding the apes by eye gaze or body posture, Kanzi and Panbanisha might have made far more errors based on simple proximity on the keyboard.

Lyn plans to continue analyzing the error data for other insights into the bonobos’ conceptual world. “For me, the error analysis was not to just study one aspect of their symbolic representation,” Lyn says, “but to get a glimpse of how it all hangs together.” Such a big question hasn’t been answered for human language, either, but speech errors will likely be central to the search. Says the University of Arizona’s Garrett: “We have most certainly not reached the limits of that kind of research.”