Friday, October 8, 2010

Amblyopia, brain plasticity and language acquisition

Abstract: I cite a lot of abstracts and speculate.

Amblyopia. Greek for "dim eye." Essentially, you may have perfectly healthy eyes and normal vision and still be unable to see details with one eye. Amblyopia may be caused by childhood eye injuries: while the injured eye is closed, the visual brain cells receive no stimulation or "input" from it, and the brain learns to favor the other eye. The condition persists after the eye has healed. If caught early, amblyopia may be reversible up to the age of 17. Adults suffering from it may try to "exercise" the affected eye, usually with little to no improvement. Still, some clinical reports suggest that "improvement of visual acuity can occur in adult patients with amblyopia, a central disorder of visual acuity, following patching of the normal eye." link

From lazyeye.org:

Dr. Leonard J. Press: "It's been proven that a motivated adult with ...amblyopia who works diligently at vision therapy can obtain meaningful improvement in visual function. As my patients are fond of saying: "I'm not looking for perfection; I'm looking for you to help me make it better". It's important that eye doctors don't make sweeping value judgments for patients. Rather than saying "nothing can be done", the proper advice would be: "You won't have as much improvement as you would have had at a younger age; but I'll refer you to a vision specialist who can help you if you're motivated."

Another vision expert could just as easily tell you that such places are in the business of taking money from unhappy people for a dubious service. And if therapy fails, one can always insist that the patient was not motivated enough.

One can certainly draw parallels between the amblyopia dilemma and the battle raging around the "critical period" hypothesis. Can amblyopia be treated in adults? Can adults learn to perceive and produce foreign sounds in a native fashion? Both questions come down to brain plasticity. In young children and animals, eye patching quickly causes profound, long-lasting effects: the young brain is very plastic and adapts fast, which is both good and bad. Sometimes occluding the good eye to fix the amblyopic one leads to amblyopia in the occluded eye and a return of normal vision in the previously amblyopic eye. Adults don't react quickly, if at all; adult amblyopia is hard to treat and just as hard to cause.

"Clinical disorders of brain plasticity are common in the practice of child neurology. Children have an enhanced capacity for brain plasticity compared to adults as demonstrated by their superior ability to learn a second language or their capacity to recover from brain injuries or radical surgery such as hemispherectomy for epilepsy. Basic mechanisms that support plasticity during development include persistence of neurogenesis in some parts of the brain, elimination of neurons through apoptosis or programmed cell death, postnatal proliferation and pruning of synapses, and activity-dependent refinement of neuronal connections. Brain plasticity in children can be divided into four types: adaptive plasticity that enhances skill development or recovery from brain injury; impaired plasticity associated with cognitive impairment; excessive plasticity leading to maladaptive brain circuits; and plasticity that becomes the brain's ‘Achilles’ Heel’ because it makes it vulnerable to injury.

Clinical disorders of brain plasticity
Brain and Development
Volume 26, Issue 2, March 2004, Pages 73-80

"lack of linguistic acuity"

Feral children, deprived of all human contact, have serious trouble later in life learning even the most basic communication patterns. And no child has ever been exposed to the full range of human speech. An idea for early education? Adult language learners have all been shut off from target-language "input" during a "critical period" of their lives (i.e., childhood). Their language circuits were naturally established through the stimulation they did receive. As adults, they look at the world through their dominant native eye, while the foreign-language eye stays lazy. Can we hope to fix this by patching the native eye, that is, through passive assimilation? Explicit instruction might help us see better right away, but the danger is that the learner will hold on to these artificial aids. Then again, I am not sure that a learner relying on pure passive assimilation is not simply figuring things out on his own, just more slowly.

Mission Impossible?
Understanding English with French Ears


"Prototype formation P. Kuhl (1994, 1992, 1984,) has proposed a theory of speech development called the Native Language Magnet Theory (NLM). From birth babies fine tune their perception of native language vowels by storing prototype representations of these sounds in their memories thereby eliminating the flexibility to perceive foreign language sounds. By three months babies are capable of retaining vowel sounds which means that their memory for sounds is taking shape (Jusczyk, 1995). By six months babies begin to form vowel prototypes for their native language. For example Boysson-Bardies et al. (1992) studied the babbling of 6 – 8 month infants and found that for English babies 21% of their productions were "ha". An amazing 11%
of the production of French babies was "ha" too. However, when infants were at the 15-word phase of production, French babies no longer produce "h" and English babies produced the same quantity of "h" sounds as an adult would. By the age of nine months, prototypes are forming and babies are beginning to ignore sounds that do not belong to their native language and focus their attention on native language vowels...
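
(Comment: a crude way to picture prototype formation is a nearest-prototype classifier. The toy model below is mine, not Kuhl's, and the formant values are rough textbook approximations: every incoming sound is "heard" as the closest stored native vowel, so a foreign vowel with no prototype of its own gets pulled toward a native magnet.)

    import math

    # Toy model, not Kuhl's NLM itself: native vowel prototypes as
    # points in F1/F2 formant space; every incoming sound is
    # "perceived" as its nearest stored prototype.
    NATIVE_PROTOTYPES = {
        "i": (270, 2290),   # as in "beet" (approximate values)
        "a": (730, 1090),   # as in "father"
        "u": (300, 870),    # as in "boot"
    }

    def perceive(sound):
        """Assimilate an incoming (F1, F2) pair to the nearest native
        prototype -- the 'magnet' effect in miniature."""
        return min(NATIVE_PROTOTYPES,
                   key=lambda v: math.dist(sound, NATIVE_PROTOTYPES[v]))

    # A foreign vowel such as French /y/ (roughly (250, 1750)) has no
    # prototype of its own, so it is heard as the closest native vowel:
    print(perceive((250, 1750)))   # -> i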

When learning a second language, listeners do not notice phonological regularities in the target language because they are using their native language automatic processing system. Second language listeners have a deficient phonological representation of the second language. Compensating for this deficient phonological representation puts a strain on working memory. French listeners, not having developed their capacity to encode and store English phonological representations, will have more difficulty understanding fluent speech in that language...

Therefore Japanese listeners can easily hear the difference between English phonemes in a laboratory situation. However, during a recent study using ERP (event-related brain potential) tests, no brain activity was observed for Japanese listeners with an "r" or "l" sound stimulus. Of course the same tests, when given to English-speaking listeners, showed an automatic reaction (Locke, 1997)...

The origin of the difficulty for French people in perceiving spoken English is found in these characteristics, deeply buried in the early stages of linguistic development. Not being aware of the linguistic cues used for understanding their own language, French people cannot voluntarily modify these cues to master another language. It is not possible to list all of the characteristics of the French and English languages which are the source of these differences. However, some of them are indispensable for understanding why it is so difficult for a French person to understand English. The prosodic system of the two languages is essential for dividing fluent speech into word segments. Basically, the French automatic processing system is constantly monitoring for syllable segments, while the English system is searching for the stressed syllable. In a fascinating study by Cutler et al. (1983), the conclusion was: "We conclude, therefore, that the syllabification strategy is characteristic of listeners rather than of stimulus language. We suggest that listeners who have acquired French as their native language have developed the syllabification procedure, natural to the human language processing system, into an efficient comprehension strategy. On the other hand, listeners whose native language is English, where this strategy would not necessarily achieve greater comprehension efficiency, have not included syllabification in their repertoire of processing strategies." In later studies they showed that French listeners continue to use syllabification strategies "even when listening to English words" (1986), and native English speakers use a stress-based segmentation system even when listening to French words...
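
(Comment: the contrast is easy to caricature in code. The sketch below is my own toy illustration, not anything from the paper: an utterance is a list of syllables with stress marked by uppercase, and each "listener" proposes word boundaries according to its native strategy.)

    # Toy contrast between the two segmentation strategies described
    # above. The utterance and the rules are invented for illustration.

    def stress_based_boundaries(syllables):
        """English-style listener: posit a word boundary before every
        stressed (uppercase) syllable."""
        return [i for i, s in enumerate(syllables) if s.isupper() and i > 0]

    def syllable_based_boundaries(syllables):
        """French-style listener: every syllable edge is a candidate
        word boundary, each to be checked against the lexicon."""
        return list(range(1, len(syllables)))

    utterance = ["CAR", "pet", "SA", "lad"]        # "carpet salad"
    print(stress_based_boundaries(utterance))      # [2]: one boundary, before "SA"
    print(syllable_based_boundaries(utterance))    # [1, 2, 3]: check them all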

Besides the prosodic system, basic differences in phonemic structure influence the automatic processing system. English phonemes are characterized by movement whereas French phonemes are stable. In 1990, Drach filmed an American saying the word "know" and a French person saying "nos" in French. The resulting films show that the jaws, tongue and lips of the American are constantly moving but that in French all of these organs are relatively stable. A second important difference in the phonemic structure of the two languages is that both length and reduction are significant in English whereas French vowels are considered "pure". Finally, the automatic processing system does not depend on the sound system alone, but on other linguistic strategies. Since English usually follows a strict word order (Subject-Verb-Object), English speakers rely heavily on word order to interpret a sentence (MacWhinney et al., 1984). Especially when speaking spontaneously, French people rarely follow the Subject-Verb-Object pattern.
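
(Comment: a toy way to picture that last point, loosely in the spirit of MacWhinney's cue-competition work but not his actual model; the cue weights below are entirely invented.)

    # Toy cue competition: each language weights interpretive cues
    # differently when deciding which noun is the agent of a sentence.
    CUE_WEIGHTS = {
        "English": {"first_position": 0.8, "animate": 0.2},
        "French":  {"first_position": 0.3, "animate": 0.7},
    }

    def pick_agent(nouns, language):
        """Return the noun most likely to be interpreted as the agent,
        given how strongly the language weights each cue."""
        w = CUE_WEIGHTS[language]
        def score(position, noun):
            s = w["first_position"] if position == 0 else 0.0
            return s + (w["animate"] if noun["animate"] else 0.0)
        _, best = max(enumerate(nouns), key=lambda p: score(p[0], p[1]))
        return best["word"]

    # "The pencil kicked the cow": word order points at "pencil",
    # animacy points at "cow".
    nouns = [{"word": "pencil", "animate": False},
             {"word": "cow", "animate": True}]
    print(pick_agent(nouns, "English"))   # -> pencil (word order wins)
    print(pick_agent(nouns, "French"))    # -> cow (animacy wins)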

The above description barely touches on the complexity of the differences between French and English. Understanding spontaneous speech depends on very diverse elements many of which are buried deep in our earliest linguistic acquisition. We are totally unaware of most of these elements and therefore cannot easily modify our listening strategy...

Taken together, the above findings led to the hypothesis that the problems of listening comprehension could be better addressed through a method that would access the automatic processing system of the subject. This method would have to take into account the fundamental differences between the French and English languages but would not explicitly teach them...

As an application of Anderson's theory on the importance of training for automaticity acquisition (Perruchet, 1988), the repetition of regular phonological sequences should facilitate the formation of a framework or a structure that would permit the development of memory span. As the capacity for imitation of longer and longer phrases develops, second language learners have available more raw material from which they can construct a linguistic system. Automatic processing cannot develop unless working memory can retain units of sufficient length (Spiedel, 1989)...

Group R and L subjects used headphones with built-in microphones enabling them to hear both their own voices and the words and phrases recorded on the cassettes. The use of this system allowed subjects to work under audio-phonatory feedback conditions, which is indispensable for the method. By listening to this feedback, subjects automatically modified their production to make it better correspond to the model.

Results
Substantial Improvement

The overall level of all of the subjects improved substantially. Even though these subjects had been studying English since the age of eleven, attending an average of eight years of English classes, a large portion (between 20 and 30%) understood almost no spoken English. At the end of the study, the percentage of those who understood very little fell to less than 10%.

For all the groups, individual results are far from homogeneous. Some subjects did not progress at all whereas others doubled their score, going from 7/20 to 14/20, for example. This could easily be explained by analyzing the listening strategy used. Subjects in Group E who enjoyed studying the differences between English and French, using the top-down reasoning method proposed, benefited from the method. On the other hand, subjects in the implicit learning groups who repeated with pleasure, allowing themselves to follow the music of the language, profited from a bottom-up approach...

The most striking difference, however, concerned the students who had been raised bilingual in Arabic, Portuguese or an African language. These students, who had learned this other language with very little contact with reading and writing, were very receptive to the methods of Group R.

(Comment: they were already on their third language.)

Conclusion

These results show an overall improvement in listening comprehension. However, they do not take into account the reasons for this improvement. Group E subjects progressed through the use of explicit learning processes whereas Group R and L subjects improved through their implicit learning processes. On a long-term basis, it would seem that these two processes would not lead to the same results. The ideal situation would be to reeducate the automatic processing system. Having to compensate for a deficiency in this system by using attentional processes inevitably leads to a slower and often erroneous interpretation of oral discourse.

Even if listeners are able to use explicit knowledge of the phonetic characteristics of the language, allowing them a better interpretation of the representation computed from the sensory input, without automatisation, performance will still deteriorate. Processes that should have been carried out automatically, by requiring attention, will slow down the system and overload working memory.

Any activity that could help to lighten memory load by favoring automatic processing should be developed and integrated into academic programs. We feel that because of the complexity of both the phonological system and the cognitive and linguistic resources necessary for oral comprehension, explicitly teaching difficult points will lead to failure. It is not sufficient to work on the symptoms or the apparent difficulties of second language acquisition because this does not access the source of the problem. We are convinced that certain phonological information is only accessible through progressively introducing English temporal patterns. Procedures that lead to avoiding the use of higher order reasoning and explicit learning strategies, allowing more receptivity to a novel phonological system, should be developed."

(Comment: I don't see why explicit learning must lead to failure, or why it should block direct, unpolluted automatization. Such students are not sentenced to use their knowledge as crutches forever.)

Anatomical Correlates of Foreign Speech Sound Production

Those little grey cells:

"Previous work has shown a relationship between brain anatomy and how quickly adults learn to perceive foreign speech sounds. Faster learners have greater asymmetry (left > right) in parietal lobe white matter (WM) volumes and larger WM volumes of left Heschl's gyrus than slower learners. Here, we tested native French speakers who were previously scanned using high-resolution anatomical magnetic resonance imaging. We asked them to pronounce a Persian consonant that does not exist in French but which can easily be distinguished from French speech sounds, the voiced uvular stop."

(Comment: my guess here is that these random French speakers did not beef up their brains through language learning. This would perhaps suggest that Emil Krebs' superbrain was naturally endowed for language learning. I could ask, I suppose.)

Second-language speech perception and production in adult learners before and after short-term immersion


"Several studies have reported that added second-language (L2) experience results in a more native-like L2 speech performance in adult L2 learners. The amount of experience has often been quantified in terms of the length of residence (LOR) in an L2 speaking community. While some studies reported an effect of LOR on L2 performance (e.g., Bohn & Flege, 1990; Flege, Bohn, & Jang, 1997; Flege &
Hillenbrand, 1984; Yamada, 1995) other studies reported no effect of LOR (e.g.,
Flege, 1988, 1993; Flege, Munro, & Skelton, 1992)."

"The fact that the Experience Group received significantly higher scores after a
relatively short period of immersion in an English-language environment and the fact
that one subject achieved native-like pronunciation ratings do not appear to be
consistent with the Critical Period Hypothesis..."

"Experiment 3A and 3B supported Hypothesis 4 that English language experience would have a stronger effect on the perception than the production of L2 speech sounds."

(Comment: you have the ear for it, but your tongue trips.)

..."the findings suggest that the phonetic system in adults is still malleable in young adults and that perception leads production in L2 speech acquisition. The finding that global foreign accent ratings improved but that no improvement was found in the select speech sounds might suggest that important improvements happened in the prosodic dimension."

And now, three blind mice (and a cat).

Abstract

The adult cerebral cortex can adapt to environmental change. Using monocular deprivation as a paradigm, we find that rapid experience-dependent plasticity exists even in the mature primary visual cortex. However, adult cortical plasticity differs from developmental plasticity in two important ways...

A primary function of the brain is to integrate the individual into a continually changing environment. Some aspects of this integration are accomplished through developmental processes, other aspects through learning. Although learning can occur throughout life, many behaviors, from language to sexual behavior, are shaped profoundly by early life experience. In this study, we have examined how the adaptive capacity of the cerebral cortex changes with maturation.

link

Amblyopia
