Wednesday, December 9, 2009
I am still alive (yay!) and still learning German. My active language learning attempts were sidetracked by that great nemesis, life. I have been...
- watching a lot of German movies (or, rather, movies in German)
- listening to German music (Schlagers!)
I finished the two Goethe language courses in August. The advanced course was pretty good; the instructor was a real pro. The intermediate course was weak, and at times the class felt like a nest of bald eagles (eek! eek! eek!). The best and most engaging class I had was with a substitute.
Although I had never formally studied German before, I managed to integrate well into the more advanced class. My vocabulary was easily the most advanced, though my repertoire of ready-made grammatical constructions was a bit lacking. A couple of the students had majored in German.
My first attempts at communication resulted in a few errors, often word-order-related. My all-German learning experience apparently did not manage to imbue me with the awesome power of putting the verb in the right place. I do have a "feeling" for it, but real-time execution is another matter. The class has certainly helped me overcome the embarrassment of speaking a new foreign tongue.
Written assignments: few mistakes, and I managed to put together some impressive sausages, er, sentences, but writing even one page took forever. Very tiresome. If only I could force myself to do more of it.
Reading: I recently bought two ebook readers, the new Sony Reader and the Astak EZReader. I will post about them when I actually finish a book. So far I am pleased with the Sony. I bought the EZReader for its language and format support. I still have to play with it a little, but I am afraid it's not a good machine.
Sunday, August 16, 2009
input only - to the extreme
Richard Boydell suffered from cerebral palsy and acquired language as a child through listening and reading alone. His first attempt at communication was at age 30, with the aid of a typewriter. He later became a computer programmer. A short excerpt from a computer-related book:
"Richard Boydell was born with a first-class brain, indomitable and resourceful
parents, ... He was born in 1933 with severe jaundice and cerebral palsy, ..."
Richard Boydell:
"I acquired an understanding of language by listening to those around me. Later, thanks to my mother's tireless, patient work I began learning to read and so became familiar with written as well as spoken language. As my interest developed, particularly in the field of science, I read books and listened to educational programs on radio and, later, television which were at a level that was normal, or sometimes rather above, for my age. Also when people visited us ... I enjoyed listening to the conversation even though I could only play a passive role and could not take an active part in any discussion ... As well as reading books and listening to radio and television .... I read the newspaper every day to keep in touch with current events".
Stephen Krashen, "The Input Hypothesis: Issues and Implications" (1985), citing Adrian Fourcin's 1975 article "Visual feedback and the acquisition of intonation".
Fourcin's article is also mentioned in "Foundations of Language Development" by Elizabeth Lenneberg (1975) and in "Language Development in Exceptional Circumstances" by Dorothy Bishop and Kay Mogford.
According to Fourcin, Boydell's writing was "elegantly phrased" even though he had never written anything before. Krashen attributes Boydell's ability to express himself to his listening and reading, and uses the case as evidence for his input hypothesis.
Krashen uses Boydell as an argument against the "comprehensible output" theory.
Anyone seeking parallels with adults trying to learn a foreign language through passive exposure should keep in mind that Boydell learned English as a child; English was his mother tongue. Native speakers use the language internally, during abstract reasoning and problem solving. How often we actually "think" in our mother tongue is a separate issue, but internal use of language is definitely a native-speaker characteristic. A passive adult consuming foreign entertainment will not behave like a child acquiring his mother tongue. Advanced speakers of a foreign language would most likely start using their new language internally only after a prolonged period of habitual daily language production and interaction with native speakers. Another issue is that Boydell could speak only with extreme difficulty because of cerebral palsy; his writing was impeccable, but I cannot find any information about his spoken language skills.
See: Case Histories and the Comprehension Hypothesis by Stephen Krashen
"Richard Boydell was born with a first-class brain, indomitable and resourceful
parents, ... He was born in 1933 with severe jaundice and cerebral palsy, ..."
Richard Boydell:
"I acquired an understanding of language by listening to those around me. Later, thanks to my mother's tireless, patient work I began learning to read and so became familiar with written as well as spoken language. As my interest developed, particularly in the field of science, I read books and listened to educational programs on radio and, later, television which were at a level that was normal, or sometimes rather above, for my age. Also when people visited us ... I enjoyed listening to the conversation even though I could only play a passive role and could not take an active part in any discussion ... As well as reading books and listening to radio and television .... I read the newspaper every day to keep in touch with current events".
Stephen Krashen "The Input Hypothesis: Issues and Implications" (1985), citing Adrian Fourcin's 1975 article "Visual feedback and the acquisition of intonation".
Fourcin's article is also mentioned in "Foundations of language development" by Elizabeth Lenneberg (1975). Also mentioned in "Language development in exceptional circumstances" by Dorothy Bishop and Kay Mogford.
According to Fourcin, Boydell's writing was "elegantly phrased" although he had never written anything before. Krashen believes that Boydell's ability to express himself was due to his listening and reading and he uses this as an example for his input hypothesis.
Krashen uses Boydell as an argument against the "comprehensible output" theory.
Anyone seeking parallels with adults trying to learn a foreign language through passive exposure should keep in mind that Boydell learned English as a child. English was his mother tongue. Native speakers use the language internally, during abstract reasoning and problem solving. How often we actually "think" in our mother tongue is a separate issue but internal use of language is definitely a native speaker characteristic. A passive adult consuming foreign entertainment will not behave as a child acquiring his mother tongue. Advanced speakers of a foreign language would most likely start using their new language internally only after a prolonged period of habitual daily language production and interaction with native speakers. Another issue is that Boydell could speak only with extreme difficulty due to cerebral palsy. His writing was impeccable, but I cannot find any information about his spoken language skills.
See: Case Histories and the Comprehension Hypothesis by Stephen Krashen
Friday, August 14, 2009
Language and the Brain
The language loop is found in the left hemisphere in about 90% of right-handed persons and 70% of left-handed persons, language being one of the functions that is performed asymmetrically in the brain. Surprisingly, this loop is also found at the same location in deaf persons who use sign language. (about.com)
“Production of Oral or Written Language. Normal spontaneous speech begins with the intent to communicate followed by the internal organization of the thought, access to the words to be used in expressing the thought or idea and their phonetic representations (word sounds), the initiation of the intention, and finally the actual production (articulation) of speech. Spontaneous writing makes similar demands, except rather than requiring the external articulation of phonemes, the phonemes are converted into written symbols (graphemes). In typical dominance patterns, most of these language functions are mediated primarily by the left hemisphere. Whether the left, right, or both hemispheres are “responsible” for the intent to communicate is unclear. However the failure to initiate spontaneous communication typically has been associated with left anterior (frontal) lesions.
Language Reproduction. In contrast to language production, language reproduction, in its broadest sense, refers to the ability to reproduce language in either the same or alternate form from which it was perceived. Typically when we think of this aspect of language, we think of the repetition of spoken language or the transcription of spoken or written language. However, reading aloud (as opposed to silent reading for comprehension) also may be considered language reproduction.
Word-finding ability. The ability to associate a “word” with either an internal (thought or recollection) or external (perception) representation of an object or idea is a fundamental function of language. Creating these associations (i.e. words) and then retrieving them, either spontaneously or on cue, appear to be skills relegated to the left hemisphere…
Word recognition. In addition to being able to retrieve a word when needed (verbal expression), linguistic communication also demands that when a word is perceived, either aurally (auditory comprehension) or visually (reading comprehension), its meaning and/or associations are understood (verbal comprehension). Language comprehension may be broken down further into its semantic and syntactic components.
While the left hemisphere clearly is dominant for comprehending both semantics and syntax, again in split-brain studies the right hemisphere has been shown to have some limited semantic capacity and even more limited ability to process syntax independent of the left hemisphere. However, somewhat paradoxically, in the presence of an intact left hemisphere, right hemispheric damage may lead to significant difficulties in appreciating subtle or thematic aspects of communication, especially when metaphors or sarcasm are employed.
Internal use of language. Language not only is used for communicating with others, it also is used internally. It serves as an important base for abstract reasoning and problem solving. While both hemispheres contribute to the development of new and creative insights into the world around us, many of the problems presented to us on a day-to-day basis are represented in verbal terms. Even if not, we often try to assign words to our ideas, motivations, imaginings, and conflicts in order to analyze, manipulate, and weigh their various permutations and potential outcomes. Strictly speaking, what we define as rational thought and abstractive capacities appear to be the application of formal linguistic principles to a particular problem. Again, while the split-brain work has suggested that the right hemisphere certainly is capable of problem solving and decision making (in certain circumstances, apparently even more efficiently than the left hemisphere), it appears that it is the left hemisphere that mediates such thought processes in most individuals.”
Clinical Neuroanatomy by John Mendoza and Anne L. Foundas, p. 346
Functions of different parts of the cortex (according to the Wernicke-Geschwind model)
Reading
Reading aloud. Written language is received by the visual cortex and transmitted to the angular gyrus. The signal is then sent to Broca’s area and the adjacent motor complex for articulation.
Silent reading involves the visual cortex, the angular gyrus, Wernicke’s area and Broca’s area.
The angular gyrus receives the visual information from the visual cortex and recodes it into auditory form and then transmits it to Wernicke’s area for interpretation.
Speech production. The signal moves from Wernicke’s area to Broca’s area, which then transmits it to the motor complex. Obviously, spontaneous speech production involves a lot more than that. This space is reserved for a better explanation.
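Since it helps to see this routing laid out end to end, here is a toy sketch of my own (not from any of the books quoted here) that encodes the simplified Wernicke-Geschwind pathways above as a small lookup table and traces the route for each task:

```python
# Toy illustration of the Wernicke-Geschwind routing summarized above.
# The region names and task routes follow the simplified summary;
# this is just a lookup table, not a brain model.

PATHWAYS = {
    "reading aloud": ["visual cortex", "angular gyrus",
                      "Broca's area", "motor complex"],
    "silent reading": ["visual cortex", "angular gyrus",
                       "Wernicke's area", "Broca's area"],
    "speech production": ["Wernicke's area", "Broca's area",
                          "motor complex"],
}

def trace(task: str) -> str:
    """Return the serial region-to-region route the model predicts."""
    return " -> ".join(PATHWAYS[task])

for task in PATHWAYS:
    print(f"{task}: {trace(task)}")
```

Note how strictly serial the model is; the critique quoted further down argues that real language processing runs along multiple parallel routes.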
Listening
Passive listening.
“During passive listening, activation is almost exclusively limited to the superior temporal areas, possibly due to the fact that no language output (naming) is being required. Sound stimuli that require little or no linguistic analysis, such as noise, pure tones, and passive listening to uninteresting text, produce nearly symmetrical activity in or around the superior temporal gyrus of each hemisphere (Binder et al. 1994). When the task requires listening for comprehension, significant lateralization to the language-dominant hemisphere is present (Schlosser et al. 1998.)
Stimuli of higher presentation rates or greater difficulty produce greater activation. When words are presented too slowly, allowing time for the subject to daydream between stimuli, activation is greatly reduced. Tasks that are uninteresting, although “language rich” may produce activation of primary auditory areas but little activation of language areas. Stimuli that are challenging or interesting produce greater activation.”
Audiology by Ross J. Roeser, Michael Valente, and Holly Hosford-Dunn
Active listening
“Brain regions specifically implicated in listening to the spoken word (active listening) have been identified on MRI scans by subtracting the signal from regions (such as auditory cortex) that are engaged when listening to random tones (passive listening) from the total signal produced by listening to speech.
Listening to speech activates:
“Wernicke’s area on the left side, which is thought to permit discrimination of verbal from non-verbal material; the angular gyrus, which identifies phonemes; the middle temporal gyrus (area 21) and area 37, which identify words from phoneme strings and tap into semantic networks located in the left dorsolateral prefrontal cortex (areas 9 and 46) that must be searched to deduce the meaning of speech; Broca’s area is activated, because when listening to speech we covertly rehearse the articulatory commands needed to pronounce the words, a process referred to as subvocal articulation.”
Neuroscience by Alan Longstaff
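The subtraction method described in that passage is, at bottom, simple arithmetic over activation maps. Here is a minimal sketch, with random arrays standing in for real fMRI data (the shapes, names, and threshold are all invented for illustration):

```python
import numpy as np

# Hypothetical voxel activation maps: one recorded while subjects listen
# to speech, one while they listen to random tones (passive baseline).
rng = np.random.default_rng(0)
speech_map = rng.normal(loc=1.0, scale=0.3, size=(64, 64, 32))
tones_map = rng.normal(loc=0.8, scale=0.3, size=(64, 64, 32))

# The contrast: activity driven by speech over and above plain sound.
contrast = speech_map - tones_map

# Keep voxels whose difference exceeds an arbitrary cutoff. Real studies
# use proper statistics (t-maps, corrections for multiple comparisons).
active_voxels = np.argwhere(contrast > 0.5)
print(f"{len(active_voxels)} voxels survive the naive threshold")
```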
Reading
“Clearly reading requires visual processing. Subsequently, in novice readers, the parieto-temporal region (angular gyrus and Wernicke’s area) dismantles words into phonemes so that they can be identified. However, in experienced readers the extra-striate occipito-temporal cortex (area 19) recognizes entire words instantly. Activation of a network that links the supramarginal gyrus (area 40), and area 37, to the anterior part of the Broca’s area (area 45), via the insula, allows access to semantic networks in the dorsolateral prefrontal cortex so that the meaning and pronunciation of the words can be retrieved. Finally, either subvocal articulation or reading aloud is accompanied by activation of the whole of Broca’s area, the medial supplementary motor area (area 6), motor areas subserving face and tongue (area 4), and the contralateral cerebellar hemisphere.”
Neuroscience by Alan Longstaff
The Wernicke-Geschwind model
“Norman Geschwind assembled these clues into an explanation of how we use language. When you read aloud, the words (1) register in the visual area, (2) are relayed to the angular gyrus that transforms the words into an auditory code that is (3) received and understood in the nearby Wernicke’s area and (4) sent to Broca’s area, which (5) controls the motor complex, creating the pronounced word. Damage to the angular gyrus leaves the person able to speak and understand but unable to read. Damage to Wernicke’s area disrupts understanding. (Comment: Reading, both aloud and for comprehension, is usually impaired in Wernicke's aphasia.) Damage to Broca’s area disrupts speaking (Comment: often also reading aloud).
The general principle bears repeating: complex abilities result from the intricate coordination of many brain areas. Said another way, the brain operates by dividing its mental functions – speaking, perceiving, thinking, remembering – into subfunctions. Our conscious experience seems indivisible. The brain computes the word’s form, sound, and meaning using different neural networks… To sum up, the mind’s subsystems are localized in particular brain regions, yet the brain acts as a unified whole.”
Psychology, Seventh Edition in Modules by David G. Myers
A critique of the Wernicke-Geschwind model
“PET studies have revealed that because visual linguistic stimuli are not transformed into an auditory representation, visual and auditory linguistic stimuli are processed independently by modality-specific pathways that have independent access to Broca's area. Moreover, because the linguistic processing of visual stimuli can bypass Wernicke's area altogether, other brain regions must be involved with storing the meaning of words (Mayeux & Kandel, 1991,p. 845; also see Kolb & Whishaw, 1990, pp. 582-583). Thus, not only do there seem to be separate -- parallel -- pathways for processing the phonological and semantic aspects of language, language processing clearly involves a larger number of areas and a more complex set of interconnections than just those identified by the W-G model (Wernicke-Geschwind model) (Mayeux & Kandel, 1991, p. 845). Indeed, the PET studies support the notion that language production and comprehension involve processing along multiple routes, not just one:
No one area of the brain is devoted to a very complex function, such as 'syntax' or 'semantics'. Rather, any task or function utilizes a set of brain areas that form an interconnected, parallel, and distributed hierarchy. Each area within the hierarchy makes a specific contribution to the performance of the task. (Fiez & Petersen, 1993, 287)."
Using PET: Toward a Naturalized Model of Human Language Processing by Robert S. Stufflebeam
Active reading
“Recall that according to Wernicke both visual and auditory information are transformed into a shared auditory representation of language. This information is then conveyed to Wernicke’s area, where it becomes associated with meaning before being transformed in Broca’s area into output as written or spoken language…Using PET imaging, they determined how individual words are coded in the brain when the words are read or heard. They found that when words are heard, Wernicke’s area becomes active, but when words are seen but not heard or spoken, there is no activation of Wernicke’s area. The visual information from the occipital cortex appears to be conveyed directly to Broca’s area without first being transformed into an auditory representation in the posterior temporal cortex.”
Essentials of Neural Science and Behavior by Eric R. Kandel, James H. Schwartz, and Thomas M. Jessell
Thursday, August 13, 2009
Evidence of Mirror Neurons in Human Inferior Frontal Gyrus
That's Broca's area. Mirror neurons may indeed play an important part in second language acquisition; they are supposed to be all about observation and imitation. See also: Interhemispheric foreign language learning.
"There is much current debate about the existence of mirror neurons in humans. To identify mirror neurons in the inferior frontal gyrus (IFG) of humans, we used a repetition suppression paradigm while measuring neural activity with functional magnetic resonance imaging. Subjects either executed or observed a series of actions. Here we show that in the IFG, responses were suppressed both when an executed action was followed by the same rather than a different observed action and when an observed action was followed by the same rather than a different executed action. This pattern of responses is consistent with that predicted by mirror neurons and is evidence of mirror neurons in the human IFG. 10.1523/JNEUROSCI.2668-09.2009"
link
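The logic of the repetition suppression paradigm is simple enough to state in a few lines. Below is a toy sketch of the response pattern the abstract predicts for mirror neurons; the action names and response values are invented for illustration:

```python
# Repetition suppression, caricatured: a population that codes an action
# the same way whether it is executed or observed should respond less
# when that action is immediately repeated, regardless of modality.

BASE_RESPONSE = 1.0  # hypothetical response to a new action
SUPPRESSION = 0.4    # hypothetical reduction on repetition

def response(previous_action: str, current_action: str) -> float:
    """Response on the current trial given the preceding trial's action."""
    if previous_action == current_action:
        return BASE_RESPONSE - SUPPRESSION  # same action repeated: suppressed
    return BASE_RESPONSE                    # different action: full response

print(response("grasp", "grasp"))  # executed grasp -> observed grasp: 0.6
print(response("grasp", "point"))  # executed grasp -> observed point: 1.0
```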
First Evidence Found of Mirror Neuron’s Role in Language
"...the findings suggest that mirror neurons play a key role in the mental "re-enactment" of actions when linguistic descriptions of those actions are conceptually processed."
"There is much current debate about the existence of mirror neurons in humans. To identify mirror neurons in the inferior frontal gyrus (IFG) of humans, we used a repetition suppression paradigm while measuring neural activity with functional magnetic resonance imaging. Subjects either executed or observed a series of actions. Here we show that in the IFG, responses were suppressed both when an executed action was followed by the same rather than a different observed action and when an observed action was followed by the same rather than a different executed action. This pattern of responses is consistent with that predicted by mirror neurons and is evidence of mirror neurons in the human IFG. 10.1523/JNEUROSCI.2668-09.2009"
link
First Evidence Found of Mirror Neuron’s Role in Language
"...the findings suggest that mirror neurons play a key role in the mental "re-enactment" of actions when linguistic descriptions of those actions are conceptually processed."
Tuesday, August 11, 2009
Looking under the hood of language processing and language learning
Broca’s area is a region of the brain responsible for speech articulation. It controls the motor complex, which is responsible for speech production. “For a long time, it was assumed that the role of Broca's area was more devoted to language production than language comprehension. However, recent evidence demonstrates that Broca's area also plays a significant role in language comprehension. Patients with lesions in Broca's area who exhibit agrammatical speech production also show inability to use syntactic information to determine the meaning of sentences. Also, a number of neuroimaging studies have implicated an involvement of Broca's area, particularly of the pars opercularis of the left inferior frontal gyrus, during the processing of complex sentences.” (Wikipedia)
“More recently, Broca's area has been implicated in music processing, leading some researchers to suggest music may be processed as a language. Imaging studies have revealed that professional musicians trained at an early age have an increased volume of gray matter in Broca's area. Broca's area is part of a language and music processing network that includes Wernicke's area, the superior temporal sulcus, Heschl's gyrus, planum polare, planum temporale, and the anterior superior insular cortices.” Link
(Comment: Broca's area is essential for producing language and mediating grammar. The Broca’s area of the polyglot Emil Krebs was larger and organized differently from that of monolingual men; it is unclear whether this was so from birth or whether it was a result of language learning.)
Superior temporal gyrus: "a gyrus in the upper part of the temporal lobe. Contains the primary auditory cortex. The anterior part of this region has been implicated in generating the aha! experience of insight.” Link
Averbia is a specific type of anomia in which the subject has trouble remembering only verbs. This is caused by damage to the frontal cortex, in or near Broca's area.
Brain imaging findings:
- Thinking about words makes Broca’s area light up.
- Thinking about words and speaking generates widespread activity.
- Inner speech: Broca’s area active.
- Word retrieval (lexical information): Broca’s area involved.
- Prosody (the musical intonation of speech): Broca’s area involved.
- Preparing to speak: Broca’s area active.
The superior temporal gyrus contains several important structures of the brain, including: Brodmann areas 41 and 42, marking the location of the primary auditory cortex, the cortical region responsible for the sensation of sound; Wernicke's area, Brodmann 22p, an important region for the processing of speech so that it can be understood as language. (Wikipedia)
The auditory association area (Wernicke's area, or area 22) is an important region for the processing of acoustic signals so that they can be distinguished as speech, music, or noise. It is located within the temporal lobe of the brain, posterior to the primary auditory cortex. It is considered a part of the temporal cortex. It stores memories of sounds and permits perception of sounds. Wernicke's area (posterior part of the superior temporal gyrus) is connected to Broca's area via the arcuate fasciculus, a neural pathway, and to the visual cortex via the angular gyrus. It is the semantic processing center of the brain which plays a significant role in the conscious comprehension and interpretation of spoken words by both the listener and speaker. Words are not understood until they are processed by Wernicke’s area.
Primary auditory cortex. Located at the superior margin of the temporal lobe. Receives information related to pitch, rhythm and loudness.
The basal ganglia are large knots of nerve cells deep in the cerebrum. Structures counted among the basal ganglia include the amygdala, the globus pallidus, and the striatum (containing the caudate nucleus and the putamen). The basal ganglia (or basal nuclei) are interconnected with the cerebral cortex, thalamus, and brainstem. They are associated with motor control and learning, and participate in concert with the cortex in cognition and emotion. Parkinson's disease is an affliction of the basal ganglia. New (controversial) evidence suggests that these structures may also be involved in language processing.
“The Declarative/Procedural Model of Pinker, Ullman and colleagues claims that the basal ganglia are part of a fronto-striatal procedural memory system which applies grammatical rules to combine morphemes (the smallest meaningful units in language) into complex words (e.g. talk-ed, talk-ing). We tested this claim by investigating whether striatal damage or loss of its dopaminergic innervation is reliably associated with selective regular past tense deficits in patients with subcortical cerebrovascular damage, Parkinson’s disease or Huntington’s disease. We focused on past tense morphology since this allows us to contrast the regular past tense (jump-jumped), which is rule-based, with the irregular past tense (sleep-slept), which is not…. All patient groups showed normal activation of semantic and morphological representations in comprehension, despite difficulties suppressing semantically appropriate alternatives when trying to inflect novel verbs. This is consistent with previous reports that striatal dysfunction spares automatic activation of linguistic information, but disrupts later language processes that require inhibition of competing alternatives.
It seems more likely that neocortical regions are critical for this processing rather than the basal ganglia. Such a conclusion would be consistent with our recent finding that healthy volunteers show increased activation of the left inferior frontal gyrus and the left superior temporal gyrus when processing the regular past tense than irregular forms or words matched to past tense phonology (Marslen-Wilson et al., 2003)."
The basal ganglia and rule-governed language use: evidence from vascular and degenerative conditions by C. E. Longworth, S. E. Keenan, R. A. Barker, W. D. Marslen-Wilson and L. K. Tyler
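The regular/irregular contrast this study leans on is the classic "words and rules" split: irregular forms behave like stored lookups, regular forms like the output of a productive rule. A toy sketch of that split (the verb lists are my own examples, not from the paper):

```python
# Dual-route past tense, caricatured: memorized irregulars first
# (declarative-style lookup), then the productive '-ed' rule
# (procedural-style computation) as the fallback.

IRREGULAR = {"sleep": "slept", "go": "went", "sing": "sang"}

def past_tense(verb: str) -> str:
    """Lookup first; fall back to the regular rule."""
    if verb in IRREGULAR:
        return IRREGULAR[verb]  # memorized form wins
    if verb.endswith("e"):
        return verb + "d"       # bake -> baked
    return verb + "ed"          # jump -> jumped

print(past_tense("sleep"))  # slept  (lookup)
print(past_tense("jump"))   # jumped (rule)
```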
(Comment: language processing is different from language learning).
“In the case of adult language learning, for example, proceduralized linguistic input may eventually be stored in the neocortex, but only after making a loop through the circuits of the basal ganglia… Immersion learning is slightly more procedural in nature, whereas classroom learning is slightly more declarative. Although in the past several decades, some teaching methods have endeavored to change this.”
The Neurobiology of Learning
By John H. Schumann, Sheila E. Crowell, Nancy E. Jones, Namhee Lee
The striatum is a subcortical part of the cerebrum. It is the major input station of the basal ganglia system and the part of the basal ganglia with the most complex shape.
"At a subcortical level, the main connections are located at the striatum. There is convincing evidence that highly automatized language skills are processed at this level. Agloti, Beltramello, Girardi, and Fabbro (1996) reported a case of aphasia where the patient had been bilingual in a Venetian dialect and standard Italian, two rather different languages. Her first and daily language was Venetian. However, after a stroke, she lost her Venetian language completely and was only able to speak standard Italian. It turned out that her brain damage was located at the subcortical level at striatum."
"Recent functional neuroimaging studies support a likely role for the dominant striatum in language, as activations were found during various different tasks such as speech, syntactic processing, lexical processing, word memorisation, word retrieval, and writing."
link
Comment: The brain learns in two different ways. One, called declarative learning, involves the medial temporal lobe and deals with learning facts that can be recalled and used with great flexibility. Declarative learning and memory (also called explicit learning or, simply, memory) involves rapid learning, conscious recollection, and explicit declaration; it develops after language development and is characterized by an analytical, language-based, memory-dependent approach to acquiring and retaining knowledge. Distraction-free studying is more efficient and effective for this type of learning.
The second, involving the striatum, is called habit learning. There is convincing scientific evidence that highly automatized language skills are processed at this level.
The two types of learning compete with each other, and when someone is distracted, habit learning takes over from declarative learning: one learns particular habitual tasks better while declarative learning suffers. The declarative and procedural memory systems interact both cooperatively and competitively in the acquisition and use of language.
And what is the outcome of successful language learning?
"Expert language learners attain a level of competence that is virtually comparable to that of native speakers (Coppieters 1987; Birdsong 1992,1997). The final state of these speakers is near-native – their “mental package” must look a lot like a native grammar since it includes phonological, morphological and syntactic rules. The documentation strongly suggests that these expert L2ers do indeed possess a grammar and not simply a collection of cognitive facts and strategies. Furthermore, while an intermediate L2 grammar appears to be less complete than the expert’s, it shares many of the characteristics of the superior L2er. Third, L2ers acquire knowledge that they are not taught and which does not transfer from the L1… This kind of evidence strongly supports the claim that knowledge of L2 is internally systematic, procedural and untaught; it is grammatical, not exclusively cognitive.
The documentation furnished by brain scans and neuro-electrical measurement complements and corroborates studies of language deprivation indicating a post-Critical Period loss of ability to acquire L1 morphosyntax (Curtiss 1988), but a persistent ability to acquire lexical items… It is evident that the neurological “etching” (Platzack 1996) of L1 must take place during the first five years of life for grammar to be really well established. If the L1 is so engraved, there is a grammar template for the L1 and for future additional languages. Child bilinguals have a double grammar in the Broca’s area of the brain, while adult bilinguals construct a second (L2) grammar separate from L1, BUT IN THE SAME APPROXIMATE REGION (Kim et al. 1997). Given the (often subtle) deficiencies of L2 grammars, one may infer that the L2 grammar is less deeply engraved. In contrast to grammatical knowledge, lexical information for both L1 and L2 is stored and accessed in a similar manner (Weber-Fox and Neville 1999)…. The L1 provides the template that permits the acquisition of L2, but ironically also interferes with that acquisition by the very depth of L1 neurological engraving…”
The Second Time Around by Julia Rogers Herschensohn
An opposing view:
"Among the factors that typically lead to native-like proficiency in L2, aptitude, meaning the ability to learn explicitly, becomes one of the major variables. The fact that cognitive aptitude strongly correlates with success of L2 learning (Ehrman & Oxford, 1995) again suggests that high attainment in L2 is the result of learning rather than acquisition. All these factors are associated with learning performance in any knowledge domain subserved by declarative memory."
Declarative and Procedural Determinants of Second Languages by Michel Paradis
Saturday, August 8, 2009
Fossilization, automatization and second language acquisition research
Excerpts from “The Neurobiology of Learning”
by John H. Schumann, Sheila E. Crowell, Nancy E. Jones, Namhee Lee
“Aphasic syndromes caused by BG (basal ganglia) lesions indicate what roles the BG may play in language functions. According to Fabbro (1999), basal ganglia aphasics develop symptoms such as reduced voice volume, foreign accent syndrome, perseveration (involuntary repetition of words, syllables etc.), and agrammatism. Additionally, a polyglot’s more fluent language tends to be more seriously damaged than a less fluent language. The interesting fact that the patients’ second languages are better preserved may imply that their second languages are processed more by the declarative memory system. Although the automatization of a second language through the BG is ongoing, it may not be complete. When a patient suffers a BG lesion, the parts of the second language that have already been proceduralized will be damaged, but other parts of the second language that have not been proceduralized will be preserved. In contrast, a first language may have been almost completely proceduralized without leaving much of a trace in the declarative system. This may be why BG-lesion patients cannot produce their first language in spite of their intact declarative memory system...
Knowledge about the BG functions may have important implications for the area of linguistics in general and SLA in particular.
Learning fixed expressions: Chunking
Some researchers have noticed that second language learners tend to learn frequently co-occurring words and delexicalized chunks (Sinclair, 1991; Tannen, 1989). This phenomenon may be explained by the chunking mechanism of the BG. Previously, we discussed how the BG participates in the process of chunking the cortically distributed information into a unitary sequence through convergence, divergence and reconvergence. Whenever a second language speaker uses one of these fixed expressions, he or she may simply activate the relevant basal ganglia circuit so that he or she does not need to apply a grammar rule or a phonological rule step by step.
Automatization of Syntax and Phonology: DP and IP
Learning and producing the phonology and grammar of a target language probably involve both the direct pathway and the indirect pathway. Through numerous repetitive inputs of the target language and its production, a second language speaker may slowly build up stronger synapses among participating neurons in the cortex and basal ganglia, which represent the syntactic and phonological rules of the target language. Finally the learner acquires the ability to execute the rules through the direct pathway of the BG. For example, the choice of word order may be the result of basal ganglia function.
Whenever a second language speaker utters a sentence, perhaps there may be two competing word orders in the speaker’s brain, one probably from his or her first language and another from the target language. When the speaker gets into the target language mode, the target language order may be executed through the direct pathway with the competing order being inhibited by the indirect pathway. Other aspects of grammar may be the same.
Phonology is likely to develop in the same way… As the learner improves his or her fluency in the target language through numerous repetitions in listening and speaking, he or she may acquire the ability to execute this rule through the direct pathway.
(Comment: Pronunciation does improve with language use, regardless of any initial silent period. Even after a long silent period the learner's pronunciation will suffer while he struggles to build sentences. However, unlike the early bird, he will always be able to rely on his good ear for the language. If one insists on speaking from the very start without paying attention to phonology, automatization produces a lot of fossilized errors, and fossilization is unfortunately strongest and most difficult to eradicate at the phonological level. I’ll leave the silent period hypothesis aside. One also needs to consider the usefulness-to-effort ratio and the “permanent damage” theory.)
Formation of Rules of the Target Language
The formation of correct rules is often a difficult process. To form a correct rule, a speaker has to frequently execute the correct sentences related to the rule. However, a beginner cannot execute the correct sentence easily, and every time he or she executes an incorrect sentence, the wrong rule will be strengthened in the relevant neuronal circuits. A paradoxical situation is unavoidable here. The more often a beginner utters incorrect sentences, the stronger the neuronal circuits representing them may become. However, advanced second language speakers conform to the rules of the target language to a greater extent than beginners.”
(Comment: certainly a lot of wasted synapses)
“Fossilized language speakers have two important characteristics (Harley and Swain, 1978; Selinker, 1972). One is that they have already acquired a certain level of communicative fluidity. They can generate utterances in the target language without undue cognitive planning and without consciously building structures. They show less hesitation when engaged in conversation. In summary, their speech has fluency. Another characteristic of fossilized second language speakers is that their learning has stopped or radically slowed down. Their typical utterance structures and phonology do not improve over time although they may be continuously exposed to the target language environment. They continue to make the same grammatical and phonological errors although they are sometimes aware that they are doing so.”
(Comment: In second language acquisition the mighty brain is not always your friend. Message transmitted, message received - the path of least resistance. The native error correction mechanism also contributes negatively since the learner is not encouraged to reformulate his utterance. A possible workaround: a decent silent period, careful production).
“These two characteristics may be explained by BG functions and procedural memory. The first characteristic of fossilized second language speakers, natural fluidity, occurs because they have already acquired the target language procedurally, thus, they have obtained automaticity. By repetitive use of the target language, the speakers may have formed procedural memory of (incorrect) linguistic rules of the target language through the basal ganglia circuits. When one acquires a procedural memory of a motor or a cognitive skill, one can execute it automatically…
The other characteristic of the speakers, rigidity of errors, can also be explained with reference to the BG and procedural memory… Procedural memory is formed more slowly than declarative memory. The other side of the coin is that procedural memory is more robust so that, once formed, it is better preserved, and it is also inflexible, and therefore difficult to change. This is why it is so difficult to correct bad habits… If a fossilized second language speaker has already automatized the linguistic skills through basal ganglia circuits, the automatized skills are naturally resistant to correction and change.
An outstanding question is whether fossilized language can be defossilized… First, defossilization is perhaps possible. It is not too rare to meet fossilized speakers in a language classroom. (No kidding!)
This may be possible for two neurobiological reasons. First, the brain is always plastic, although the extent of plasticity varies according to many factors. Because the brain maintains plasticity, it is not impossible to form a new rule or to correct an incorrect rule. Second, the anatomy of the brain shows that the procedural memory of the basal ganglia can be influenced by other components… Dopamine (DA), which is involved in motivational modulation of its targets, is very important in this system, projecting from the ventral…to the ventral striatum, the ventral pallidum and the dorsal striatum.
From experience, we all know that automatizing declarative knowledge or altering a habitual procedure is difficult and time-consuming. It requires practice and motivation to sustain that practice. Animals probably acquire declarative and procedural knowledge together as they experience the world. With humans, the symbolic species capable of language, it becomes possible to acquire declarative and procedural skills more separately. This type of learning requires cognitive work and the motivation to do that work. The task, of course, is facilitated by aptitude. From an evolutionary perspective, it is easy to understand why it may be difficult to alter motor procedures. Procedures are developed to help the organism thrive in the environment by allowing automatic responses to stimuli. If they were easily altered or disrupted, the animal’s survival would be threatened. Therefore, when a language learner develops incorrect grammatical structure, these habit-protecting difficulties are encountered, and considerable effort is required to develop the correct procedures to override the maladaptive fossilization”.
Krashen from a neurobiological perspective
“Though Krashen himself did not attempt to make a biology-based argument, there are several possible biological assumptions inherent in his position. These are:
1. The areas of the brain involved in subconscious processes (acquisition) are different from those areas involved in conscious processes (learning). That is to say, declarative and nondeclarative learning are accomplished by different areas of the brain.
2. There are no connections between these two brain regions.
3. The declarative system cannot modulate activity in the nondeclarative system. In other words, practicing an explicitly learned rule over and over again will not help the learner to strengthen connections in areas of the brain responsible for proceduralization.
Currently, SLA theorists are moving away from Krashen’s noninterface position, and are taking the stance that rule acquisition in language is a complex cognitive task that lies on the same power function learning curve as other cognitive skills (DeKeyser, 1997). These researchers suggest that SLA is similar to the acquisition of most skills, which appear to involve interactions between the declarative and the nondeclarative memory systems (Berry, 1994; Ellis, 2000; MacWhinney, 1997). Ellis, for example, discussed three likely ways in which explicit and implicit knowledge interact. First, explicit knowledge might be converted into implicit knowledge if the learner is at the right stage of linguistic development. Second, explicit knowledge may lead the learner to listen for a recently learned language structure in the input. Third, explicit knowledge might cause learners to notice differences between their own output and the output of native speakers (Ellis, 2000). These three points are not only borne out by observations of adult language learners, they are also true of the underlying biology. Perhaps only the first of the three points needs some revision based on the research presented in this book. Specifically, we would assert that knowledge that is stored declaratively is not converted into nondeclarative knowledge. Instead, learners acquire and store information in both declarative (hippocampus/cortex) loops and nondeclarative (basal ganglia/cortex) loops…
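(Comment: the “power function learning curve” DeKeyser refers to is the classic power law of practice from skill acquisition research: time per item falls as a power of the number of practice trials, T(N) = T(1) * N^(-b). A minimal sketch in Python; the parameter values are my own, for illustration only, not numbers from any study cited here:)

def time_per_trial(n, t_first=10.0, rate=0.4):
    # Power law of practice: T(n) = T(1) * n**(-rate)
    # t_first and rate are made-up illustrative values
    return t_first * n ** (-rate)

for n in (1, 10, 100, 1000):
    print(n, round(time_per_trial(n), 2))
# prints 10.0, 3.98, 1.58, 0.63: steep early gains, then rapidly
# diminishing returns, the curve shape claimed above for L2 rule learning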
MacWhinney (1997) “cited a number of sources claiming that second language learning is facilitated by explicit instruction. However, he also suggested that implicit learning may still play an important role in the acquisition of a second language. Further, MacWhinney suggested that explicit instruction may actually be harmful if the structures that are being taught are too complicated, irregular, or simplified to the point of being incorrect…
It is likely that there are significant individual differences in the number of cycles through the hippocampus that are needed for each student to learn a rule, as well as differences in the number of cycles required for different rules for the same student. While some students may immediately recognize the discrepancy between the two forms and rapidly begin producing target-like utterances, other students may take weeks or even months to resolve this conflict. One reason for this difference is that each student’s brain has been shaped by idiosyncratic experiences. Furthermore, as was previously discussed, this difference between students may partially result from individual differences in the genes responsible for activating the transcription factors CREB and C/EBP.”
Ventral and dorsal pathways for language
The Two-Streams hypothesis
Introduction
The Two-Streams hypothesis is a widely accepted, but still controversial, account of visual processing. As visual information exits the occipital lobe, it follows two main channels, or "streams."
The ventral stream (also known as the "what pathway") is associated with object recognition and form representation. It has strong connections to the medial temporal lobe (which stores long-term memories), the limbic system (which controls emotions), and the dorsal stream (which deals with object locations and motion).
The dorsal stream (or, "where pathway") is involved in spatial awareness and guidance of actions (e.g., reaching). In this it has two distinct functional characteristics -it contains a detailed map of the visual field, and is also good at detecting and analyzing movements. The dorsal stream commences with purely visual functions in the occipital lobe before gradually transferring to spatial awareness at its termination in the parietal lobe. The posterior parietal cortex is essential for, "the perception and interpretation of spatial relationships, accurate body image, and the learning of tasks involving coordination of the body in space".
The dual stream model for language processing (a hot new hypothesis)
Abstract
Built on an analogy between the visual and auditory systems, the following dual stream model for language processing was suggested recently: a dorsal stream is involved in mapping sound to articulation, and a ventral stream in mapping sound to meaning. The goal of the study presented here was to test the neuroanatomical basis of this model. Combining functional magnetic resonance imaging (fMRI) with a novel diffusion tensor imaging (DTI)-based tractography method we were able to identify the most probable anatomical pathways connecting brain regions activated during two prototypical language tasks. Sublexical repetition of speech is subserved by a dorsal pathway, connecting the superior temporal lobe and premotor cortices in the frontal lobe via the arcuate and superior longitudinal fascicle. In contrast, higher-level language comprehension is mediated by a ventral pathway connecting the middle temporal lobe and the ventrolateral prefrontal cortex via the extreme capsule. Thus, according to our findings, the function of the dorsal route, traditionally considered to be the major language pathway, is mainly restricted to sensory-motor mapping of sound to articulation, whereas linguistic processing of sound to meaning requires temporofrontal interaction transmitted via the ventral route.
link
Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language
Résumé / Abstract
Despite intensive work on language-brain relations, and a fairly impressive accumulation of knowledge over the last several decades, there has been little progress in developing large-scale models of the functional anatomy of language that integrate neuropsychological, neuroimaging, and psycholinguistic data. Drawing on relatively recent developments in the cortical organization of vision, and on data from a variety of sources, we propose a new framework for understanding aspects of the functional anatomy of language which moves towards remedying this situation. The framework posits that early cortical stages of speech perception involve auditory fields in the superior temporal gyrus bilaterally (although asymmetrically). This cortical processing system then diverges into two broad processing streams, a ventral stream, which is involved in mapping sound onto meaning, and a dorsal stream, which is involved in mapping sound onto articulatory-based representations. The ventral stream projects ventro-laterally toward inferior posterior temporal cortex (posterior middle temporal gyrus) which serves as an interface between sound-based representations of speech in the superior temporal gyrus (again bilaterally) and widely distributed conceptual representations. The dorsal stream projects dorso-posteriorly involving a region in the posterior Sylvian fissure at the parietal-temporal boundary (area Spt), and ultimately projecting to frontal regions. This network provides a mechanism for the development and maintenance of parity between auditory and motor representations of speech. Although the proposed dorsal stream represents a very tight connection between processes involved in speech perception and speech production, it does not appear to be a critical component of the speech perception process under normal (ecologically natural) listening conditions, that is, when speech input is mapped onto a conceptual representation. We also propose some degree of bi-directionality in both the dorsal and ventral pathways. We discuss some recent empirical tests of this framework that utilize a range of methods. We also show how damage to different components of this framework can account for the major symptom clusters of the fluent aphasias, and discuss some recent evidence concerning how sentence-level processing might be integrated into the framework.
link
Seems to support Wernicke's 1874 language model
"...Wernicke proposed that sensory representations of speech ("auditory word images") interfaced with two distinct systems, the conceptual system, which he believed was broadly distributed throughout cortex, and the motor system located in the frontal lobe. The interface with the conceptual system supported comprehension of speech, whereas the interface with the motor system helped support the production of speech. Thus, one stream processes the meaning of sensory information (the "what" stream), while the other allows for interaction with the action system (the "how" stream). This is basically identical to what David and I have been claiming in terms of broad organization of our dual stream model..."
Talking Brains
"Experimental data suggest that the division between the visual ventral and dorsal pathways may indeed indicate that static and dynamical information is processed separately. Contrary to Hurford, it is suggested that the ventral pathway primarily generates representations of objects, whereas the dorsal pathway produces representations of events. The semantic object/event distinction may relate to the morpho-syntactic noun/verb distinction."
Markus Werning (2003). Ventral Versus Dorsal Pathway: The Source of the Semantic Object/Event and the Syntactic Noun/Verb Distinction? Behavioral and Brain Sciences 26 (3):299-300.
Thursday, August 6, 2009
Newborn Brain May Be Wired for Speech
By Faith Hickman Brynie
About Faith Hickman Brynie
July 07, 2008
The long, enthusiastic debate about whether the brain is hardwired for language gets a boost now and then, most recently from the release several months ago of a book claiming we are hardwired to, among other things, curse. Continuing research suggests that even though newborns cannot speak or understand language, our brains may indeed be built for language from birth or even before.
“From the first weeks of life the human brain is particularly adapted for processing speech,” says French researcher Ghislaine Dehaene-Lambertz, director of the cognitive neuroimaging research center at the Institut National de la Santé de la Recherche Médicale. Infants’ language learning and processing rely largely on the same brain circuits that adults use, she says.
Studies employing optical topography, a technique that assesses oxygen use in the brain, have shown activity in left-hemisphere speech centers in newborns as young as 2 to 5 days. Marcela Peña of the International School for Advanced Studies in Italy and colleagues found that left-hemisphere activity was greater when the babies heard normal speech than when they heard silence or speech played backward, according to a study published in the Proceedings of the National Academy of Sciences in 2003.
Other behavioral experiments have demonstrated that days- or weeks-old infants can distinguish the “melody” of their native language from the pitches and rhythms of other languages, and that infants can assess the number of syllables in a word and detect a change in speech sounds (such as ba versus ga), even when they hear different speakers.
In 2002 Dehaene-Lambertz’s team used functional magnetic resonance imaging (fMRI) to monitor brain activity while 3-month-old infants listened to 20-second blocks of speech played forward and backward. With forward speech, the same brain regions that adults use for language were active in the babies, with a strong preference for the left hemisphere.
Additional activation in parts of the right frontal cortex was seen in infants who listened to normal speech. The activity occurred in the same brain areas that become active when adults retrieve verbal information from memory.
The French team also found a significant preference for the native language in the babies’ left angular gyrus, an area with increased activity when adults hear words but not nonsense syllables.
In 2006 Dehaene-Lambertz again used fMRI to measure cerebral activity in 3-month-olds who heard short sentences spoken in their native language.
The infants recognized a repeated sentence even after a 14-second interval of silence. The scans showed adultlike activity in the upper region of the brain’s left temporal lobe. The fastest responses were recorded near the auditory cortex, where sounds are first processed in the brain.
Responses slowed down toward the back of the language-processing region and in Broca’s area in the left hemisphere. Activity in that area increased when a sentence was repeated, suggesting that infants may be using a memory system based on Broca’s area just as adults do. These results, reported in the Proceedings of the National Academy of Sciences, demonstrate that the precursors of adult cortical language areas are already working in infants even before the time when babbling begins, Dehaene-Lambertz says.
She offers two possible explanations for these findings. Perhaps certain brain regions are genetically and developmentally “programmed” for language at birth, or even before. Or perhaps these regions are sensitive only to sound or to any rapidly changing sound.
“We do not know yet whether another structured stimulus, such as music, would activate the same network,” Dehaene-Lambertz says. “However, we can say that the processing abilities of an infant’s brain make it efficiently adapted to the most frequent auditory input: speech.”
Edith Kaan, a linguist at the University of Florida, says that researchers are currently studying whether the developing brain handles speech sounds in a different way from other sounds. They also hope to discover how brain regions specialize as children learn to make and understand words, phrases and sentences.
“Eventually, this research may help us understand what capacities are inborn for learning language,” Kaan says. “We may also learn which functions are unique to language and language development, and which are shared with other cognitive activities such as attention, working memory and pattern recognition.”
link
Sunday, August 2, 2009
Learning Languages in Your Pajamas, Eating Captain Crunch
I sort of feel better about my own experience. This guy kicks ass, literally and language-learning-wise (in his pajamas, eating Captain Crunch).
Learning Languages in Your Pajamas, Eating Captain Crunch
October 15, 2008
by Antonio Graceffo
It was a Saturday morning, and I did what I had done every Saturday since I could remember. I got up early, put on my favorite sweatpants (I had outgrown my Batman pajamas), and made myself a huge bowl of Captain Crunch, which I had bought at the PX of the nearby US Army base. I went into the TV room of our dormitory, and I spent the next several hours watching cartoons: “Die Retter der Erde”, “Die Simpsons”, and “Die Familie Feuerstein.” Around twelve o’clock, during a commercial, I ran back to my room and made a stack of peanut butter and jelly sandwiches with the crusts cut off. Accompanied by a glass of chocolate milk, I ate my sandwiches while watching shows for big people, like “Raumschiff Enterprise”, “Ein Käfig voller Helden”, and “Unbekannte Dimensionen.”
I watched till I thought my retinas would burn out. It was a struggle, but I knew this was the price I would have to pay if I wanted to learn German.
After only nine months of German language study in the US, I had earned a place as an exchange student at the Department of Applied Linguistics, University of Mainz, located in Germersheim, Germany.
Of the roughly 2,300 students, about 20% were foreign, that is, non-German. We all had to choose a three-language combination and major in either translation or interpretation. To even be admitted to the program, Germans had to demonstrate competence in English and French, as well as German, regardless of which language combination they planned to study. So, in some cases, students passed the French and English entrance requirements but then studied Russian and Dutch, giving them five languages. Foreign students had to pass the PNDS, the German equivalent of the TOEFL or IELTS, a difficult exam which proves a foreign speaker’s competence in German.
In short, my classmates were the absolute cream of the crop. As a rule, the poorer the country they came from, the more competent they were, because they were required to jump through more hoops to get there. Many of the Africans and Eastern Europeans had already graduated; in some cases they already possessed a PhD in their home country, but came to Germany to obtain a degree which would be accepted everywhere.
In my case, as an exchange student, I skipped all of those entrance requirements. At the end of my exchange semester, when it came time to register for the next semester’s classes, I was already in, so I registered as a regular student. By exploiting this loophole, I stayed at the university for nearly four years without ever having passed a single entrance requirement.
Needless to say, with only nine months of German, I was way behind my classmates. The first day of classes, my head felt like it was splitting. By the third day of attending lectures, I thought I would die. I was doing well to pick out the odd word here and there. There was no way I was going to learn anything by going to more classes. Giving up on school, and consequently on myself, I limped back to the dorm, grabbed some comfort food, and flipped on the TV.
I was watching “Feivel, der Mauswanderer,” an animated film with the original title “An American Tail.” Feivel was the mouse’s name in English. Maus was mouse, but why wanderer? Then it hit me: the German word for emigrant is Auswanderer. So Mauswanderer was a cute play on words, meaning the “mouse emigrant.”
I thought that was pretty cute, so I kept watching. Before I knew it, night had come, and I was still glued to the TV. I wasn’t understanding everything; in fact, I probably understood less than 20%, but I knew that I was learning. So, the next morning, instead of going back to the university, the site of my defeat, I stayed home and watched TV. I set up a rigid schedule for myself of watching TV and working out (to burn off the Captain Crunch), and I stuck to it. Over the next several weeks, I saw my listening and speaking grow by leaps and bounds.
Occasionally German students would come into the TV room and criticize me for watching so much TV.
“Be quiet!” I yelled. “I am studying.”
One day, taking a break from my dedicated TV viewing, I walked into a bookstore. Germans are prodigious readers, and they have some of the best bookstores in the world. I stood in the center of the shop, looking at all of those wonderful books on the shelves, thinking, someday, I will be able to walk into this shop, take any book off the shelf, and read it. At the moment, however, it seemed an impossible dream. While I was standing there, one book caught my eye: “Der mit dem Wolf tanzt” (Dances with Wolves). I don’t know why I was so drawn to the book, but I used some of my food money to buy it.
I took it back to the dorm and it took me a whole day to read about three pages, using a dictionary. This really ate into my TV time, so I abandoned the dictionary and just made a new schedule of reading for so many hours, without looking anything up, and watching TV for so many hours.
Once again, the same Germans who had seen me limp out of the university with my tail between my legs asked, “Do you understand everything in that book?”
“No,” I answered, without hesitation, “But I will the fifth time I read it.”
“That book is not so serious,” said one of the countless German girls named Sabina. “Don’t you think you should be reading technical texts about linguistics?”
“If I can’t understand a book with a picture of Indians fighting on the cover, how am I going to understand a technical book?” I countered.
“Don’t you think you should read German literature, by German authors?”
“I don’t even understand German literature when it has been translated into English. I will stick with my novelized movie book.”
“But that was written for housewives!” shouted Sabina.
“GERMAN housewives,” I pointed out. And at that point, I would have been satisfied with being able to read as well as a German housewife.
Reading “Dances With Wolves” instead of a “real” German novel made sense to me. I knew the story, the context, the history; it was all tangible for me. Only the language was new, and that was what I sought to learn.
I didn’t know it at the time, but my TV viewing and my novel reading without a dictionary were part of a language acquisition method called the “Core Novel method,” developed by a brilliant Hungarian polyglot named Kató Lomb.
Lomb Kató (in Hungarian the family name comes first) was considered by Hungarians to be the greatest living polyglot. She implemented the “Core Novel method”: basically, she chose a novel she loved to read, found a copy in the foreign language she wanted to learn, and worked through it.
Dr. Lomb said that when she set out to choose a language and a novel, she asked these questions: “How much am I interested in it? What do I want with it? What does it mean for me? What good is it for me?”
It just seems so incredibly sensible to me. Dr. Lomb was essentially saying: let the learner choose a language and study materials that have meaning for him or her, stories that he is actually interested in. I liked the story of “Dances with Wolves.” I related to the main character, who is living in another culture, so different from his own. Quite often, in Germany, I considered writing a book entitled “Dances with Translators.”
I cared what happened on the next page and I wanted to learn the language, simply because I wanted to read faster.
Interestingly, Dr. Lomb also suggested not using a dictionary while reading the foreign book. If there was a word or phrase that repeated and was clearly pivotal to understanding the book, but you still hadn’t figured it out by the fourth or fifth occurrence, then you could reach for a dictionary. But, from my own experience, with the dictionary the story had no joy, no relevance, and no flow. I could neither follow nor remember the story. Once I abandoned the dictionary, I found the story flowed. I just read and read. Where I understood, great; where I didn’t understand, also great. Words and phrases that made no sense on page twenty came to life on page eighty.
My next book was “The Bodyguard,” then “Dracula.” Next, I was in France at a street market and saw a very compelling book about a kid growing up in wartime Germany. My French reading level was quite bad, but so strong was my desire to read the book that I bought it anyway. Upon returning to Germany I brought the book to a bookshop, where they helped me find the German-language version. It was the fourth book I read in German and the first where I had no idea of the story before reading.
In addition to reading, I kept up with my TV watching. In Germany, TV is dubbed. Unlike the terrible dubbing once employed in Russia, where a single guy read all of the parts, Germany has excellent, professional-quality dubbing. Famous American stars, such as Robert De Niro or Arnold Schwarzenegger, had their own official dubbers. So, from movie to movie, their voices remained the same. I would watch “The Godfather,” “The Simpsons,” “Star Trek” — anything I enjoyed watching I watched again in German. German students would come into the TV room and ask me, “Did you understand all of that?”
“No.”
“You shouldn’t watch that.”
“Why, are you going to ship me off to a camp?” Sometimes I actually said things like this as a way of getting Germans to leave me alone. Sometimes, I felt like practicing my speaking, so I continued the argument. It was like a free German conversation lesson, the cost of which was a little anger.
“Aren’t you worried that you don’t understand everything?” asked the German.
“Why? Do we have a test?”
“You shouldn’t be watching TV and reading things you don’t understand.”
“But if I only read things I understand, I won’t learn anything. Besides, it would be really boring because I would only be reading children’s books.”
“But ‘The Simpsons’ is a cartoon. Cartoons are for children.”
“Don’t say, that!” Like all delusional people, I became aggressive when my delusions were challenged. “‘The Simpsons’ is more than a cartoon. It is a way of life.”
A huge advantage of reading novels or watching TV is that you get relatively real dialogue. Yes, we don’t all speak exactly like Charles Bronson in “Ein Mann sieht rot”, but none of us speak the way people do in dialogue 23 of the average language textbook. Why do all language textbooks have dialogues about renting hotel rooms or going to the market and buying vegetables? These aren’t discussions I would ever have with a native speaker. These are things I just don’t do all that often. But watching “The Godfather” I learned all of the vocabulary necessary to live as a Mafia don. This is something I have aspired to for years anyway. And now, I am qualified to do the job in two languages.
Fast forward more years than I care to count, and I am in Taiwan, studying Chinese. My Taiwanese friends, the ones who are dedicated students of English constantly read books about English language: books on idioms or gender biases in grammar exercises…. They never just sit down and read a book. As a native speaker, you have most likely never sat down and read an entire book about the English language. But you have probably read, enjoyed and learned from literature written in English.
In school curriculums, language learners, if they read literature at all, are subjected to Mark Twain, “Charlotte’s Web,” and often Shakespeare. These are terrible choices for people who want to learn language. Mark Twain is brilliant, but the dialect makes it hard for low-level learners to read. Do we really want a bunch of Taiwanese kids talking like Riverboat Jim? Shakespeare is the least logical thing to have kids read in a first language classroom. Why on earth would we make them read it in an English learning environment? Kids in Taiwan love baseball. Why not have them read a biography of Babe Ruth?
In my English language classroom I show the kids videos, such as “Mulan” and “Kung Fu Panda.” The context is Chinese, and the stories are familiar. Mulan, for example is an ancient Chinese legend, which the kids had all read in Chinese, before seeing the Disney movie. For myself, I use these and other Disney cartoons to practice Chinese listening. Disney DVDs are equipped with a language switch, so you can choose English or Chinese, complete with same language subtitles.
Reading real German books or watching real German TV would require knowledge of the culture, history, and geography. By using American movies and books, I knew who the bad guy was without anyone telling me. In German I wouldn’t have a clue. For example, when I was in Spain, parents were telling me they didn’t let their kids watch the Bill Cosby show because the children were disrespectful toward their parents. This was amazing because in the States, Cosby was considered a family show, which parents encouraged kids to watch.
When Germans saw “Rocky One” they said things like, “But he did not win. So he is not good.” They missed the point entirely. As I imagined I would miss the point entirely in a German movie I stuck with what I knew.
I once tried watching a Chinese movie, and when I asked who the bad guy was, the Chinese all looked at me like I was nuts. “Didn’t you see the opening scene? General Tsao walked in backwards. Clearly he was in defeat.”
Of course! How could I have failed to pick up on that culturally universal reference?
Eventually, to truly know a language, you will also need to master the culture. So I would eventually have to start watching German, or now, Chinese movies, but one thing at a time.
Now that I am in Taiwan, learning Chinese, there is absolutely no way that I foresee myself changing my tastes and desires to a point that I would enjoy or even understand Taiwanese TV shows. The culture is just so vastly different. For this reason, to do my listening practice I watch Disney movies such as “Mulan” or “The Incredibles,” which have been dubbed into Chinese.
This type of viewing, and the corresponding reading, is a good way to get started, but obviously it has its pitfalls as well.
An American guy in Taiwan — call him Richard — chose not to learn Chinese characters. Instead, he mastered the reading of Bu Pu Mu Fu, a phonetic script used for teaching reading to Chinese children. We all learn it, as we are learning Chinese. The thought is, however, that you would eventually transition into learning real Chinese characters. Richard, like many foreigners, decided characters were just too hard. So, he reads books in Bu Pu Mu Fu as a way of improving his general Chinese fluency. The problem, however is that only children’s books are written in this alphabet.
“Now, I am as fluent as a five year old.” Richard told me. “I don’t know how to move forward.”
The answer seems to be that no matter what language you wish to be fluent at, you will eventually need to learn the writing system and read original literature targeted at college educated adults, if you wish to be as clever in your foreign language as a college educated adult. And that means a lot of work, no matter what language you are dealing with.
Fortunately for me, I am not at that point yet in Chinese. So, I can just watch Cartoon Network, and let the learning seep in.
link
About the author's Chinese experience and other thoughts on language learning also read
Immersion Sandwich and a Side of Rice
Pushing the Conversation
English is not a Foreign Language
Insane Polyglots: Their brains are just different
Translation vs. Natural Language Acquisition
Activating Your Foreign Language
About Antonio Graceffo
link
Learning Languages in Your Pajamas, Eating Captain Crunch
October 15, 2008
by Antonio Graceffo
It was a Saturday morning, and I did what I had done every Saturday since I could remember. I got up early, put on my favorite sweatpants (I had outgrown my Batman pajamas), and made myself a huge bowl of Captain Crunch, which I had bought at the PX of the nearby US Army base. I went into the TV room of our dormitory and spent the next several hours watching cartoons: “Die Retter der Erde,” “Die Simpsons,” and “Die Familie Feuerstein.” Around twelve o’clock, during a commercial, I ran back to my room and made a stack of peanut butter and jelly sandwiches, with the crusts cut off. Accompanied by a glass of chocolate milk, I ate my sandwiches while watching shows for big people, like “Raumschiff Enterprise,” “Ein Käfig voller Helden,” and “Unbekannte Dimensionen.”
I watched until I thought my retinas would burn out. It was a struggle, but I knew this was the price I would have to pay if I wanted to learn German.
After only nine months of German language study in the US, I had earned a place as an exchange student at the Department of Applied Linguistics, University of Mainz, located in Germersheim, Germany.
Of the roughly 2,300 students, about 20% were foreign, that is, non-German. We all had to choose a three-language combination and major in either translation or interpretation. To even be admitted to the program, Germans had to demonstrate competence in English and French, as well as German, regardless of which language combination they planned to study. So, in some cases, students passed the French and English entrance requirements but then studied Russian and Dutch, giving them five languages. Foreign students had to pass the PNDS, the German equivalent of the TOEFL or IELTS, a difficult exam which proves a foreign speaker’s competence in German.
In short, my classmates were the absolute cream of the crop. As a rule, the poorer the country they came from, the more competent they were, because they were required to jump through more hoops to get there. Many of the Africans and Eastern Europeans had already graduated, in some cases with a PhD from their home country, but came to Germany to obtain a degree which would be accepted everywhere.
In my case, as an exchange student, I skipped all of those entrance requirements. At the end of my exchange semester, when it came time to register for the next semester’s classes, I was already in, so I registered as a regular student. By exploiting this loophole, I stayed at the university for nearly four years without ever having passed a single entrance requirement.
Needless to say, with only nine months of German, I was way behind my classmates. The first day of classes, my head felt like it was splitting. By the third day of attending lectures, I thought I would die. I was doing well to pick out the odd word here and there. There was no way I was going to learn anything by going to more classes. Giving up on school, and consequently on myself, I limped back to the dorm, grabbed some comfort food, and flipped on the TV.
I was watching “Feivel, der Mauswanderer,” an animated film with the original title “An American Tail.” Feivel was the mouse’s name in English. Maus was mouse, but why wanderer? Then it hit me: the German word for an emigrant is Auswanderer. So Mauswanderer was a cute play on words, meaning “the mouse emigrant.”
I thought that was pretty cute, so I kept watching. Before I knew it, night had come, and I was still glued to the TV. I wasn’t understanding everything; in fact, I probably understood less than 20%. But I knew that I was learning. So, the next morning, instead of going back to the university, the site of my defeat, I stayed home and watched TV. I set up a rigid schedule of watching TV and working out (to burn off the Captain Crunch), and I stuck to it. Over the next several weeks, I saw my listening and speaking grow by leaps and bounds.
Occasionally German students would come into the TV room and criticize me for watching so much TV.
“Be quiet!” I yelled. “I am studying.”
One day, taking a break from my dedicated TV viewing, I walked into a bookstore. Germans are prodigious readers, and they have some of the best bookstores in the world. I stood in the center of the shop, looking at all of those wonderful books on the shelves, thinking, someday I will be able to walk into this shop, take any book off the shelf, and read it. At the moment, however, it seemed an impossible dream. While I was standing there, one book caught my eye: “Der mit dem Wolf tanzt” (Dances with Wolves). I don’t know why I was so drawn to the book, but I used some of my food money to buy it.
I took it back to the dorm and it took me a whole day to read about three pages, using a dictionary. This really ate into my TV time, so I abandoned the dictionary and just made a new schedule of reading for so many hours, without looking anything up, and watching TV for so many hours.
Once again, the same Germans who had seen me limp out of the university with my tail between my legs asked, “Do you understand everything in that book?”
“No,” I answered, without hesitation, “But I will the fifth time I read it.”
“That book is not so serious,” said one of the countless German girls named Sabina. “Don’t you think you should be reading technical texts about linguistics?”
“If I can’t understand a book with a picture of Indians fighting on the cover, how am I going to understand a technical book?” I countered.
“Don’t you think you should read German literature, by German authors?”
“I don’t even understand German literature when it has been translated into English. I will stick with my novelized movie book.”
“But that was written for housewives!” shouted Sabina.
“GERMAN housewives,” I pointed out. And at that point, I would have been satisfied with being able to read as well as a German housewife.
Reading “Dances With Wolves” instead of a “real” German novel made sense to me. I knew the story, the context, the history; it was all tangible for me. Only the language was new, and the language was what I sought to learn.
I didn’t know it at the time, but my TV viewing and my novel reading without a dictionary were part of a language acquisition method, the “Core Novel” method, developed by a brilliant Hungarian polyglot named Kató Lomb.
Lomb Kató (in Hungarian, the family name comes first) was considered by Hungarians to be the greatest living polyglot, and it was she who implemented the “Core Novel” method. Basically, she chose a novel she loved to read, found a copy in the foreign language she wanted to learn, and worked through it.
Dr. Lomb said that when she set out to choose a language and a novel, she asked these questions: “How much am I interested in it? What do I want with it? What does it mean for me? What good is it for me?”
It just seems so incredibly sensible to me. Dr. Lomb was essentially saying: allow learners to choose a language and study materials that have meaning for them, the stories they are genuinely interested in. I liked the story of “Dances with Wolves.” I related to the main character, who is living in another culture, so different from his own. Quite often, in Germany, I considered writing a book entitled “Dances with Translators.”
I cared what happened on the next page and I wanted to learn the language, simply because I wanted to read faster.
Interestingly, Dr. Lomb also suggested not using a dictionary while reading the foreign book. If a word or phrase repeated and was clearly pivotal to understanding the book, but you still hadn’t figured it out by the fourth or fifth encounter, then you could reach for a dictionary. In my own experience, with the dictionary the story had no joy, no relevance, and no flow. I could neither follow nor remember it. Once I abandoned the dictionary, I found the story flowed. I just read and read. Where I understood, great; where I didn’t understand, also great. Words and phrases that made no sense on page twenty came to life on page eighty.
My next book was “The Bodyguard,” then “Dracula.” Next, I was in France at a street market and saw a very compelling book about a kid growing up in wartime Germany. My French reading level was quite bad, but so strong was my desire to read the book that I bought it anyway. Upon returning to Germany, I brought the book to a bookshop, where they helped me find the German-language version. It was the fourth book I read in German and the first where I had no idea of the story before reading.
In addition to reading, I kept up with my TV watching. In Germany, TV is dubbed. Unlike the terrible dubbing employed in the old Russia, where a single guy reads all of the parts, Germany has excellent, professional-quality dubbing. Famous American stars, such as Robert De Niro or Arnold Schwarzenegger, had their own official dubbers, so from movie to movie their voices remained the same. Anything I enjoyed watching, I watched again in German: “The Godfather,” “The Simpsons,” “Star Trek.” German students would come into the TV room and ask me, “Did you understand all of that?”
“No.”
“You shouldn’t watch that.”
“Why, are you going to ship me off to a camp?” Sometimes I actually said things like this as a way of getting Germans to leave me alone. Sometimes, I felt like practicing my speaking, so I continued the argument. It was like a free German conversation lesson, the cost of which was a little anger.
“Aren’t you worried that you don’t understand everything?” asked the German.
“Why? Do we have a test?”
“You shouldn’t be watching TV and reading things you don’t understand.”
“But if I only read things I understand, I won’t learn anything. Besides, it would be really boring because I would only be reading children’s books.”
“But ‘The Simpsons’ is a cartoon. Cartoons are for children.”
“Don’t say that!” Like all delusional people, I became aggressive when my delusions were challenged. “‘The Simpsons’ is more than a cartoon. It is a way of life.”
A huge advantage of reading novels or watching TV is that you get relatively real dialogue. Granted, we don’t all speak exactly like Clint Eastwood in “Ein Mann sieht rot,” but none of us speaks the way people do in dialogue 23 of the average language textbook. Why do all language textbooks have dialogues about renting hotel rooms or going to the market and buying vegetables? These aren’t discussions I would ever have with a native speaker; they are things I just don’t do all that often. But watching “The Godfather,” I learned all of the vocabulary necessary to live as a Mafia don. This is something I have aspired to for years anyway. And now, I am qualified to do the job in two languages.
Fast forward more years than I care to count, and I am in Taiwan, studying Chinese. My Taiwanese friends, the ones who are dedicated students of English, constantly read books about the English language: books on idioms or gender biases in grammar exercises. They never just sit down and read a book. As a native speaker, you have most likely never sat down and read an entire book about the English language. But you have probably read, enjoyed, and learned from literature written in English.
In school curriculums, language learners, if they read literature at all, are subjected to Mark Twain, “Charlotte’s Web,” and often Shakespeare. These are terrible choices for people who want to learn language. Mark Twain is brilliant, but the dialect makes it hard for low-level learners to read. Do we really want a bunch of Taiwanese kids talking like Riverboat Jim? Shakespeare is the least logical thing to have kids read in a first language classroom. Why on earth would we make them read it in an English learning environment? Kids in Taiwan love baseball. Why not have them read a biography of Babe Ruth?
In my English language classroom I show the kids videos such as “Mulan” and “Kung Fu Panda.” The context is Chinese, and the stories are familiar. “Mulan,” for example, is based on an ancient Chinese legend, which the kids had all read in Chinese before seeing the Disney movie. For myself, I use these and other Disney cartoons to practice Chinese listening. Disney DVDs are equipped with a language switch, so you can choose English or Chinese, complete with same-language subtitles.
Reading real German books or watching real German TV would require knowledge of the culture, history, and geography. By using American movies and books, I knew who the bad guy was without anyone telling me; in a German story I wouldn’t have a clue. For example, when I was in Spain, parents told me they didn’t let their kids watch “The Cosby Show” because the children on the show were disrespectful toward their parents. This was amazing, because in the States Cosby was considered a family show, one which parents encouraged kids to watch.
When Germans saw the first “Rocky,” they said things like, “But he did not win. So he is not good.” They missed the point entirely. And since I imagined I would miss the point just as entirely in a German movie, I stuck with what I knew.
I once tried watching a Chinese movie, and when I asked who the bad guy was, the Chinese all looked at me like I was nuts. “Didn’t you see the opening scene? General Tsao walked in backwards. Clearly he was in defeat.”
Of course! How could I have failed to pick up on that culturally universal reference?
To truly know a language, you will eventually also need to master the culture. So I would have to start watching German, or now Chinese, movies at some point. But one thing at a time.
Now that I am in Taiwan, learning Chinese, I simply cannot foresee my tastes and desires changing to the point where I would enjoy, or even understand, Taiwanese TV shows. The culture is just so vastly different. For this reason, I do my listening practice with Disney movies such as “Mulan” or “The Incredibles,” which have been dubbed into Chinese.
This type of viewing, and the corresponding reading, is a good way to get started, but obviously it has its pitfalls as well.
An American guy in Taiwan, call him Richard, chose not to learn Chinese characters. Instead, he mastered reading Bopomofo, the phonetic script used for teaching Chinese children to read. We all learn it as we are learning Chinese; the idea, however, is that you eventually transition to learning real Chinese characters. Richard, like many foreigners, decided characters were just too hard. So he reads books in Bopomofo as a way of improving his general Chinese fluency. The problem, however, is that only children’s books are written in this script.
“Now I am as fluent as a five-year-old,” Richard told me. “I don’t know how to move forward.”
The answer seems to be that no matter what language you wish to be fluent in, you will eventually need to learn the writing system and read original literature targeted at college-educated adults, if you wish to be as clever in your foreign language as a college-educated adult. And that means a lot of work, no matter what language you are dealing with.
Fortunately for me, I am not at that point yet in Chinese. So, I can just watch Cartoon Network, and let the learning seep in.
link
For more about the author's Chinese experience and other thoughts on language learning, also read
Immersion Sandwich and a Side of Rice
Pushing the Conversation
English is not a Foreign Language
Insane Polyglots: Their brains are just different
Translation vs. Natural Language Acquisition
Activating Your Foreign Language
About Antonio Graceffo
link
Learning by Viewing: Cartoons as Foreign Language Learning Material for Children
Learning by Viewing: Cartoons as Foreign Language Learning Material for Children--A Case Study
"Presents a case study of a six-year-old Finnish girl who learned a foreign language by watching English language cartoons on video, without formal teaching or contact with native speakers. Topics addressed include television versus video; sentence structure; rate of speech; repetition; and learning by viewing versus naturalistic language learning."
It turns out that she was...
"able to use English creatively, and that her skills in the areas of speaking and understanding spoken English were outstanding. She seemed to have been able to acquire the English grammar and an almost native-like pronunciation of ... English, mastering many sounds that are often problematic for Finnish learners of English.
The results of the Bilingual Syntax Measure (BSM) indicated clearly that she could be placed in the fifth, ie the highest level of proficiency, ..."
link
Jylha-Laide (1994) described a case in which a young girl from Finland learned English by repeatedly watching cartoons. Jylha-Laide (1994) said that certain aspects of cartoons may make them easier to learn language from:
(1) the cartoons contain features that effectively capture the viewer-learner’s attention,
(2) they present a strong picture-word interconnection, which corresponds with the ‘here and now’ principle of ‘modified’ registers,
(3) the dialogue of the cartoons is characterised [sic] by sentences that are simple and complete,
(4) the dialogue contains very few disfluencies,
(5) repetition is used frequently, and
(6) the rate of speech is relatively low in some cartoons. (n.p.)
The author asserted that “Laura’s case proves that even a beginning language learner may benefit from viewing ‘ordinary’ television programmes...
D’Ydewalle and Van de Poel (1999) also looked at language learning from video.
In their study of children’s abilities to learn foreign language incidentally from media programs, d’Ydewalle and Van de Poel (1999) found “real but limited foreign-language acquisition by children watching a subtitled movie” (p. 242). The researchers performed an experiment in which Dutch-speaking children from age 8 to 12 watched a film with either a foreign language written in the subtitles with a native soundtrack, or the reverse.
Those who heard the foreign language in the soundtrack acquired more of that new language. This is an area with limited research.
Speaking of Maya & Miguel: The production and representation of Spanish language in an animated series for children
JOURNAL OF SPANISH LANGUAGE MEDIA, Vol. 2 2009, p. 106
Elements of Effective Educational TV
Television’s impact on viewers has been of concern since the flickering blue box began its insidious trickle into every room in our homes. For some, the seemingly passive way in which viewers interacted with the medium led to conclusions that television was a threat to intellectual development (Postman, 1982; Winn, 1985). To others, the concern was that viewing was replacing more cerebral pursuits (Dorr, 1986). And the subject matter, often violent or persuasive, was anticipated to be negatively impacting the social development of children (John, 1999; Kunkel, 2001; Smith, Wilson, Kunkel, Linz, Potter, Colvin et al., 1998)...
Today, there is a large body of research which suggests that those who believe all forms of programming are dangerous, numbing our minds and wasting our time (Mander, 1978; Postman, 1982), may be failing to distinguish between a multitude of variables that alter the relationship between medium and viewer. The most important of which may be content...
In fact, it may be that TV viewing is curvilinearly correlated to academic achievement for low SES students, with the positive impact turning negative at approximately 4 hours of viewing per day.
Unfortunately, the results of most of these studies appear to be confounded by variables not assessed (Hornik, 1981): socioeconomic status (SES), IQ, parental control, and individual motivation all must be considered in order to clearly indicate an effect (p. 196).
Although much of the early research investigating television’s potential impact on children demonstrated that heavy viewing led to a hindrance in language development, more recent research appears to indicate that the more likely relationship lies with the quality of content viewed rather than simply with the time spent in front of the set.
By recognizing the characteristics of a child’s cognitive processing we can better understand how that child might comprehend and learn from media. Children in what Piaget referred to as the Pre-operational stage (approximately between the ages of 2 and 7) learn very differently than older children. They use symbols to represent objects and if these objects are moving the pre-operational child believes them to be alive and have human consciousness. At this age children have difficulty conceptualizing time. They are influenced by fantasy and assume that everyone sees the world from their point of view. They are linear thinkers and so the temporal order of a television program’s storyline is very important as young children may be unable to fill in the blanks or relate to flashbacks. The pre-operational child needs to follow a task right to completion and, because they have low retention, often enjoys seeing things and hearing stories over and over again simply because they cannot remember having seen it in the past. As the repetitions occur the child is beginning to recognize elements she has seen before, she is developing mastery, and mastery makes the story just that much more fun.
Animate inanimate objects
Fantasy is fun
Keep the story in a logical order (no flashback)
Ensure the story comes to its logical conclusion
Repetition leads to mastery
Children who are learning to read must first come to understand the association between letters and words. Similarly, before children can be efficient at comprehending the messages of television they must first come to understand some rules about the forms it takes (Huston & Wright, 1983). These “forms” or “formal features” of television are invisible to most experienced viewers; they include the editing techniques of both the picture and the sound; camera moves (tilts, pans, and zooms); the musical score and pacing of the show. These features provide meaning to the experience of watching (Calvert, Huston, Watkins, & Wright, 1982, Campbell, Wright, & Huston, 1987). They are the means by which information is conveyed, and therefore affect how that information is processed (Neuman, 1995). They denote content to which attention should be paid. It is the manner in which these features are utilized that enables children to make sense of what they are watching. As they become experienced at using and understanding television’s format, children are then capable of a deeper processing of the televised information (Neuman, 1995; Salomon, 1979)...
Others investigated auditory monitoring of the television along with visual attention (Rolandelli et al., 1991) and determined that children may still be listening to the show even when not looking directly at the TV. In their research, Lorch and Castle (1997) maintained that children’s engagement with the content increased as the length of their looks at the television screen increased. The longer they look the more they’re engaged. The more they’re engaged the more chance of teaching them something!
For most pre-schoolers, the cue that material is likely to be interesting and comprehensible to them often comes in the form of:
The voice of a child
The voice of a woman
The voice of one of the program’s characters
Lively music
Wacky sound effects
Jylha-Laide (1994) suggests this learning while viewing might be due to body language. Just as a teacher in a classroom makes actions to support instruction, so often do the characters on TV. It is this implicit information, the linking of words and actions, which may make language learning particularly synergistic to the medium of television...
Rice and Woodsmall (1988) found that repetitions are critical in children’s ability to learn words from television. When children see something they already know or understand, they approach the repeated exposure to it with less discomfort. In her research on closed-captioning, Linebarger (2001) found that exact repetition of words on screen led to increased word recognition. Anderson and his colleagues (2000) found that repeated exposure to a program allowed for viewers’ increased comprehension. However, they also determined that a substantial amount of the content had been learned after viewing the episode just one time.
Repetition is also a key element in enabling a child to transfer learning from one situation to another. Fisch (2001) suggests that presenting the same educational material in several different forms and in different contexts throughout the length of a television program might help children transfer what they have learned to new but similar situations (see also Salomon & Perkins, 1989). Anderson et al. (2000) found that multiple viewings of the same episode of Blue’s Clues significantly increased transfer. Children in their study who watched an episode five times were more likely to use the strategies they observed during the Blue’s Clues episode when they were presented with new problems than were children who had not seen the episode multiple times.
link
"Presents a case study of a six-year-old Finnish girl who learned a foreign language by watching English language cartoons on video, without formal teaching or contact with native speakers. Topics addressed include television versus video; sentence structure; rate of speech; repetition; and learning by viewing versus naturalistic language learning."
It turns out that she was...
"able to use English creatively, and that her skills in the areas of speaking and understanding spoken English were outstanding. She seemed to have been able to acquire the English grammar and an almost native-like pronunciation of ... English, mastering many sounds that are often problematic for Finnish learners of English.
The results of the Bilingual Syntax Measure (BSM) indicated clearly that she could be placed in the fifth, ie the highest level of proficiency, ..."
link
Jylha-Laide (1994) described a case in which a young girl from Finland learned English by repeatedly watching cartoons. Jylha-Laide (1994) said that certain aspects of cartoons may make them easier to learn language from:
(1) the cartoons contain features that effectively capture the viewer-learner’s attention,
(2) they present a strong picture-word interconnection, which corresponds with the ‘here and now’ principle of ‘modified’ registers,
(3) the dialogue of the cartoons is characterised [sic] by sentences that are simple and complete,
(4) the dialogue contains very few disfluencies,
(5) repetition is used frequently, and
(6) the rate of speech is relatively low in some cartoons. (n.p.)
The author asserted that “Laura’s case proves that even a beginning language learner may benefit from viewing ‘ordinary’ television programmes...
D’Ydewalle and Van de Poel (1999) also looked at language learning from video.
In their study of children’s abilities to learn foreign language incidentally from media programs, d’Ydewalle and Van de Poel (1999) found “real but limited foreign-language acquisition by children watching a subtitled movie” (p. 242). The researchers performed an experiment in which Dutch-speaking children from age 8 to 12 watched a film with either a foreign language written in the subtitles with a native soundtrack, or the reverse.
Those who heard the foreign language in the soundtrack acquired more of that new
language. This is an area with limited research.
Speaking of Maya & Miguel: The production and representation of Spanish language in an animated series for children
JOURNAL OF SPANISH LANGUAGE MEDIA, Vol. 2 2009, p. 106
Elements of Effective Educational TV
Television’s impact on viewers has been of concern since the flickering blue box began its insidious trickle into every room in our homes. For some, the seemingly passive way in which viewers interacted with the medium led to conclusions that television was a threat to intellectual development (Postman, 1982; Winn, 1985). To others, the concern was that viewing was replacing more cerebral pursuits (Dorr, 1986). And the subject matter, often violent or persuasive, was anticipated to be negatively impacting the social development of children (John, 1999; Kunkel, 2001; Smith, Wilson, Kunkel, Linz, Potter, Colvin et al., 1998)...
Today, there is a large body of research which suggests that those who believe all forms of programming are dangerous, numbing our minds and wasting our time (Mander, 1978; Postman, 1982), may be failing to distinguish between a multitude of variables that alter the relationship between medium and viewer. The most important of which may be content...
In fact, it may be that TV viewing is curvilinearly correlated to academic achievement for low SES students, with the positive impact turning negative at approximately 4 hours of viewing per day.
Unfortunately, the results of most of these studies appear to be confounded by variables not assessed (Hornik, 1981): socioeconomic status (SES), IQ, parental control, individual motivation, all must be considered in order to clearly indicate an effect (p. 196)1.
Although much of the early research investigating television’s potential impact on children demonstrated that heavy viewing led to a hindrance in language development more recent research appears to indicate the more likely relationship lies with the quality of content viewed rather than simply with the time spent in front of the set.
By recognizing the characteristics of a child’s cognitive processing we can better understand how that child might comprehend and learn from media. Children in what Piaget referred to as the Pre-operational stage (approximately between the ages of 2 and 7) learn very differently than older children. They use symbols to represent objects and if these objects are moving the pre-operational child believes them to be alive and have human consciousness. At this age children have difficulty conceptualizing time. They are influenced by fantasy and assume that everyone sees the world from their point of view. They are linear thinkers and so the temporal order of a television program’s storyline is very important as young children may be unable to fill in the blanks or relate to flashbacks. The pre-operational child needs to follow a task right to completion and, because they have low retention, often enjoys seeing things and hearing stories over and over again simply because they cannot remember having seen it in the past. As the repetitions occur the child is beginning to recognize elements she has seen before, she is developing mastery, and mastery makes the story just that much more fun.
Animate inanimate objects
Fantasy is fun
Keep the story in a logical order (no flashback)
Ensure the story comes to its logical conclusion
Repetition leads to mastery
Children who are learning to read must first come to understand the association between letters and words. Similarly, before children can be efficient at comprehending the messages of television they must first come to understand some rules about the forms it takes (Huston & Wright, 1983). These “forms” or “formal features” of television are invisible to most experienced viewers; they include the editing techniques of both the picture and the sound; camera moves (tilts, pans, and zooms); the musical score and pacing of the show. These features provide meaning to the experience of watching (Calvert, Huston, Watkins, & Wright, 1982, Campbell, Wright, & Huston, 1987). They are the means by which information is conveyed, and therefore affect how that information is processed (Neuman, 1995). They denote content to which attention should be paid. It is the manner in which these features are utilized that enables children to make sense of what they are watching. As they become experienced at using and understanding television’s format, children are then capable of a deeper processing of the televised information (Neuman, 1995; Salomon, 1979)...
Others investigated auditory monitoring of the television along with visual attention (Rolandelli et al., 1991) and determined that children may still be listening to the show even when not looking directly at the TV. In their research, Lorch and Castle (1997) maintained that children’s engagement with the content increased as the length of their looks at the television screen increased. The longer they look the more they’re engaged. The more they’re engaged the more chance of teaching them something!
For most pre-schoolers the cue that material is likely to be interesting and comprehensible to them often comes in the form of 4:
The voice of a child
The voice of a woman
The voice of one of the program’s characters
Lively music
Wacky sound effects
Jylha-Laide (1994) suggests this learning while viewing might be due to body language. Just as a teacher in a classroom makes actions to support instruction, so often do the characters on TV. It is this implicit information, the linking of words and actions, which may make language learning particularly synergistic to the medium of television...
Rice and Woodsmall (1988) found that repetitions are critical in children’s ability to learn words from television. When children see something they already know or understand, they approach the repeated exposure to it with less discomfort. In her research on closed-captioning, Linebarger (2001) found exact repetition of words on screen lead to increased word recognition. Anderson and his colleagues (2000) found that repeated exposure to a program allowed for viewer’s increased comprehension. However, they also determined that a substantial amount of the content has been learned after viewing the episode just one time.
Repetition also is a key element in enabling a child to transfer learning from one situation to another. Fisch (2001) suggests that presenting the same educational material in several different forms and in different contexts throughout the length of a television program might help children transfer what they have learned to new but similar situations (see alsoSalomon & Perkins, 1989). Anderson et al. (2000) found that multiple viewing of the same episode of Blue’s Clues significantly increased transfer. Children in their study who watched an episode five times were more likely to use the strategies they observed during the Blue’s Clues episode when they were presented with new problems than were children who had not seen the episode multiple times.
link
How the Brain Makes Way for a Second Language
How the Brain Makes Way for a Second Language
"Studies involving sophisticated brain imaging technologies called functional magnetic resonance imaging, fMRI, have also revealed some intriguing patterns in the way our brains process first and second languages.
Joy Hirsch and her colleagues at Cornell University used fMRI to determine how multiple languages are represented in the human brain. They found that native and second languages are spatially separated in Broca's area, which is a region in the frontal lobe of the brain that is responsible for the motor parts of language: movement of the mouth, tongue, and palate. In contrast, the two languages show very little separation in the activation of Wernicke's area, an area of the brain in the posterior part of the temporal lobe, which is responsible for comprehension of language.
The fMRI studies suggest that the difficulty adult learners of a second language may have is not with understanding the words of the second language, but with the motor skills of forming the words with the mouth and tongue. This may explain why learners of a second language can oftentimes comprehend a question asked in the new language, but are not always able to form a quick response.
Thus, for adult English language learners, techniques that emphasize speaking may be more successful than methods that focus more on reading and listening. For example, rather than lecturing to a class about vocabulary and grammar, an instructor perhaps should encourage her adult students to have conversations in English, or to act out short skits incorporating the day's lesson, which would more closely link the students' abilities to understand and speak the new language. Speaking would thus equal understanding.
The Cornell researchers also studied the brains of people who were bilingual from a very early age. Presumably, this group of people is able to speak the two languages as easily as they can comprehend both languages spoken to them. The researchers found that these subjects showed no spatial separation in either Broca's or Wernicke's areas for the two languages, indicating that in terms of brain activation at least, the same regions of the brain controlled their ability to process both languages.
The idea that second languages learned early in childhood are not separately processed in the brain is supported by fMRI studies of brain development in children. Researchers at UCLA report that the language areas of the brain seem to go through the most dynamic period of growth between the ages of 6 and 13. In contrast to the "first three years" idea of child development that has received so much press in the past few years, the UCLA study instead suggests that the elementary and middle school years are the biologically most advantageous times for acquisition of a second language.
These various neuroscience studies tell us that the brain is a remarkably plastic entity. A combination of listening and vocalization seems to be the most biologically advantageous method of acquiring a second language for both adults and children. Incorporating what we know about the way the brain processes language into the way languages are taught will benefit not only students who want to learn English, but also all those who wish to extend their linguistic range."
link
"Studies involving sophisticated brain imaging technologies called functional magnetic resonance imaging, fMRI, have also revealed some intriguing patterns in the way our brains process first and second languages.
Joy Hirsch and her colleagues at Cornell University used fMRI to determine how multiple languages are represented in the human brain. They found that native and second languages are spatially separated in Broca's area, which is a region in the frontal lobe of the brain that is responsible for the motor parts of language-movement of the mouth, tongue, and palate. In contrast, the two languages show very little separation in the activation of Wernicke's area, an area of the brain in the posterior part of the temporal lobe, which is responsible for comprehension of language.
The fMRI studies suggest that the difficulty adult learners of a second language may have is not with understanding the words of the second language, but with the motor skills of forming the words with the mouth and tongue. This may explain why learners of a second language can oftentimes comprehend a question asked in the new language, but are not always able to form a quick response.
Thus, for adult English language learners, techniques that emphasize speaking may be more successful than methods that focus more on reading and listening. For example, rather than lecturing to a class about vocabulary and grammar, an instructor perhaps should encourage her adult students to have conversations in English, or to act out short skits incorporating the day's lesson, which would more closely link the students' abilities to understand and speak the new language. Speaking would thus equal understanding.
The Cornell researchers also studied the brains of people who were bilingual from a very early age. Presumably, this group of people is able to speak the two languages as easily as they can comprehend both languages spoken to them. The researchers found that these subjects showed no spatial separation in either Broca's or Wernicke's areas for the two languages, indicating that in terms of brain activation at least, the same regions of the brain controlled their ability to process both languages.
The idea that second languages learned early in childhood are not separately processed in the brain is supported by fMRI studies of brain development in children. Researchers at UCLA report that the language areas of the brain seem to go through the most dynamic period of growth between the ages of 6 and 13. In contrast to the "first three years" idea of child development that has received so much press in the past few years, the UCLA study instead suggests that the elementary and middle school years are the biologically most advantageous times for acquisition of a second language.
These various neuroscience studies tell us that the brain is a remarkably plastic entity. A combination of listening and vocalization seems to be the most biologically advantageous method of acquiring a second language for both adults and children. Incorporating what we know about the way the brain processes language into the way languages are taught will benefit not only students who want to learn English, but also all those who wish to extend their linguistic range."
link
lexical approach
"The lexical approach is a way of analysing and teaching language based on the idea that it is made up of lexical units rather than grammatical structures. The units are words, chunks formed by collocations, and fixed phrases."
Source: TeachingEnglish
Lexical chunk
"A lexical chunk is a group of words that are commonly found together. Lexical chunks include collocations but these usually just involve content words, not grammar."
Source: TeachingEnglish
See also: Lexical Chunks Offer Insight Into Culture
Hanna Kryszewska
HLT Mag Year 8; Issue 3; May 06
(link)
More on lexical approach:
"The lexical approach to second language teaching has received interest in recent years as an alternative to grammar-based approaches. The lexical approach concentrates on developing learners' proficiency with lexis, or words and word combinations. It is based on the idea that an important part of language acquisition is the ability to comprehend and produce lexical phrases as unanalyzed wholes, or "chunks," and that these chunks become the raw data by which learners perceive patterns of language traditionally thought of as grammar (Lewis, 1993, p. 95). Instruction focuses on relatively fixed expressions that occur frequently in spoken language, such as, "I'm sorry," "I didn't mean to make you jump," or "That will never happen to me," rather than on originally created sentences (Lewis, 1997a, p. 212). This digest provides an overview of the methodological foundations underlying the lexical approach and the pedagogical implications suggested by them."
Source: Lexical Approach to Second Language Teaching. ERIC Digest.
Author: Moudraia, Olga
"According to Lewis (1997, 2000) native speakers carry a pool of hundreds of thousands, and possibly millions, of lexical chunks in their heads ready to draw upon in order to produce fluent, accurate and meaningful language.
Language is not learnt by learning individual sounds and structures and then combining them, but by an increasing ability to break down wholes into parts.
Grammar is acquired by a process of observation, hypothesis and experiment.
We can use whole phrases without understanding their constituent parts.
Acquisition is accelerated by contact with a sympathetic interlocutor with a higher level of competence in the target language.
Schmitt: 'the mind stores and processes these [lexical] chunks as individual wholes.' The mind is able to store large amounts of information in long-term memory, but its short-term capacity is much more limited, for example when producing language in speech, so it is much more efficient for the brain to recall a chunk of language as if it were one piece of information. 'Figment of his imagination' is, therefore, recalled as one piece of information rather than four separate words.
The basic principle of the lexical approach is: "Language is grammaticalised lexis, not lexicalised grammar" (Lewis 1993). In other words, lexis is central in creating meaning; grammar plays a subservient, managerial role. If you accept this principle, then the logical implication is that we should spend more time helping learners develop their stock of phrases, and less time on grammatical structures.
Chris: Carlos tells me Naomi fancies him.
Ivor: It's just a figment of his imagination.
Has Ivor accessed 'figment' and 'imagination' from his vocabulary store and then accessed the structure it + to be + adverb + article + noun + of + possessive adjective + noun from the grammar store? Or is it more likely that Ivor has accessed the whole chunk in one go?
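To make that contrast concrete, here is a toy Python sketch, purely my own illustration (not from Lewis, Schmitt, or the TeachingEnglish article), of the difference between recalling a stored chunk in one step and assembling a novel utterance word by word. The chunk store and its entries are hypothetical examples.

```python
# Toy model of the lexical-chunk claim: familiar multiword phrases are
# retrieved as single wholes, while novel utterances must be assembled
# from individual words. All entries here are illustrative, not real data.

CHUNK_LEXICON = {
    "figment of his imagination": "idiom: something imagined, not real",
    "i didn't mean to make you jump": "formulaic apology for startling someone",
    "that will never happen to me": "formulaic claim of invulnerability",
}

def retrieve(utterance: str) -> str:
    """Try one whole-chunk lookup first; otherwise count per-word lookups."""
    key = utterance.lower().strip()
    if key in CHUNK_LEXICON:
        # One retrieval for the entire phrase: the chunk is one unit of memory.
        return "1 lookup: " + CHUNK_LEXICON[key]
    # A novel utterance costs one lookup per word, plus grammatical assembly.
    return f"{len(key.split())} lookups: composed word by word"

print(retrieve("Figment of his imagination"))
# -> 1 lookup: idiom: something imagined, not real
print(retrieve("Carlos tells me Naomi fancies him"))
# -> 6 lookups: composed word by word
```

The asymmetry in lookup cost is the whole point: on Schmitt's account, Ivor pays the one-lookup price for the idiom, not the word-by-word price.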
Tomlinson (2003) sums up the principles, objectives and procedures of a language awareness approach as:
"Paying deliberate attention to features of language in use can help learners to notice the gap between their own performance in the target language and the performance of proficient users of the language. Noticing can give salience to a feature, so that it becomes more noticeable in future input, so contributing to the learner's psychological readiness to acquire that feature."
Source: TeachingEnglish "Lexical Approach"
by
Carlos Islam, The University of Maine
Ivor Timmis, Leeds Metropolitan University
Lexical Approach 1
Lexical Approach 2
Chinese and English Infants' Tone Perception
Chinese and English Infants' Tone Perception: Evidence for Perceptual Reorganization
Authors: Karen Mattock; Denis Burnham
Over half the world's population speaks a tone language, yet infant speech perception research has typically focused on consonants and vowels. Very young infants can discriminate a wide range of native and nonnative consonants and vowels, and then in a process of perceptual reorganization over the 1st year, discrimination of most nonnative speech sounds deteriorates. We investigated perceptual reorganization for tones by testing 6- and 9-month-old infants from tone (Chinese) and nontone (English) language environments for speech (lexical tone) and nonspeech (violin sound) tone discrimination in both cross-sectional and longitudinal studies. Overall, Chinese infants performed equally well at 6 and 9 months for both speech and nonspeech tone discrimination. Conversely, English infants' discrimination of lexical tone declined between 6 and 9 months of age, whereas their nonspeech tone discrimination remained constant. These results indicate that the reorganization of tone perception is a function of the native language environment, and that this reorganization is linguistically based.
link
Language Learning in the Real World for Non-beginners
"Language Learning in the Real World for Non-beginners"
by Greg Thomson
1.1. Key principles of design for an ongoing language learning program
Language learning is at once complex and simple. When I think of the complexity of language learning, I'm amazed that people succeed. As a linguist, I have spent much of my life puzzling over the complexities of language, and I feel I still understand so very little about any language. Yet people do learn new languages, not only as children, but also as adolescents and as adults. Observing that process only increases my sense of wonder. People learn far more than they are aware that they are learning. How do they do it?
by Greg Thomson
1.1. Key principles of design for an ongoing language learning program
Language learning is at once complex and simple. When I think of the complexity of language learning, I'm amazed that people succeed. As a linguist, I have spent much of my life puzzling over the complexities of language, and I feel I still understand so very little about any language. Yet people do learn new languages, not only as children, but also as adolescents and as adults. Observing that process only increases my sense of wonder. People learn far more than they are aware that they are learning. How do they do it?
Fortunately, the bulk of the complexity of language learning is handled by your brain, without your even being aware of it. You simply need to give your brain the right opportunity, and it takes over from there. That is where language learning becomes simple. "Giving your brain the right opportunity" can be boiled down to three principles which are easy to grasp, easy to remember and easy to apply:
-- Principle I: Expose yourself to massive comprehensible input. That is, expose yourself to massive doses of speech (and perhaps writing) that you can understand, while gradually increasing the difficulty level.
-- Principle II: Engage in extensive extemporaneous speaking. That is, engage in extensive two-way conversational interaction, and other speaking and writing activities.
-- Principle III: Learn to know the people whose language you are learning. That is, learn all you can about their lives, experiences, and beliefs. Do this in and through the language.
...
Your language learning experience can be divided into four stages. As I said, during the first weeks of your language learning, you were able to understand speech provided it was well supported by pictures, objects or actions. For example, if you were learning English, and I merely told you, "The bump in the middle of my face is my nose", with my hands folded in my lap, and a blank expression on my face, you would not have had a clue what I was saying. But if I pointed at my nose, and said "This is my nose", and then pointed at my mouth and said "This is my mouth", and then at my ear and said, "This is my ear", and then back at my nose and said, "This is my nose," there would have been a good chance you would understand the meaning of "This is my nose", etc. That is because the meaning of what you heard would be made clear by what you saw. In the same way you would quickly come to be able to understand simple descriptions of pictures. That's life in Stage I.
...
Even though you are now beyond Stage I, you will still find that, other things being equal, it is easier to understand someone's description of a picture if you can see the picture than if you can't. That would even be true if you were listening to your mother tongue, but it is much more the case when you are listening to a language that you are still learning. In the case of your mother tongue, even when you can't see a picture that is being described, you can clearly recognize the words that the speaker is using, and understand the spoken sentences in a general way.
During Stage II, you can understand speech if the content is fairly predictable. The main contribution of pictures during Stage I was that they made the content of what was being said partly predictable. But, in listening to statements about pictures, you were typically hearing only single sentences, or at best short sequences of sentences. Assuming you now have developed some skill in understanding isolated sentences and short sequences of sentences, you need to start working on learning to understand longer sequences of sentences. However, in order for you to understand long sequences of sentences at Stage II, the content still needs somehow to be predictable. Here is a simple example of how that is possible. Consider the story of Goldilocks. If you grew up in the English speaking world, you probably know this story well. At the beginning of Stage II you can have someone tell you the story of Goldilocks in your new language, and to your delight, you will find that you can follow what is being said with good understanding of most sentences right as they are spoken. And so you are indeed able to follow a long sequence of sentences with good understanding. You have thus moved from understanding isolated sentences and short sequences of sentences to understanding long sequences of connected sentences. We will have more to say below regarding ways to do this.
At Stage II then, you are able to understand long sequences of sentences provided the content is fairly predictable. Getting comprehensible input at this stage may mean continuing to expose yourself to speech which is supported by pictures, objects or actions, but it can also mean exposing yourself to a large amount of speech which has this property of predictability, as illustrated by the story of Goldilocks.
Since, in addition to the language, the culture and local history are also new to you, there will be many topics which are common, familiar topics to all native speakers of the language, but which are unfamiliar topics for you. Even fairly straightforward accounts of recent events may baffle you because you are unfamiliar with the general nature of such events, and with the general beliefs associated with such events. Thus you will want to spend a lot of time during this stage making yourself familiar with new topics and types of events that are common in the culture. As you do this, you will increase your ability to understand speech to which you are exposed. I will provide suggestions as to how to do this below. But in the broadest sense, your goal remains the same: get massive comprehensible input. That is, expose yourself to masses of speech (and possibly writing) that you can understand...
Eventually you will reach the point where most of the speech that you hear around you in most situations is reasonably intelligible to you. That is Stage IV. At that point, continuing to receive massive comprehensible input will be a matter of lifestyle. If you choose a lifestyle which largely isolates you from people speaking the language, your progress in acquiring the language will slow to a snail's pace, or cease altogether. But since you are well aware of that, you will put a lot of thought and effort into finding a lifestyle which will support your continued progress in the language, right?
...
To sum up, Principle II is another way of saying that you learn to talk by talking. You might say that you learn how to talk by being exposed to massive comprehensible input, but ultimately you only learn to talk if you talk.
Given what we have said about Principle I and Principle II, we might consider the following formula to come close to the truth:
Massive comprehensible input + extensive conversational practice = powerful language learning
Assuming you have a strategy for getting comprehensible input, and for getting conversational practice, the path to powerful language learning could hardly be simpler.
1.3.2 You can't speak well unless you can speak poorly.
Now you may be thinking that I'm ignoring your main concern. You feel that no matter how you struggle, you are unable to get the grammar right. If you have been learning the language through a formal language course, mastering the grammar may seem to be the central challenge. Perhaps you even got low marks because of all your errors of grammar. Well, I have good news for you. Errors are great! From here on in, you get high marks for errors, at least in my book, and hopefully, in your own book, too. If you're not making errors, you're not breaking new ground. The pathway to accurate speech is through error-filled speech. I therefore suggest that you move your concern for grammatical accuracy away from center stage. Concentrate on getting comprehensible input and conversation practice, and watch your grammatical accuracy improve without your even focusing on it. I will later suggest ways that you can focus on grammar, as well, but that will be more with a view to mopping up persistent problem areas.
...
Have you ever observed a real person learning English as his or her second language? If you have observed such a person over an extended period, you will have noticed that s/he began by speaking English very poorly, and gradually improved until, hopefully, s/he came to speak English well. It always works like that in real life. Granted, some people do better than others, both during the early weeks and in their overall rate of progress and ultimate attainment, but nobody starts out speaking perfectly. Developing good speaking ability is always a gradual process. I can't understand why my high school French teacher and others like her never noticed that.
When you are first learning a new language, your personal version of the language is very different from the version used by the native speakers...
The existence of interlanguages is one of the main reasons we know that brains know how to learn languages. The interlanguages of people learning a given language, let's say, learning English, go through similar stages, regardless of their mother tongue. For example, most people go through the same sequence of patterns in learning to form negative sentences in English...
I say all of this to reassure you that if you keep exposing yourself to comprehensible input, and keep persisting in conversational practice, your speech will keep getting better. Some perfectionistic people don't like this. They would prefer to speak perfectly, or not at all. Well, if you are such a person, swallow your pride. Speak badly. The way to come to be able to speak well is to speak badly for an extended period of time.
So then, speaking the language imperfectly is essential.
...
Principle III says that you must learn to know the people whose language you are learning. All three principles are interdependent. Principle III, like Principle II, is closely related to Principle I (i.e., expose yourself to massive comprehensible input)...
But as we saw in the case of the English word Christmas, learning vocabulary means learning about the areas of human experience to which the vocabulary relates. Or take the word bottle. What if I say, "She screamed and screamed until her mother stuck a bottle in her mouth"? Or how about, "If my husband doesn't get off the bottle, I'm leaving him"? Or perhaps, "We found a note in a bottle". What rich areas of cultural experience, knowledge and belief are linked to this word bottle! Even a simple word like rain is associated with the experience and beliefs of the speech community which uses the word. Knowing vocabulary, which is a key to comprehending input, cannot be separated from knowing the world of the people who speak the language you are learning.
Principle III is also relevant to Principle II (i.e., engage in extensive extemporaneous speaking). You want to learn to talk about any topic that people talk about. The more you know the right words and phrases, the less you will have to rely on communication strategies. And it is not just a matter of knowing the right words and phrases, and the areas of human experience that these relate to. As you get to know the people well, you also come to know the sorts of things that people talk about, and the ways that they talk about those things...
In another essay (Thomson, 1993c), I explain that to learn a language is to become part of a group of people. Every language defines a group of people, namely, the group of people who accept that language as their contract for communication. When people share a language it means that they agree with one another on a grand scale, and in very deep-rooted ways, with regard to how to communicate....
Now, your new language belongs to a different speech community with a different culture, and different shared life experiences. You may share some of the schemas (or, if you prefer, schemata) which arise out of their life experience, but there will be many that you do not share. The more different the new culture is from your own, the more serious this problem becomes...
There is much that people will tell you about how you should and should not behave. Be aware that the cultural value system is more complex than those who follow it realize, and often the "rule" you are told will be an oversimplification. So you need to keep observing as well as listening.
So then, a basic ingredient of successful language learning is learning to know the people who speak the language, learning to know them in depth, and in detail, learning a large body of knowledge and belief which is shared by all normal speakers of the language, learning about the types of social relationships that exist, and learning values that govern behaviour, including speech behaviour. Some of the techniques and activities discussed below will be in part motivated by Principle III...
When I speak of X number of hours spent on language learning, I am referring to three types of activities. The central activities involve structured language sessions in which a speaker of the language works with you in communication activities which help you to increase your ability to understand and to speak the language. You should tape record some or all of what goes on in your session in order to listen to it later, and possibly to go over parts of it in a subsequent session.
The second set of activities are private ones. For example, you may spend a lot of time listening to the tapes that you made in your sessions. You may also write up your observations regarding how the language works, and add vocabulary items to your personal dictionary. If there is a body of literature in the language, you may do extensive reading in it as a private activity. You may also watch television or listen to the radio. So long as you can understand what you are hearing, this will contribute to your acquiring the language. You may also spend some time reading books or articles about the language. Reading about how the grammar works can benefit your language learning in various ways.
The third set of activities are those involved in developing and carrying on a social life. For some people this comes easily. For people like me, it doesn't happen unless I make it happen. Therefore it really helps if social visiting and other social activities can be made a part of my daily work goals. Thus if I spend thirty hours per week on language learning, these thirty hours might include ten hours spent in language sessions, ten hours of private activities (including the time spent planning and preparing for the language sessions), and ten hours of social visiting and other participation in social activities. Different people will have different blends of these three components, but you should devote reasonable attention to each.
To summarize, the three components of your language learning program are:
1. Formal language sessions with someone who is providing comprehensible input and opportunities for extemporaneous speaking.
2. Private activities in which you listen to tapes, read, write, and plan.
3. Social activities in which you use the language, either in understanding messages, in uttering messages, or both.
2.2. Whom do you have?
To become a speaker of a language is to come into relationships. In the broadest sense, you come into a relationship with everyone who speaks the language, in that a language can be thought of as a contract which all its users have tacitly agreed to follow. But you will have many specific relationships that are essential to your language learning progress. You cannot learn a language without the right relationships with people. For example, you cannot learn a language very well if your main source of input is television and radio, though these can be valuable resources in a balanced language learning program. From the standpoint of your language learning, the important relationships are of three types:
1. Language Resource Person(s) [LRPs].
2. Other people with whom you spend a fair amount of time communicating--friends, fellow employees, your parole officer, etc.
3. People with whom you interact in very specific types of encounters, such as the postman, the butcher, or the judge.
A popular catchword in the field of foreign language education is proficiency (see Higgs, 1984; Omaggio, 1986). By proficiency is meant the ability to use the language for authentic purposes in real-life communication situations. A proficiency-oriented course will thus be organized around real-life communication situations. You might wonder why anyone would want to learn to use the language for any other purpose.
Strange as it may seem, I believe that it is easy to misapply this concept. I knew someone who said that the language learner living in the second language community should never learn anything that s/he does not specifically plan to use in communication. This person offered the example of a friend who had needed to buy shoes. The friend therefore spent several hours memorizing some specific sentences for use in buying shoes, went out and said the sentences from memory to the shoe seller, and returned home excited at having used the language for an authentic purpose. The problem is, how often do you buy shoes? Perhaps some of the sentences will carry over to other situations, but still, it probably isn't realistic to spend several hours memorizing specific sentences for narrowly defined communication situations. There is simply too much to learn and too few hours available for learning it...
There is a related movement for learning languages for specific purposes (Widdowson, 1983). It is recognized that learners will be more motivated to learn material which relates to their area of special need or special interest. For example, if a man is planning to work as a nurse in Thailand, then he will be more motivated to learn if the material he is learning is going to be useful in talking to patients and to other health professionals. Once again, a word of caution is in order. I once heard a nonnative English speaker fluently lecture and answer questions related to his special academic field. While answering one of the questions he started to talk about a party he had recently been to, and quickly became tongue-tied. He could talk about his specialized field almost like a native speaker, but he was not nearly as capable of talking about everyday life. Consider our nurse once again. Once he is in his hospital in Thailand he will be getting extensive exposure to the language of nurses and doctors as they talk to patients and talk to each other on work-related matters. Obviously he will want to have some basic ability in dealing with such communication before starting work, but you can pretty well guarantee that, in the course of his day-to-day work, the nurse will have extensive opportunity to improve his job-related speaking ability, even if he develops little ability to use the language for any other purpose. So then, if you have extra time off the job to devote to language learning, there is much to be said for using some of it to improve your general speaking ability, rather than working further on your job-related speaking ability...
As with other aspects of language and culture, you can learn a certain amount about the rules for conversational interaction by careful observation. However, again as with other aspects of language and culture, you will acquire a large amount subconsciously through massive exposure to people who are conducting conversational interactions.
3.1.5. Focusing on special aspects of the language
If you're at all like me, you probably keep wondering when I will get around to talking about learning the grammar of the language, and improving the accuracy with which you speak the language. How do you find your mistakes? How do you overcome them?
Actually, I haven't been ignoring this issue. First of all, I pointed out that you will absorb the vast majority of the grammatical features of the language, and the rules for interaction in the language, from comprehensible input in your language sessions and real life situations. As you become thoroughly familiar with the language, you will naturally acquire the ability to use the language correctly with respect to countless details. You will not be aware of most of those details. If you are a linguist, you may be aware of a lot of details. But even if you are a linguist, you will acquire far more than you will be aware of, simply by becoming thoroughly familiar with the language, through massive exposure to comprehensible input.
Secondly, I have talked about things you might do when communication is difficult or when it breaks down. This may happen, for example, while you are relating your activities of the previous day to your LRP. In that case the breakdown may occur because you lack certain vocabulary or sentence patterns. Similarly, if you are unable to understand part of what your LRP or friend says to you, it may be because you lack vocabulary or sentence patterns, or it may be because you lack some area of knowledge regarding local life and culture. When the problem involves a sentence pattern that you have not learned, I suggested that you engage in some communication activity that will provide you with a large amount of exposure to that pattern. For example, Carol Orwig recently told me of learning Nugunu, an African language in which there is a special verb tense form that is used for events which occurred on the previous day, as opposed to events further in the past. It was easy for her to get a lot of exposure to this form by getting people to recount their previous day's activities. And it was easy to get a lot of practice using this form by recounting her own previous day's activities.
Most grammatical details will naturally occur with high frequency in specific kinds of speech. With a small amount of ingenuity you should be able to think of a way to engage in communication which will contain a large number of examples of the particular sentence form you wish to focus on...
There used to be a widespread belief that the learner would benefit from drilling in various ways on particular sentence patterns in the abstract, apart from using the patterns meaningfully in communication. The benefits of such pattern drills have been generally called into question. Your goal is not to be able to produce the pattern as an end in itself, but to use it in communication. You can get just as much practice using a pattern in communication as you can manipulating it in a meaningless pattern drill. Also, designers of pattern drills tended to have the students drill on patterns regardless of whether or not they were ones that caused difficulty. In current language courses, such drills are not used nearly as much or as widely as they once were, since it is recognized that students need to be learning to communicate extemporaneously in the language. When the students' ability to communicate is hindered by their lack of familiarity with a particular sentence pattern, then it is common practice to stop and focus on that pattern. Or if students consistently make certain errors, there may be some focus on the problem. But the more common concern nowadays is to get the students using the language extemporaneously, both as listeners and as speakers.
Closely related to the issue of grammar is the question of whether you should get people to tell you whenever you "make a mistake". There is a near-universal belief among language learners that it is desirable to have every error corrected as they speak. They may tell people, "Please tell me whenever I make a mistake." But does this really make sense? Remember, it is normal to start out speaking very "poorly" and gradually get better and better. How can people correct every mistake? For a long time, unless you only say things that you have memorized, almost everything you say will be a "mistake" in the sense that you will not say it in the best or most natural way. But you'll get better if you keep talking and talking, and keep being exposed to language that is correctly formed, and within the range of what you can currently understand. The widely accepted view today is that you should mainly concentrate on communicating. Concentrate on understanding people, and on getting your point across. If you do that, your speech will improve. But if people really were to correct your every "mistake", you would get very little communicating done, since you would spend most of your time talking about the form of the language, rather than using the language as best you can to convey your desired meaning...
3.2. Stage III language learning activities
That was fast! You're already at Stage III. Imagine how much slower your progress would have been if you had left matters to chance. You might have eventually reached Stage III, and you might not have. You might have developed a certain level of speaking ability, and then become extremely "fluent" in speaking at that low level, without much further improvement. This is called fossilization. But you haven't fossilized, because you have followed a strategy for exposing yourself to concentrated comprehensible input, and for getting extensive practice at extemporaneous speaking. If, in addition to using powerful strategies during Stage II, you also used powerful and appropriate strategies during Stage I, and assuming the language is of average difficulty, then you'll have only been learning it for three or four months and already you'll have reached Stage III. Stage III is a long stage. You'll be in Stage III for many months.
At all stages, the goals are the same: get massive comprehensible input, engage in extensive extemporaneous speaking, and get to know the people who speak the language you are learning. Achieving these goals gets easier as you go...
3.2.2. Becoming familiar with unfamiliar topics
However, there is something far more important than getting people to talk to you on familiar topics. There is a severe limit to how far that can take you. What is more important is for you to increase the number of local topics with which you are familiar. This takes us back to the matter of schemas, and the fact that successful communication depends on a large body of shared knowledge and experience. Recall how your understanding of my traffic ticket anecdote depended on your knowing the general schema of how traffic tickets are given in North America. Each culture has a large number of schemas that are partly or wholly unique to it. Also, there will be schemas which are important in your new culture which were far less important in your original culture. I go many years at a time in Canada without ever attending a wedding, and when I do, it is quickly over. In Pakistan, by contrast, weddings are one of the major social events. They are very elaborate, and the activities associated with engagements and weddings go on for days. Now a Pakistani learning my culture would probably think that s/he needed to quickly learn the general Canadian wedding schema. Of course, it is something s/he needs to learn, but it is far less important than s/he might imagine. A Canadian in Pakistan might likewise underestimate the importance of learning the wedding schema. In either case, it would be a serious mistake to assume that just because the two cultures both have weddings, the schemas are the same, or even similar....
Chapter 4. Conclusion
As I said at the outset, all you really need to remember are three key principles:
-- Principle I: Expose yourself to massive comprehensible input (possibly including written input).
-- Principle II: Engage in extensive extemporaneous speaking (and possibly writing).
-- Principle III: Learn to know the people whose language you are learning.
The rest of what I have written was intended to make these principles meaningful.
link