Sounds
“...pronunciation is one of the highest ranking aspects of student interest in many different countries.” (Miyake, 2004)
Listeners construct sound-meaning correspondences out of what they conceive of as discrete segments of sound. But in actual speech, these segments blur together into one seemingly continuous stream.
Students thus often don't hear important English sound patterns on their own. Both in music and language, students “hear with an accent” (Patel, 2008; Celce-Murcia, Brinton, & Goodwin, 2010).
In addition, each language has its own unique phonological structure. To someone learning a new language, these facts can make comprehension of even semantically and syntactically simple sentences well nigh impossible (ibid.).
Turning from listening to speaking, when learners don't incorporate the correct sounds of natural English into their own speech, the result is at best an accent, and at worst total incomprehensibility (Celce-Murcia, Brinton, & Goodwin, 2010).
Both listening to and speaking a language fluently thus require mastery of complicated sound patterns unique to each language.
Music not only makes language sounds and sound patterns more noticeable, it provides a ready source of exemplary sounds for imitation. And students pronounce language sounds they learn from music more accurately than those they learn from recorded speech (Speh & Ahramjian, 2009).
Music and language both rely on the same four acoustic dimensions: pitch, loudness, duration, and timbre (Moreno, 2009). All four of these dimensions help listeners segment the flow of speech into meaningful words, phrases, and sentences. By intentionally selecting music which exploits differences in these four dimensions, teachers enhance target language, provide more predictability when introducing new vocabulary, and avoid cognitively overloading students.
Under the umbrella of “Sounds,” I discuss:
- Pitch
  - Pitch - so what?
  - Musical and linguistic pitch
  - Pitch for teachers: Music selection; activities
- Timbre
  - Timbre - so what?
  - Musical and linguistic timbre
  - Timbre for teachers
- Rhythm
  - Rhythm - so what?
  - Musical and linguistic rhythm
  - Rhythm for teachers: Music selection; activities