INDIVIDUALIZING ELEMENTARY GENERAL MUSIC INSTRUCTION: CASE STUDIES OF ASSESSMENT AND DIFFERENTIATION

By Karen Salvador

A DISSERTATION Submitted to ...
I don’t tend to do much written work/assessment, especially with younger grades. You happened to see two written assessments in 1st grade recently because I have to assess those benchmarks for their report cards. The elementary music department has decided that identifying same/different musical ideas is one of the four benchmarks that should be reported on the card. This is something I do teach, but I’ve had to create written tools to formally assess it. Typically, rather than written work, I prefer to assess students’ skills in a musical way, such as through singing, moving, and playing. I tend to value (and thus focus on) the skills and knowledge that can be measured in those musical ways over the skills and knowledge that are measured in writing (HS Journal 3/18, p. 2).
Ms. Stevens described written assessments as a quick way to gauge students’ understanding of a concept (HS Journal 3/18, p. 2). However, she was concerned that written assessments were “not effective for measuring musical skill development,” and that they “[m]ay not truly indicate students’ understanding of concepts being measured (if directions are not understood, if the student has special needs that hinder their ability to complete written tasks, etc.)” (HS Journal 3/18, p. 2). Ms. Stevens used written assessments only when she felt she needed a more formal summative record of students’ abilities to corroborate report card grades.
Learning Sequence Activities. Ms. Stevens began every class with “Learning Sequence Activities” (LSAs; HS Think Aloud 1, p. 1). LSAs are sequential teaching and assessment activities designed to help individual students progress musically (see Gordon, 2007). LSAs typically lasted about 5 to 7 minutes, and this was the only time that students sat in assigned places, in three rows on the carpet. Ms. Stevens set an egg timer and stood next to a stand that held her binder, which contained seating charts and instructions for the current LSA for each class.
LSAs could be tonal or rhythmic and involved a variety of response modes, including echoing chanted material or tonal phrases, responding with an improvised “answer,” responding with the resting tone, labeling musical features with words, and associating solfege. Ms. Stevens would sing or chant cues and either gesture to the group or to an individual to cue a response. She would sometimes respond with the student (“teaching mode”) and sometimes allow the student to respond alone (“evaluation mode”). When an individual responded correctly, Ms. Stevens marked this on her chart.
Each LSA had easy, moderately difficult, and difficult prompt levels for the skill being taught (Gordon, 2007). All students were presented with the easy pattern, and when they accomplished it in teaching and evaluation modes (as described above), they were presented with the moderately difficult pattern in teaching mode, and so on. The class would move to the next LSA
according to the following guidelines:
…the general guideline [is to] mov[e] on when 80% of the class reaches the achievement level that matches their aptitude (low aptitude students achieve at least the easy level;
high aptitude students achieve easy, moderately difficult, & difficult). Usually this happens in 2-4 class periods. If 80% of the class is not achieving at their appropriate level in 2-4 class periods, then I assume that they may not be quite ready for that skill and need some more experiences to develop that readiness before we go back to it (HS
Not every student would get an individual turn during LSAs every day, but every student would participate individually in each LSA before moving on to a new one.
Students in Ms. Stevens’ classes seemed to enjoy LSAs. Ms. Stevens called LSAs “vegetables,” and described them as the work the students needed to do before they could get on to “dessert”: the fun activities she had planned for the rest of their music class that day. In first grade, Hailey made “vegetables” into a game, in which she tried to “catch” individual students or “trick” them. The students giggled, and Ms. Stevens growled, groaned, and laughed as she “sneakily” tried to “catch” students unaware and marked “their turn” in her LSA binder (e.g., HS Field Notes 2/23, p. 3). In third grade, Ms. Stevens remained playful, but in ways appropriate for the older students. At this level, she also talked about how she was not capable of these tasks until she was in college (e.g., associating solfege to tonal patterns) and “challenged them” to show what they could do (e.g., HS Field Notes 3/2, p. 1). I asked about students’ responses to LSAs, and Hailey responded:

They are all different. You have some kids that are just always lazy, no matter what it is you are doing… always some kids that you are going to have to pull along. There [are] also a lot of kids, who [think LSAs are] a lot of fun. Like Mike... the other day we had to do aptitude testing, and I said “Oh, we don’t have time for vegetables today” and he said “Oh, man!” because he loves it. He has a lot of fun doing it (HS Final Interview, p. 11).
Perhaps because of how Hailey presented LSAs, most students I observed seemed to anticipate eagerly the opportunity to respond—sitting tall with sparkling eyes focused on their teacher.
When I asked about the strengths and weaknesses of LSAs, Ms. Stevens replied:
Well, I think one of the weaknesses is, some people think that you have to do it a certain way, follow all the rules exactly, that you have to toe the line in that respect, rather than playing with it, finding what works… for you, what works for your kids. So I think that can be a weakness. If you are too rigid with it, that’s definitely a weakness.
Strengths… I think it makes ME accountable. It forces ME to give each student my attention and individualize instruction for where they are. It forces ME to look at each student’s potential. And to see if they are achieving at a level that matches what their potential is… and it forces me to keep track of their progress… It forces me to HEAR students individually in the first place, so that they can build skills (HS Think Aloud 1,
LSAs offered a daily opportunity to teach and assess sequential music learning. Ms. Stevens used encouragement and humor to make LSAs an enjoyable part of her classroom routine.
Embedded assessments. Ms. Stevens embedded assessments into her music instruction, so that she was constantly tracking, both informally and formally, the music learning progress of individual students as well as of the class as a whole. Hailey frequently checked group comprehension of musical concepts. For example, she asked classes to identify musical features (e.g., form; HS Field Notes 3/9, p. 3), demonstrate movement responses (e.g., in response to changes in instrument timbres; HS Field Notes 3/2, p. 3), and read notation as a group (e.g., HS Field Notes 3/9, p. 1). However, these informal observations seemed to function as teaching tools or as a way to allow students to practice content, rather than as assessments. In addition, Ms. Stevens monitored such whole-group musicking activities as folk dances (e.g., HS Field Notes 3/9, p. 2), singing in three-part chords under a melody (e.g., HS Field Notes 3/4, p. 1), and accompanying singing with body percussion (e.g., HS Field Notes 3/2, p. 2). Hailey never reported these types of activities when she described assessments in her journal, instead focusing on activities that allowed her to collect formal data regarding individual student responses.
In a typical class period, Ms. Stevens began with LSAs. The remainder of music class time would be spent on a variety of instructional activities, including singing, movement, playing instruments, listening to music, and a few rare instances of written work or brief lecture-style instruction. Assessments were embedded in instructional activities in the form of frequent opportunities for individual children to sing, play, or move independently. Ms. Stevens rated these solo performances using four-point rating scales specific to each activity.
I find [rating scales] to be really helpful because it is an easy way of having a standard, a high and a low, and then you can compare students with your standard. So, I think rating scales are really effective… And an effective way of [assessing] quickly, and in a manageable way (HS Final Interview, p. 7).
To illustrate the nature of Ms. Stevens’ embedded assessments, I will describe one activity from each grade level I observed. In first grade, Ms. Stevens gestured to individual students and chanted in triple meter, “Hickety pickety bumblebee, will you chant a pattern for me?” (HS Field Notes 3/23, p. 3). In response, the student chanted a four-macrobeat rhythm on neutral syllables, and then the remainder of the class echoed the rhythm. Using a Palm Pilot, Hailey recorded which students had a turn by rating their improvised rhythm performance using a four-point scale. About eight students had turns for this activity, and responses included one child who used a pickup, several responses of the same rhythm (Figure 6.1), and two students who used prolonged elongations. The students who did not get turns knew that they would have a turn for this activity another day, because Ms. Stevens rarely stayed with one activity long enough for every student in a class to take a turn on the same day.
Figure 6.1. Common “Improvised” Response

In third grade, students reviewed “Sarasponda,” a song that they had learned in second grade (HS Field Notes 2/23, p. 2). Students sang the melody while Ms. Stevens sang chord roots (do, fa, and sol), and then students sang the chord roots while she sang the melody. Some students seemed confused by fa, and Hailey confirmed in her journal that this was the first time students had added IV (fa) to their externalized harmonic vocabulary, which previously consisted of I (do), V (sol), i (la), and v (mi) (HS Journal 2/23, p. 1). With little further instruction, groups of four students played the chord roots on barred instruments to accompany the class as they sang.
Ms. Stevens marked turns in her grade book by rating each student’s performance using a four-point scale. Perhaps because four students performed at the same time, each student had at least one turn in this activity.
I asked Ms. Stevens if all these assessment activities interfered with instruction. She replied, “Mm mm [shakes head “no”]. It could. But I try to integrate it as much as possible and just make it part of the process. I do my assessments on things we would be doing anyway. So I
don’t feel it interrupts” (HS Initial Interview, p. 4). She elaborated further in our final interview:
Most of the time when I plan an assessment it is not just for the purpose of assessment.
The assessment is just an outgrowth of—this is something that is important for [students] to experience and learn, so we are going to do this, and I’m going to keep track of it just so that I know where to go next… [There is m]ore a focus on the learning, and the sequential learning than the assessment itself… I don’t feel like [assessment] ever intrudes on what we are doing. I try to just make [assessment] a natural part of [music
A simple tally of my field notes revealed that, in addition to daily LSAs, Ms. Stevens rated individual musical responses one to three times per class. Typically, about a third or a half of the students in a class gave individual responses as part of an activity before the class moved on to something else, and Ms. Stevens returned to the activity in subsequent classes to hear the remaining students. Ms. Stevens viewed assessment as a natural, embedded part of sequential music learning, which allowed her to track individual progress and adjust her instruction accordingly.
Summary of when and how music learning was assessed. Ms. Stevens assessed music learning in a variety of ways. She assigned report card grades once a year and administered aptitude tests in the fall and spring. Hailey infrequently administered written quizzes and expressed concerns that the written format was not the best way to measure music learning. In every music class, Ms. Stevens’ students participated in LSAs, which were both a teaching tool and an assessment activity. Hailey observed group musicking and checked for group understanding of conceptual information but did not characterize these activities as assessments in her journal.
Most assessments were embedded in instructional activities, and Ms. Stevens viewed them as a natural component of instruction.
Scoring and Tracking the Results of Assessments

Ms. Stevens’ assessment methods resulted in a variety of types of data. Aptitude tests produced percentile rankings of tonal aptitude and rhythm aptitude, which Ms. Stevens recorded in her grade book and on the seating charts in her LSA binder. Written quizzes were scored as a number of correct answers out of the number of possible answers, and this information was recorded on an assessment spreadsheet on Ms. Stevens’ computer (e.g., HS Journal 3/18, p. 2).
Scoring procedures for LSAs and embedded assessments were more complicated.
Scoring LSAs. Ms. Stevens kept a binder on a music stand by her keyboard in the front of the classroom, where she also kept an egg timer and pencil. The binder contained sheets for recording class progress on LSAs that were photocopied from a workbook (e.g., Gordon, 1990).
Each sheet included directions for the LSA, including easy, moderately difficult, and difficult prompts when applicable, and space for a seating chart. As described above, LSA prompts were directed to individual students by using hand gestures, and then the student would respond, first in teaching mode (with Ms. Stevens) and then in evaluation mode (alone). Each student had to respond correctly in teaching mode before progressing to evaluation mode at any level, and had to respond correctly at the easy level before progressing to moderately difficult and then to difficult (see Gordon, 2007; Hailey would sometimes skip teaching mode or skip the easy level for some students, HS Journal 3/16, p. 2). Usually, Ms. Stevens marked a tally next to each child’s name when he or she correctly responded at each level: one tally for easy teaching mode, another for easy evaluation mode, and so on. Five tally marks would indicate a child who had completed teaching mode at the difficult level.