INDIVIDUALIZING ELEMENTARY GENERAL MUSIC INSTRUCTION: CASE STUDIES OF ASSESSMENT AND DIFFERENTIATION

By Karen Salvador

A DISSERTATION Submitted to ...
Chapter Summary Carrie Davis took considerable risks in participating in this project during a busy time of the school year. I observed teaching that she felt was not her best, such as when she drilled recorders. I also saw her try out something unfamiliar—facilitating small-group compositions for performance. I am grateful that she shared these teaching moments with me, and allowed me to analyze them and report my findings. I wanted to see real-life teaching, and I have worked diligently to honor her participation with an honest portrayal.
Ms. Davis reported using a variety of assessment methods in her typical teaching.
She administered the PMMA twice a year in most of the grades she taught, graded students on report cards twice a year, and used other formal assessments such as note-recognition tests. She advocated a model in which assessments were used to inform instruction rather than as a way to grade students. Carrie felt that assessments needed to measure individual performance in order to be useful, and that the number of students she taught and how infrequently she saw them made this difficult. Information on Ms. Davis's assessment practices was difficult for me to triangulate, because I did not observe her typical teaching.
During the observation period, Carrie's third-grade students wrote their own mini-musicals for their end-of-year performance. These projects were undertaken in flexible groups, resulting in varied work styles, combinations of background knowledge, student leadership roles, and levels of response sophistication. Sometimes student leaders emerged, and other times they were assigned. The projects were student-centered in that students chose topics, wrote scripts, and directed the mini-musicals. Ms. Davis assessed for performance readiness but did not appear to track music learning resulting from these activities. The composition activity culminated in a performance for an audience and an extended self-evaluation, in which students commented primarily on the social aspects of the project.
Ms. Davis taught students with cognitive impairments (CI) in both self-contained and mainstreamed settings. In the self-contained CI class, she used an early childhood approach that she learned in a Music Learning Theory certification course. This approach seemed appropriate in terms of both cognitive and musical readiness for the CI population. Carrie negotiated a positive relationship with the CI paraprofessionals, who were valued experts on the students' needs and who participated as active, enthusiastic musical models. I was able to observe two students in both self-contained and mainstreamed settings. At their parents' request, Katie was socially mainstreamed, while curricular material was adapted to meet Zack's music learning needs. It appeared that the inclusion model used for Zack may have been more beneficial for students with special needs.
Based on observation, interviews, and think-alouds, Ms. Davis's teaching philosophy seemed constructivist. She functioned as a facilitator in her classroom, with a persona that involved questioning and required students to devise their own solutions. Her role as a facilitator extended to classroom management, transferring much of that responsibility onto the students. As a facilitator, she improvised new lessons when students demonstrated a need for scaffolding or when the material she had planned seemed ill-suited to the social needs of the students that day. Ms. Davis's constructivism also was apparent in her use of choice and centers, as well as her flexible grouping practices. The primary result of Carrie's constructivism and facilitation seemed to be a cooperative and collaborative learning atmosphere. Balancing musicking with the discussion and consensus-building required to create that atmosphere was sometimes difficult, and some students may have benefited from more guidance, both in terms of their behavior and their music learning.
I did not observe much evidence of assessment practices, just as Ms. Davis had feared in our initial interview. I think music learning was difficult to assess because there were no specific goals for any of the projects. Good assessment flows naturally from a solid curriculum reflected in planned learning. However, such a direct-instruction model may be prone to a lack of student-centered features such as student-chosen topics, valuing students' backgrounds (musical and otherwise), or allowing differentiation of learning style, response style, or level of musical sophistication. Ms. Davis's teaching did allow this differentiation. Perhaps optimal music teaching and learning would occur somewhere in the middle of this continuum. While I did not see the connection I was looking for between assessment practices and differentiation of instruction in Ms. Davis's teaching, I did see differentiated instruction. In Ms. Davis's classes, differentiation was more often social in nature than related to sequential music learning. This may be where assessment has a role to play: ensuring that individual students progress musically.
Hailey Stevens: Assessment and Differentiation Intertwined Hailey Stevens’ eyes twinkle and her nose wrinkles as she laughs “Oh, no! I didn’t trick any of you! Let’s see if you can get this one!” First grade students sit cross-legged on the floor in three rows, hands in their laps and eyes intent on their teacher. They all know that the next challenge might be for the whole group or any individual in the group. An egg timer buzzes, and the kids groan. “Well, I guess I’ll have to wait and get you next time, vegetables are done for today.” Ms. Stevens starts to sing a folk song, and continues to sing as she puts the class binder down on her music stand and uses sign language to tell the students to stand. They follow her nonverbal instruction and she leads them in a movement activity related to levels of beat in a song with paired triple meter. Movement is easy in this classroom, where the only chairs are behind Ms. Stevens’ desk and at the six computer stations. One wall features a white board, and the remaining walls are filled with shelving and cabinets where instruments and props are stored. Orff instruments on and off stands fill the corner of the room across from Ms. Stevens’ desk.
The movement activity has ended, and the students are standing on the blue circle ornamenting the otherwise drab grey carpeting. Ms. Stevens picks up a small stool, and the children grin and wiggle in anticipation of continuing the game they started last week. In the context of the game, each child will get at least one turn to stand on the stool and make up a sung pattern for the rest of the class to echo. The game includes a song, which the students sing without Ms. Stevens' assistance. She comments on each student's performance and records in her Palm Pilot that they have had a turn. I know that she is actually rating each performance in terms of singing-voice development and adherence to the tonality of the game song. The game continues for about fifteen turns, maybe four minutes, and then they move on to a new activity.
When my initial email asking Hailey Stevens to participate in this dissertation went unanswered, I was disappointed. From the beginning of my doctoral study, I had been urged to go see Hailey teach, because my advisor thought so highly of her. When I solicited advice on possible participants from faculty at other universities in the area, Ms. Stevens’ name was at the top of several lists. I decided to email her again and was pleased when she responded with questions about what participation would entail and then ultimately agreed to participate. The following chapter explores each of my guiding questions and the themes that emerged from data analysis, including the impact of Ms. Stevens’ beliefs on her teaching and how the environment she created in her classroom impacted assessment and differentiation of instruction.
When and How Was Music Learning Assessed?
In Hailey Stevens’ teaching, assessment was a part of nearly every activity, and several activities in each class were designed to allow formal tracking of individual student progress on specific musical skills. Therefore, the discussion of when and how Ms. Stevens assessed students will be combined. Learning Sequence Activities (LSAs) and embedded assessments took place in every class, but some assessments—report cards, aptitude testing, and written assessments—took place less frequently, so I will discuss those first.
Report cards. Ms. Stevens was required to grade her students once a year using report cards supplied by her school district. Hailey did not like grading only once, “…because it only gives that one snapshot, it doesn’t show any progress over time… I would like to do [report cards] first trimester and last trimester, so there is some time to show growth” (HS Initial Interview, p. 3). As a district, the music teachers decided not to grade kindergarten students, which Ms. Stevens liked, “[b]ecause they are all so young, and developmentally they are all in different places” (HS Initial Interview, p. 2). In grades 1 through 5, the district mandated grading on two grade-level-specific benchmarks. The report cards also provided two blank slots where individual teachers could fill in benchmarks they wished to assess. “Some teachers just do the two required. I like to put in the additional two, so the parents have that information” (HS Initial Interview, p. 2). Ms. Stevens described the report card grading system:
The grading system… aligns with what the classroom teachers do. N is novice, D is developing, so they are progressing towards grade level, P is proficient, so they are at grade level, and it used to be H for high achievement. Which they’ve just recently gotten rid of, and now you can give a P+, which is really the same thing, right? [chuckles] So the student is consistently achieving, going above and beyond grade level expectations (HS Initial Interview, pp. 2-3).
I asked what she thought of this system, and she replied:
I like the system, that it’s not ABCDE traditional letter grades, because it kind of takes away from that label. Like, an A is a good student and a D is a bad student. It kind of takes us away from that mentality, to really focusing on: are they achieving the benchmark? Are they progressing towards the benchmark? (HS Initial Interview, p. 3)

I asked if she thought that the expectation of grading on a report card affected her instruction or student learning, and she said:

I try not to let it. I’m not the kind of person who says to the kids: “You’re gonna get a grade on this” or “When I do your report card… [shakes her finger and scowls]” You know what I mean? I don’t hang grades over their heads…. (HS Initial Interview, p. 4)
Ms. Stevens indicated in our first interview that report cards were not the main reason to assess musical skills and abilities (HS Initial Interview, p. 1). In the course of our conversations and my observations, it became clear that Ms. Stevens collected more assessment information
than was reflected on the report cards:
I would say, depending on the grade level, maybe half of it is on the report card. The other half is just things for me that inform my teaching and help me keep track of [students’] progress for my own sake (HS Final Interview, p. 5).
Ms. Stevens seemed to separate grading for report cards from everyday assessment in her classroom.
I just feel like those are two different purposes for assessment. There is the one side that informs your own teaching, and helps you adjust your instruction to the students. And then there is the assessment that you use when you actually have to make those grades.
The one that is just for me… and it is going to help me teach them better. And the other one, everybody else sees… (HS Final Interview, p. 5).
Most of the assessments Ms. Stevens implemented were integrated into regular instructional
activities. However, sometimes Hailey would assess specifically for report cards:
And the things where I feel like we are just focusing on the assessment [rather than instruction]… [I]t’s usually something for the report card that I don’t really care about.
Like in 2nd grade… we have to assess if [students] can identify step, skip, leap and repeat in notation. I don’t really care about that for my second graders, I want them hearing it, and being able to do it…. So I teach it with one or two activities, we go over it, I do like a token little written assessment, and that’s it (HS Final Interview, p. 8).
Ms. Stevens seldom assessed acontextually, but when she did, it was because the report cards included mandated benchmarks that she did not view as valuable to the students’ music learning.
Aptitude testing. Ms. Stevens administered the Primary Measures of Music Audiation (PMMA; Gordon, 1986) to lower elementary students and the Intermediate Measures of Music Audiation (IMMA; Gordon, 1986) to upper elementary students every fall and every spring, “…so I can see where each student’s potential is, tonally and rhythmically” (Initial Interview, p. 5). PMMA and IMMA each consist of tonal and rhythm subtests, each of which takes about thirty minutes to administer. Testing materials are aural, and students do not need to be musically literate or literate in written English to respond. Scoring students’ responses yields a normed percentile ranking of music aptitude (Gordon, 1986).
Written assessments. Ms. Stevens administered a few written assessments of students’ musical comprehension, each directly related to measuring benchmarks required by the report cards (HS Final Interview, p. 8). During the observation period for this study, I observed two written quizzes in first grade. One tested students’ ability to tell same from different when Ms.
Stevens sang brief tonal examples (HS Field Notes 2/25, p. 3), and the other assessed their ability to label form (e.g., ABA) in aural examples of familiar and unfamiliar songs without words (HS Field Notes 3/18, p. 2). Each of these tests took about 20 minutes of the 40-minute music class.
I did not observe similar written tests in third grade. Ms. Stevens did not like to assign written