INDIVIDUALIZING ELEMENTARY GENERAL MUSIC INSTRUCTION: CASE STUDIES OF ASSESSMENT AND DIFFERENTIATION
By Karen Salvador
A Dissertation
However, Brummett’s study examined teachers’ use of the framework and did not delve into the music learning experiences or achievements of the students (p. 229). That is, while the processfolios contained detailed records of individual student progress based on a variety of measures, Brummett’s research report instead described the students’ and teachers’ perceptions regarding the assessment framework. Therefore, data that may have indicated precisely how the processfolios contributed to individual learning were not included. However, Brummett did mention that students believed that the elements of group work, reflection, and self/peer evaluation contributed to learning in the classroom. She also stated that teachers agreed with her concept of teaching-learning-evaluating as a continuum and embraced the process-oriented framework (p. 248).
Niebur (2001) based the book Incorporating Assessment and the National Standards for Music Education into Everyday Teaching on her 1997 dissertation. In it, she provided a narrative (including vignettes and thick description) of the standards-based teaching and assessment of four teachers in Arizona. Rather than taking a quantitative approach, Niebur chose to explore the experiences of her four participants in depth, looking for themes that resonated with her experiences as a music teacher and that might seem true to others practicing in the field (pp. 8-9). Niebur presented a holistic picture of these teachers and their teaching, so standards and assessment were portrayed as they interacted in real teaching rather than discussed in isolation. In a design similar to the current study, Niebur did not attempt to propose optimal definitions or uses of standards or assessments, or to evaluate the relative success of any particular approach to them. She simply described how “reflective teachers” integrated standards and assessment into their teaching as a way to inform other teachers and researchers and allow them to draw their own conclusions regarding the meaning and usefulness of the methods or approaches described (pp. 9-10).
Niebur’s participants were four practicing teachers who had just completed a graduate course on measurement in music education at Arizona State University. As part of the class requirements, each participant implemented a new assessment plan to track students’ individual progress toward a musical goal of their choice in a single classroom of students. The professor recommended these participants based on the quality of their assessment assignments and reflective ability. Niebur shadowed each informant for five full school days (rarely consecutive), and also attended selected classes, performances, and special events that related directly to the classes she had observed. During this time period, the study participants also met seven times for group discussion led by the measurement professor. Niebur acted as a participant-observer during these meetings. Finally, Niebur conducted formal and informal interviews with each participant and invited the participants’ feedback in the form of member checks.
Following is a summary of Niebur’s portrayal of one of the teachers as an example.
Niebur described Stephanie Martin in the midst of teaching a recorder unit to two third grade classes. For her graduate school project, Ms. Martin was examining the effect of alternative assessment practices, such as journal writing, on her students’ recorder achievement. While both classes learned the same material (the notes B, A, and G) over the course of the 4-week study period, only one of the classes wrote in daily journals and received written feedback from Ms. Martin. After a week of instruction, each student played individually for Ms. Martin so that she could check whether each student was blowing correctly, covering the holes sufficiently, and holding the recorder with the left hand on top. She told one student, “I just want to hear what you’re going to do for me and if I need to help you some more” (p. 71). Niebur reported that Ms. Martin coached several students to help them achieve a better performance in this brief assessment.
Clearly, this assessment allowed Ms. Martin to individualize instruction, not only because she could hear individual progress, but also because she had a few minutes to interact individually with each child.
At the end of 4 weeks, when students played their final patterns for a videocamera,
…in this classroom where learning is a living, social experience and where students regularly risk performance, freely discuss their triumphs and mistakes, then immediately incorporate their insights into a new performance, today’s stilted silence feels unnatural and unproductive, even unfair. For a few moments, the demands of a test that is specifically designed to generate statistical information is at odds with the ongoing culture of assessment that nourishes the students and the teacher inside this classroom (p.
Niebur and Ms. Martin were both concerned about the effect that the videocamera had on the students’ responses, although the recordings were intended to contribute to the validity of the assessment measure. The quantitative study revealed no significant differences in performance ability between children who journaled and those who did not. However, Ms. Martin stated, “the time for journal writing was well-spent, because it reinforced and preserved a written record of what the children learned” (p. 87). She also reported that, based on her observations, the class that kept journals was more likely to think from one class to the next, to listen, and to follow directions.
Because the observation period had ended, Niebur did not report whether the third grade students continued to work on recorders or moved on to another unit. Therefore, it was difficult to evaluate whether results from the summative video assessment were applied to instruction. However, it was clear that results from ongoing individual and group assessments were routinely applied by Ms. Martin while teaching recorder to her third grade students.
Based on data collected from four participant teachers, Niebur drew several conclusions that directly inform the current study. She stated: “…conditions that are favorable to group music making are not always conducive to individual assessment, so teachers who choose to track the learning of individual students often must adjust their teaching styles to accommodate assessing and recording individual student progress” (p. 145). In Chapter One, I detailed the myriad difficulties teachers have reported regarding assessment of students’ progress in elementary general music, and nevertheless proposed that optimal instruction of elementary music would include tracking individual music learning progress.
Niebur reported, “[the participants] have taken on the challenge of seeking out, and often inventing, assessment tools with which they can attempt to create and share images of individual students’ musical growth” (p. 145). As a result of their course in measurement and their participation in this study, participants reported increased comfort with assessment tools that allowed them to track individual music learning progress without compromising the instruction and musicking the teachers desired in an elementary general music setting (p. 148). Participants voiced concerns that assessment might stifle creativity or result in children with low achievement or aptitude giving up on music. However, they also mentioned benefits of their increased use of assessment, such as increased ability to share information with other teachers, administrators, and parents; increased evidence of accountability for the music curriculum; and advocacy for general music education. Illustrative comments included: “…other teachers think I’m more of a teacher…” (p. 148), “…when I mention to the kids that I’m checking for a certain skill, they sit up taller and try harder… [music class] is not just a place to relax for forty minutes. It’s a class. We’re going to learn something” (p. 148), and “…I think some of my staff have changed their minds, too. It’s not just a planning period anymore” (p. 148). However, according to Niebur’s analysis, “…most often, assessment functioned as a means of illuminating for teacher and students the progress that they had worked so hard to achieve” (p. 152).
Summary. Research literature in music education frequently investigated various methods intended to assess achievement in elementary music education classrooms. However, few studies examined how these assessments could be used to differentiate instruction for individual learners. Extensive research in non-music elementary classrooms indicated that assessment-based differentiated instruction delivered in flexible groupings led to increased achievement. This review revealed only one quantitative study in music education that investigated assessment-based differentiation of instruction, and this study had promising results (Froseth, 1971). A handful of qualitative studies have approached this topic, but they focused on teacher and student attitudes regarding implementation of the assessment rather than on student achievement (Brummett, 1996; Niebur, 2001). Freed-Garrod described using formative assessments of small group work in combination with student self-assessments to guide instruction. In light of the research available, the current study seeks to describe how practicing teachers use the results of assessments to differentiate instruction in their elementary general music classrooms. The current study uses a qualitative design similar to that of Niebur (1997) and Howard (2007), in which examples of assessment and differentiation of instruction will be described in narrative form and analyzed for themes that might be informative to practicing elementary general music teachers.
Researcher Lens
As an undergraduate, I pursued a double major in vocal performance and music therapy.
Although I ultimately decided not to complete board certification in music therapy, leading individual and small-group musical interactions in music therapy practica enriched my teaching when I eventually went back to school and became an educator. These early experiences also contributed directly to my interest in assessment. In music therapy, interventions are structured by a treatment plan that defines the therapeutic goals of each individual client, describes how music will be used to help the client reach each goal, and includes an assessment method to determine when the stated goal has been achieved. When I started teaching elementary general music, my early training in planning therapy sessions influenced my teaching, and I wanted to understand the needs of individual students and document their progress.
I taught in a typical school music setting: about 550 kindergarten through 4th grade students spread over two buildings, whom I saw twice a week for 30 minutes. In subsequent years, my teaching load became somewhat smaller (about 400 students per week), but in my four years of teaching elementary general music I did not engage in anything approximating systematic assessment of music learning as a natural part of instruction. My curriculum included assessments: I did “voice checks” twice a year, I gave some written tests regarding music theory and composers, and my recorder unit in 4th grade had strong assessment components. I also used my Orff background to help children improvise and compose, which allowed me to see individual response and informally assess musicality. Although I was trying to gather information about student achievement, I did not know much about the individual musical abilities of my students or how well they were learning what I was teaching. I did know quite a bit about many of my students—especially after 4 years in a town with a stable population.
However, most of what I knew about students was behavioral information, such as which students were typically easy (or difficult) to direct. I knew the children who were very strong or very weak rhythmically, or very strong or weak singers. However, there were a number of quiet, reserved children about whom I knew nothing, musically or otherwise. I also did not know whether my low-performing students were struggling with music concepts, not trying in music for personal reasons, or bored students with high music aptitude who needed more engaging challenges.
Most important, I did not understand how to develop assessments that could inform my instruction. The “voice checks” that I did twice a year were the only time that every child I taught had an opportunity to give an individual musical response. Even when I had this opportunity to hear them sing individually, I simply marked U (Unsatisfactory), S (Satisfactory, could also have a – or a +), or O (outstanding). I did not have operational definitions or rubrics to define what those marks meant for any grade level, and they did not inform my teaching because they did not give me any information about the singing voice development of the student. All my records told me was whether or not a child could sing “Happy Birthday” on a given day.