Electronic Dissertation. Author: Senta Goertler. Publisher: The University of Arizona. Copyright © is held by the author.
The fourth group of items dealt with attitudes towards technology. During piloting, it was found that students were easily able to indicate their attitude towards technology, so only two items were included. Furthermore, because the focus of this study is corrective feedback, attitudes towards technology played only a minor role. Items 13 and 17 questioned the students’ attitude toward the use of technology in the language classroom, with item 17 formulated negatively. During the piloting of the survey, the negatively formulated items were not problematic for the survey takers, although some of the participants in the study seemed to contradict themselves between items 13 and 17. Again, the students’ attitudes towards technology may interfere with their ability to benefit from technology-enhanced teaching.
Furthermore, the different implementations and levels of support may have resulted in changes of attitudes.
The fifth group of questions dealt with students’ attitudes towards corrective feedback. I attempted to provide a variety of items that forced students to choose between implicit and explicit forms of teacher feedback (items 11, 12, and 14). Students were also asked about their attitudes towards corrective feedback from peers (items 7 and 18). In total, there were eleven items asking students to express their attitude towards corrective feedback (7, 8, 9, 10, 11, 12, 14, 15, 16, 18, and 19). The students’ attitude towards corrective feedback may influence their willingness to receive a certain type of feedback, or their comfort level in the class.
The sixth group of questions asked students to describe their experience in chatting only during the semester in which this dissertation study was conducted. Items 22 and 26 asked them to share their attitudes about chatting: item 22 referred to chatting as fun, and item 26 to chatting as beneficial for language learning. Items 21 and 25 asked students to report their own behavior during the chatting experience, with item 21 asking about talking in the physical environment, and item 25 about the noticing of errors by self or peers. In item 20, students were asked to express their attitude towards the teacher’s active participation in the chat room. Since it was observed that both teachers were participating with the students, it was appropriate to pose this statement as an attitude statement rather than a frequency statement. Items 36 through 38 asked students to describe their teacher’s role during chatting and whether they liked that behavior. This section was necessary in order to match students’ perceptions with actual practice.
The seventh group of questions dealt with the students’ experience with corrective feedback during the semester the study was conducted. Items 23 and 32 asked students to express their attitude towards corrective feedback from the teacher during chatting. In item 24, students were asked if they corrected other students during chatting. Item 25, also considered with group six above, asked students if they noticed their own mistakes during chatting. Items 30 and 31 asked students to report the frequency of corrective feedback given by the teacher in the classroom and during chat. Finally, items 36 through 38 were intended to provide any additional information the student wished to share. This section was intended to provide information to compare actual practices with perceived practices.
Transcripts

Transcripts (see sample in Appendix 2) of the chat sessions were collected in order to (a) code errors, (b) code teacher turns, (c) code feedback moves, (d) code instances of uptake, (e) code instances of error uptake, and (f) establish a word count. The information was then used to determine the effectiveness of error treatment, to provide quantitative data to substantiate or allay the fear of error uptake, to establish patterns between feedback type and uptake as well as between error type and feedback type, and to learn more about the teacher’s role during chat. The chat software automatically generates chat transcripts, showing the name of the channel and the date of the interaction. Each posting is preceded by the screenname of the student and the time of posting (hour, minute, and second). Participants logging in or out of the channel are noted by “XYZ entered” or “XYZ exited.” Chat transcripts were retrieved from the instructional lab server after each chat session. The chat transcripts, however, do not record keystrokes that did not result in a posting. In this sense, a chat transcript shows the same information that the participants saw at the time of chatting.
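The posting format described above (a screenname plus a time stamp, with “XYZ entered/exited” lines marking log-ins and log-outs) lends itself to mechanical parsing. The Python sketch below illustrates one way of doing so; the exact line layout (`HH:MM:SS name: message`) is a hypothetical assumption, since the actual chat software may punctuate these fields differently, and in the study itself the transcripts were processed by hand in Word.

```python
import re

# Hypothetical transcript line formats; the real chat software's
# output may punctuate these fields differently.
POSTING = re.compile(r"^(\d{2}):(\d{2}):(\d{2}) (\S+): (.*)$")
PRESENCE = re.compile(r"^(\S+) (entered|exited)$")

def parse_line(line):
    """Classify one transcript line as a posting, a log-in/out event, or other."""
    line = line.strip()
    m = POSTING.match(line)
    if m:
        h, mnt, s, name, text = m.groups()
        return {
            "type": "posting",
            "time": int(h) * 3600 + int(mnt) * 60 + int(s),  # seconds since midnight
            "screenname": name,
            "text": text,
        }
    m = PRESENCE.match(line)
    if m:
        return {"type": m.group(2), "screenname": m.group(1)}
    return {"type": "other", "raw": line}

# Invented sample lines for illustration only.
sample = [
    "17:02:05 anna3: Hallo, wie geht es dir?",
    "anna3 exited",
]
parsed = [parse_line(l) for l in sample]
```

Parsing into dictionaries like these would make the later coding steps (counting turns, measuring time on task) straightforward to automate.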
The chat transcripts were used for several purposes: (1) to identify and categorize student errors; (2) to identify and categorize corrective feedback by self, peers, and the teacher; (3) to identify moments of correction uptake; (4) to identify moments of error uptake; (5) to establish the amount of student language production; (6) to compare the students’ perceived role of the teacher and the one exhibited in the transcripts; (7) to identify successful corrective feedback moves and their nature; (8) to correlate corrective feedback received with improvement on the pre-and post-tests to see if more teacher feedback results in more improvement; and (9) to identify the purpose of teacher turns.
In order to code the data, the transcripts were first transferred into Word documents. The files (see sample of processed transcripts in Appendix 12) were then labeled with the screenname of the subject, the course name, and the date of the chat session.
Each chat channel had two or more participants, so a transcript was created for each student, covering the time from when he/she logged on until he/she logged out. This means that the transcripts of two students chatting in the same channel may differ slightly, as one may show more turns at the beginning and/or the end. After transferring the transcripts into Word and labeling them with the name of the student, all turns by that student were highlighted in yellow and all turns by the teacher in green. Turns by peers remained on a white background. This coloring system made coding easier and provided a quick visual indicator of the relative amounts of teacher, student, and partner turns.
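The yellow/green/white color coding served, in effect, as a quick tally of turns per speaker role. The same tally can be sketched programmatically; the screennames and the dictionary layout below are hypothetical illustrations, not the study’s actual data.

```python
from collections import Counter

def tally_turns(postings, student, teacher):
    """Count turns by the focal student, the teacher, and everyone else (partners)."""
    roles = Counter()
    for p in postings:
        if p["screenname"] == student:
            roles["student"] += 1
        elif p["screenname"] == teacher:
            roles["teacher"] += 1
        else:
            roles["partner"] += 1
    return roles

# Invented postings for illustration.
postings = [
    {"screenname": "anna3"},
    {"screenname": "frau_m"},
    {"screenname": "ben7"},
    {"screenname": "anna3"},
]
counts = tally_turns(postings, student="anna3", teacher="frau_m")
# counts -> Counter({'student': 2, 'teacher': 1, 'partner': 1})
```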
After this initial processing, language use was coded (see sample of a coded transcript in Appendix 13). First, the teacher turns were coded for their purpose. The categories and labels for the teacher turns were taken from the students’ own words in the survey, where students were asked to describe their teacher’s role. The words they used were then turned into categories of teacher turns: corrector, conversation starter, model, expansion question, task-keeper, conversationalist, and distractor. A table with examples of these teacher turn functions is provided below in table 3.5.
A correction turn is a turn in which the teacher corrects the students either implicitly or explicitly. A conversation-starter turn is a turn in which the teacher, after observing that the students are not able to continue the conversation, gets the conversation started again through additional questions or comments. A modeling turn occurs when the students have difficulty understanding what they are supposed to do and the teacher provides a model. Expansion-question turns are turns in which the teacher asks additional questions on the topic to allow the students to produce more language. Task-keeping turns are turns in which the teacher reprimands the students for unacceptable behavior such as off-task conversations, rudeness, use of English, or inappropriate language.
Conversationalist turns are turns in which the teacher is simply an equal participant in the conversation. Distractor turns are turns in which the teacher posts messages that are only partially related, or not related at all, to the topic.
Next, errors were coded. They were labeled according to the person making the error (i.e., self, partner, or teacher) and according to the type of error. During the analysis of the background study data, it was found that traditional forms of error coding were not efficient for this research project; hence a new form of categorization was created, and errors were categorized from a teacher’s perspective (see Appendix 1 for the list of errors and examples). These error categories were adapted from several available error-coding guides for essay grading, and it was assumed that teachers would view the errors in these categories. In contrast to grading an essay, typing errors, including capitalization, were ignored for the purposes of this study, since it was unclear whether such errors were truly errors or simply evidence of poor typing skills. The error categories consisted of endings, plural forms, word choice, subject-verb agreement, verb forms, past participles, tense, word order, missing words, unidentifiable errors, pronouns, prepositions, and unnecessary words. (A coding sheet with explanations and examples is included in Appendix 1.) Errors were also marked to indicate whether they received feedback or resulted in error uptake by another person.
Next, corrective feedback moves were labeled according to who gave them and who received them: the teacher, the student (i.e., self), or a partner (partners were numbered if there was more than one). If a corrective feedback move resulted in uptake by the person corrected, or in correct usage by another student within the same transcript, it was marked accordingly. The corrective feedback moves identified were clarification request, modeling, rule explanation, marked correction, and repetition (see table 3.6 for examples).
Table 3.6 Corrective Feedback
Word count was established using the word-count tool in Microsoft Word and then manually subtracting English words, posting-time information, and log-in and log-out information. The duration of each chat session in minutes was recorded, and the number of words produced per minute by each student in each transcript was calculated. Averages for each student and each class were then computed.
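The word-count procedure amounts to a small calculation: total words minus excluded material (English words, posting-time information, log-in/out lines), divided by session length in minutes, then averaged across sessions. A minimal sketch under those assumptions, with the excluded words represented as a pre-counted figure (as they were tallied by hand in the study):

```python
def words_per_minute(total_words, excluded_words, duration_minutes):
    """Net target-language word production per minute of chatting."""
    if duration_minutes <= 0:
        raise ValueError("session duration must be positive")
    return (total_words - excluded_words) / duration_minutes

def average_wpm(sessions):
    """Average words-per-minute across a student's (or a class's) sessions."""
    rates = [words_per_minute(t, e, d) for (t, e, d) in sessions]
    return sum(rates) / len(rates)

# Invented example: two sessions of (total words, excluded words, minutes).
avg = average_wpm([(240, 40, 20), (150, 30, 15)])
# (200/20 + 120/15) / 2 = (10 + 8) / 2 = 9.0
```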
Analysis Sheets

After each chat session, the teachers were supposed to print the analysis sheet described earlier. This sheet provides information on the amount of language a student produced and its lexical density. However, most of the time the teachers did not print or turn in the analysis sheet. Hence, word count was assessed using the electronic version of the transcripts and the word-count feature of Microsoft Word, as mentioned above.
Self-Report Forms

As other researchers have suggested, transcripts alone do not provide enough information about what actually happens during a chat session. In an attempt to learn more about those factors in the implementation of the chat sessions and chat activities that are not reflected in the transcripts, self-report forms (see Appendix 9) were administered. After each chat session, students and instructors were asked to complete a short self-report form, available on the course management web site, about the resources they utilized and the difficulties they experienced. This web site was also where teachers and students accessed the chat activity descriptions. However, most of the time the teachers did not complete these forms, and neither did all students. First, respondents were asked to report their screenname and the date. Submission of the survey was set to be anonymous in the course management system so as not to reveal the subjects’ real names; however, in order to use the information provided in the self-report forms during the analysis of the chat transcripts, it was important to be able to match each self-report form with the chat date and the participant. In the next section, students were asked whether they used any resources, such as an online dictionary, internet translator, or paper dictionary, during chatting, and why. The resources they were questioned about were resources that I had observed students using in the past. Students were then asked about technological problems with the chat server and the necessary web sites, and about the details of such problems. In addition, they were asked if there were any distractions in the lab during chatting.
Examples of possible distractions experienced in earlier semesters vary: sometimes teachers play music in the lab, sometimes instructional technology staff walk through the lab to check or fix equipment, and sometimes other instructors come to the lab to prepare their next class. Students were then asked about their interaction with other students in the physical and chat environments, particularly how they liked working with their partners. It was assumed that students who did not like working with their partners would produce less language.
They were also asked about their teacher’s participation and corrective feedback during chatting. Lastly, they were asked about the task, specifically if they liked it and how much time they had to complete it.
The information from the self-report forms, where available, was used to help explain patterns in the chat transcripts that seemed unexplainable from the transcripts alone. For example, if a student who usually produced a lot of language suddenly altered his/her production in one transcript, it may have been due to the activity, the partner, technological problems, or a lack of time. In addition, self-report forms were used to triangulate data from the surveys and classroom observation notes.
The self-report forms were collected by the course management system and organized by question; within each question, they were organized by time of submission. The program also allows the survey results to be organized by date, which made data analysis easier.