Author: Senta Goertler. Publisher: The University of Arizona. Copyright © is held by the author.
3.8.1 Implementation of Chatting As discussed, chatting was intended to be used for 20 minutes each week during the lab session in the collaborative lab. Most activities were designed for pairs or small groups, the ideal chat room size suggested by Boehlke (2003). However, only the two teachers had been told of this requirement, and the lab assistant in the evening class sometimes made larger groups than had been intended.
The chat activities were supposed to be chosen according to the topic currently being discussed in class. MorningTeacher selected appropriate topics; EveningTeacher, however, did not always do so. Moreover, even though EveningTeacher introduced the topics well, she did not always introduce them as they were intended. This resulted in student confusion, as some students had actually understood the intended meaning and were then confused by the teacher’s explanation. As a result, students were often not required to use the structure that was supposed to be practiced.
MorningTeacher, on the other hand, did not introduce the activities, which also often resulted in student confusion. On these occasions, students asked each other what to do, and the teacher walked over to them when she saw them talking, then explained the activity to the students individually in the physical space. MorningTeacher appeared to have established the guideline that while chatting was in progress there was to be as little oral communication in the physical space as possible. EveningTeacher, by contrast, showed by example that talking and yelling across pods while chatting was acceptable behavior in her class, and even added music to the sessions.
Both teachers participated with the students in the chat. MorningTeacher chose to be invisible to the students most of the time, but used her real name when logged on.
EveningTeacher appeared to be visible all the time, though she logged in with her screenname. MorningTeacher appeared to watch the students in both the virtual and the physical environment, and responded to students’ issues in both. EveningTeacher walked around the classroom only at the beginning of the session, when she introduced the activity. For the rest of the time, she sat at the teacher’s desk, usually typing, and appeared to type more than MorningTeacher.
3.8.2 Data Collection Several types of data were collected to better analyze the student and teacher behaviors exhibited, teacher participation styles, and resources utilized in physical and virtual contexts. In the background study, it was determined that some teachers choose not to participate at all, some focus on having an engaging conversation with the students, others participate only to provide corrections, and still others use a mixture of these. It was hypothesized that how the teacher interacts with the students during chatting would have an effect on their attitudes, and potentially on their language development. As mentioned, in the Ene et al. (2005) study, one of the teachers perceived her role to be that of rule enforcer, and most of her turns were corrections, whereas the other teacher saw her role as that of conversation partner, and her turns were therefore often conversation starters or questions eliciting additional information. The first style appeared to have a silencing effect, discouraging students from participating. While this study discussed the role of the teacher, none of the studies addressed the teacher’s physical interaction with the students, or the students’ use of resources or of each other in the physical environment. Therefore, different data sets were collected for different purposes, and several of them were analyzed in relation to each other to provide more insight into both the teacher’s role in CMC activities and issues of activity design.
3.8.2.1 Pre- and Post-Test A pre- and post-test (see Appendices 4 and 8) was administered to calculate gain scores. It was hypothesized that the gain scores, in relation to the feedback forms, could reveal patterns of effective teacher interaction. Students took the pre-test, which was described to them as a placement exam, in the second week of the semester, and the post-test, or exit exam, during the last full week of the semester. Neither test was a graded component of the class. The tests were the same, except that the pre-test included a speaking section; in the post-test, the speaking section was eliminated in the interest of time. The written portion of the test could be completed in a 50-minute period, and focused on the structures taught during third-semester German classes, such as case endings, word order, reflexives, directions, and simple past tense. Other structures are also taught during this semester, but they are not the focus of instruction and thus were not included.
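As a sketch, the gain-score computation described above is simply each student's post-test score minus their pre-test score; the student identifiers and score values below are illustrative, not data from the study.

```python
# Illustrative gain-score calculation: gain = post-test - pre-test.
# Identifiers and scores are hypothetical examples, not study data.

def gain_score(pre: float, post: float) -> float:
    """Return the simple gain from pre-test to post-test."""
    return post - pre

# Hypothetical (pre, post) score pairs for two anonymized students.
scores = {"student_a": (42.0, 55.5), "student_b": (50.0, 49.0)}
gains = {name: gain_score(pre, post) for name, (pre, post) in scores.items()}
# A negative gain (as for student_b) indicates a lower post-test score.
```

Gains computed this way could then be compared against the feedback each student received, as the hypothesis above suggests.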
The test was piloted and revised in classes at a level similar to that of the main study. The activities were adapted from existing activities in the test bank accompanying the course textbook. I decided to focus on the structures covered in the relevant chapters in order to assess students’ progress over the period of one semester. Since the chat activities also focused on these structures, the goal of this portion of the research was to explore the relationship between feedback received and improvement on the test.
The pre-test consisted of five sections: a listening comprehension activity focusing on following directions, a reading comprehension section focusing on simple past tense, a grammar section including four activities (reflexive pronouns, accusative and dative case word order, word order in dependent and independent clauses, and article case endings with the gender provided), a writing section focusing on complex sentence structure and simple past tense, and a speaking component focusing on indirect questions and the subjunctive. According to their own reports, students found the speaking portion very intimidating; it also took a great amount of time to administer, and it was therefore eliminated from the post-test. All other parts remained the same (see Appendices 4 and 8).
The test was not a graded component of the course, and was scored by me for research purposes only. Interrater reliability was not established for the test; instead, a detailed scoring system was used to ensure objectivity. I scored the pre-tests and compiled notes on the language used and the errors made (see scorecard in Appendix). The listening comprehension exercise was worth 10 points: one point for arriving at the correct destination, 1.5 points each for drawing the lines exactly as described, and one point if the lines were correct in general. All mistakes were noted for later comparison and analysis.
The reading comprehension section contained 10 questions, each worth one point. Half a point was awarded for correct information, and half a point was awarded if the answer was presented as a complete sentence containing a correctly conjugated verb in the simple past tense. On the scorecard, the correct and incorrect verb forms were noted, as well as errors in reading comprehension, for later comparison and analysis.
As discussed, the grammar section included four parts worth 10 points each, for a total of 40 points. The reflexive pronoun part consisted of five reflexive pronoun phrases, allotting half a point each for the correct verb, the correct verb conjugation, the correct word order, and the correct reflexive pronoun. Again, errors and correct forms were noted on the scorecard for later comparison. The accusative and dative case word order part was made up of five sentences, each also worth a total of two points: one point for the correct ordering of the accusative and dative objects, and one point for the overall sentence word order. Sentence word order mistakes and their origin were noted on the scorecard. The part on word order in dependent and independent clauses likewise comprised five sentences worth two points each, one for the correct placement of the coordinating or subordinating connector, and one for the correct placement of the conjugated verb. Unrelated errors, such as unexpected word order mistakes, could also result in point deductions; the type of mistake was noted on the scorecard for later analysis and comparison. The part on article case endings had thirteen article endings to be correctly assigned. One of them erroneously did not include the gender, and sentences four and five tested the same principle, one in the plural and one in the masculine. Again, the type of any mistake made was noted on the scorecard for later comparison and analysis.
The writing section of the exam was worth 10 points: 5 points for content and organization of the essay, 5 for language structures (accuracy, complexity, vocabulary range and choice). The scores were used for later comparison and analysis.
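For reference, the point structure of the written test described above can be tallied in a short sketch. The section labels below are my own shorthand, not the study's terminology; the point values are taken directly from the descriptions in the preceding paragraphs.

```python
# Maximum points per written-test section, as described in the text:
# listening (10), reading (10), four grammar parts (10 each), writing (10).
# Labels are illustrative shorthand, not the study's own names.
MAX_POINTS = {
    "listening": 10,
    "reading": 10,
    "grammar_reflexives": 10,
    "grammar_acc_dat_order": 10,
    "grammar_clause_order": 10,
    "grammar_case_endings": 10,
    "writing": 10,
}

# Total available on the written portion of the test.
TOTAL_WRITTEN = sum(MAX_POINTS.values())
```

Under this tally the written portion carries 70 points in all, with the grammar section contributing 40 of them, consistent with the weighting described above.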
The speaking portion of the pre-test was consulted for informational purposes only, since not all students recorded an audible speech sample, and it was eliminated from the post-test for the reasons already mentioned.
3.8.2.2 Pre- and Post-Survey Pre- (Appendix 3) and post-surveys (Appendix 7) were administered (a) to measure changes in attitude, (b) to establish background experience with corrective feedback and technology, and (c) to learn about perceptions of the chat experience. The surveys were then analyzed to potentially identify an attitudinal change in relation to teacher style, as well as to compare perceived experience with actual events. The survey utilized a four-point Likert scale asking students to comment on their language learning, technology background, and attitudes about corrective feedback and technology.
The pre- and post-surveys varied slightly because, at the end of the semester, some questions became irrelevant while others became necessary substitutes (see Appendix 11 for the combined survey).
Each survey contained four sections: a general information section, an agree/disagree Likert-scale section, a frequency Likert-scale section, and a guided-questions section. Each Likert-scale item was followed by a line encouraging students to explain their choice, a feature found valuable in Ene et al. (2005). Each section contained both positively and negatively formulated statements. For analysis purposes, the survey was divided into seven parts: general information about the person, prior experience with technology, prior experience with corrective feedback, attitudes towards technology, attitudes towards corrective feedback, the experience with chatting during the semester in question, and the experience with corrective feedback in the semester in question. Some questions applied to more than one category and are thus discussed from those different perspectives. The pre- and post-surveys were not identical: for example, questions about the experience in the ongoing semester were not applicable at the beginning of the semester, and questions about prior experience were irrelevant at the end. For labeling purposes, the two surveys were combined into one numbering system (see Appendix 11 for items and corresponding numbering). Furthermore, items followed by an added explanation were labeled “a,” and the explanation itself was labeled “b.” The Likert-scale items were assigned a number value, with negatively formulated items assigned reverse numbering. In this way, an answer of strongly agree received a score of four for a positively formulated item, but a score of one for a negatively formulated item. The surveys were then analyzed using ANOVAs to determine significant differences between teachers and courses.
The general student information section asked participants for their screenname, class section number, age, gender, and native and foreign languages. It also asked students whether German was the first foreign language they had learned in a classroom, or even in general. Students were then asked to describe their German language proficiency in their own words. The items in this first section were labeled 1 through 6. This section was intended to collect general background information on the students. In cases where a student did not fit the general profile of the class, these factors were consulted for a possible explanation. For example, one hypothesis was that older students would be less likely to have a positive attitude towards chatting.
The second set of items concerned the participants’ prior experience with technology. In items 27, 28, and 29, students were asked about their experience using chatting for pleasure, business, and academic purposes, the latter pertaining specifically to the foreign language classroom. The open-ended question from the pre-survey, item 39, allowed students to provide more information about their technological background if they so desired. Prior experience with technology was thought to be one factor influencing whether students benefited from the use of CMC, so it was important to know their backgrounds.
The third section dealt with prior experience with corrective feedback. During the piloting phase of the survey, it was found that, when survey takers were provided with only a few statements on error correction, they tended to agree with all of them without differentiating between forms of feedback. Items 30 through 35 therefore asked students to report how frequently they had received corrections in different kinds of contexts from their previous teachers. Here again, item 39 served as a means for students to express more details on this topic if they so desired. Since corrective feedback was under investigation, students’ prior experience with it was a possible influencing factor; it was hypothesized that students may become accustomed to one form of feedback and come to prefer it.