Title: 学期のあいだ、学生のライティングが上達したか、教員は見分けることができるか - ケース・スタディ
Author(s): Bradley, Michael
Citation: 沖縄キリスト教短期大学紀要 = JOURNAL of Okinawa Christian Junior College (48): 25-36
Issue Date: 2019-01-31
URL: http://hdl.handle.net/20.500.12001/24666
Rights: 沖縄キリスト教短期大学
沖縄キリスト教短期大学紀要第 48 号 (2019)
学期のあいだ、学生のライティングが上達したか、教員は見分ける
ことができるか -ケース・スタディ
Can Teachers Discern Improvement In Student Writing Over the Course of a Semester? - A Case Study
Michael Bradley
ABSTRACT
Eleven teachers were asked to assess the writing of ten students. They were shown two pieces of writing by each student
and had to decide which piece was better. Unbeknown to the teachers, the two compositions were students’ first and
last assignments in a Current Issues class. I hypothesized that students would have improved their writing skills over
the course of the semester and so teachers would likely choose the final assignment as the better piece of writing. In
fact, they did so only around half of the time. However, those teachers who used grammar and word choice as their
primary assessment criteria (rather than macro elements like overall structure and argument) did usually select the final
assignment as the better piece of writing.
INTRODUCTION
A few years ago this journal published my article about the efficacy of written corrective feedback
(Bradley, 2016). Despite the fact that “feedback is widely seen in education as crucial for both
encouraging and consolidating learning” (Hyland and Hyland, 2006, p. 1), not all academics agree. One
in particular, Truscott, ruffled feathers by stating that written corrective feedback (WCF) was in fact
damaging to learners’ development (1996, 1999).
My 2016 study compared two groups of students who took a writing class. The first group received
minimal feedback on their grammatical and lexical mistakes. In contrast, the second group received
significant feedback. I postulated that over the course of the semester, the writing of the second group
would show greater improvement than that of the initial cohort. The results however proved otherwise-it was the initial group who improved most when the texts were analysed for certain grammatical errors.
Furthermore, it was not even clear if the second group had improved at all in terms of grammatical
accuracy, during the semester. For the current paper I revisited the assignments written by this second
group. I wondered if by focussing on mistakes with certain grammatical items, I had perhaps been using
the wrong criteria to judge whether their writing had improved. Maybe I should have been using a more
holistic approach.
THE RESEARCH
I asked eleven native English teachers (both part time and full time) at Okinawa Christian University
to assess two pieces of writing by ten students from my 2015 Current Issues Online class. (There were
actually 23 students in the class but I felt it would have been too onerous to ask my volunteers to read all
the essays.) I did not supply any criteria but simply asked the teachers to judge the pieces “holistically,”
or “impressionistically,” or “according to their readability.” The two pieces of writing in each case were
the initial and final assignments written by students in the course. For the initial assignment students had
to watch two videos about demographic change in Japan - the aging population and the stagnating birth rate - and answer three questions. Likewise, in the final assignment they had to answer three questions
after watching a video, this time about racism.
I did not tell the teachers why I was asking them to decide which of the two pieces of writing was
better, other than by saying it was part of my research. In case teachers might deduce my objectives,
I deliberately mislabelled the essays - the final assignments were labelled Number 1, and the initial assignments Number 2.
In the interests of anonymity, I have replaced each teacher’s name with a colour. Please note that
although two of the teachers were women, when discussing their comments I have used male pronouns
in every case, again in an effort to protect the identity of the respondents.
The results of the experiment are shown in Figure 1 below. Each student is identified by a letter from
A to J; each teacher is identified by a colour.
Figure 1. Teachers’ Choice of Students’ Best Essay: The Initial or the Final

| Student | White | Orange | Yellow | Red | Blue | Green | Brown | Purple | Pink | Black | Grey |
|---------|-------|--------|--------|-----|------|-------|-------|--------|------|-------|------|
| A | Final | Final | Final | Initial | Initial | Final | Initial | Final | Final | Final | Initial |
| B | Final | Initial | Final | Initial | Initial | Final | Final | Final | Final | Final | (missing) |
| C | Initial | Final | Initial | Initial | Final | Final | Initial | Final | Initial | Initial | Initial |
| D | Final | Final | Final | Final | Final | Final | Final | Final | Final | Final | Initial |
| E | Initial | Final | Final | Initial | Initial | Initial | Final | Final | Final | Final | Initial |
| F | Final | Final | Final | Final | Initial | Initial | Final | Final | Final | Final | Final |
| G | Initial | Initial | Final | Initial | Initial | Initial | Initial | Initial | Initial | Final | Initial |
| H | Final | Final | Final | Final | Final | Final | Final | Final | Final | Final | Final |
| I | Initial | Final | Initial | Final | Initial | Initial | Initial | Initial | Final | Final | Initial |
| J | Initial | Initial | Initial | Initial | Initial | Initial | Initial | Final | Initial | Initial | Initial |

(One teacher’s choice for Student B was not recorded, hence 109 judgements in total; the position of the missing response in this row is uncertain.)

If, collectively, the teachers generally thought that students’ final assignments were better than their
initial ones, then we might have expected the above table to be dominated by the word “Final”. It is not. In
fact, teachers thought the final assignment was better in just over half of the 109 cases (61, to be exact).
Fewer than half of the teachers (five) thought the final assignments were better in most cases. Indeed, four
thought the initial assignments were mostly better. Looking at the table, there was clearly no consensus
among the teachers. There was only one student, H, about whom they all agreed. (That student had
virtually no punctuation in the initial essay - no full stops and no capitalization at the beginning of sentences - but over the course of the semester she did learn to punctuate her writing.)

At first glance, then, it cannot be concluded that teachers thought that the final essays were generally
better than the initial ones. This is disappointing, as one would like to think that students’ work would
improve over the duration of any particular course.
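The tallies in the passage above can be reproduced directly from Figure 1. The short script below is a sketch: the choices are transcribed from the table (“F” for Final, “I” for Initial), and the one unrecorded choice for Student B is simply left out.

```python
# Tally the teachers' choices transcribed from Figure 1.
# One teacher's choice for Student B was not recorded, so that row has ten entries.
choices = {
    "A": ["F", "F", "F", "I", "I", "F", "I", "F", "F", "F", "I"],
    "B": ["F", "I", "F", "I", "I", "F", "F", "F", "F", "F"],
    "C": ["I", "F", "I", "I", "F", "F", "I", "F", "I", "I", "I"],
    "D": ["F", "F", "F", "F", "F", "F", "F", "F", "F", "F", "I"],
    "E": ["I", "F", "F", "I", "I", "I", "F", "F", "F", "F", "I"],
    "F": ["F", "F", "F", "F", "I", "I", "F", "F", "F", "F", "F"],
    "G": ["I", "I", "F", "I", "I", "I", "I", "I", "I", "F", "I"],
    "H": ["F"] * 11,
    "I": ["I", "F", "I", "F", "I", "I", "I", "I", "F", "F", "I"],
    "J": ["I", "I", "I", "I", "I", "I", "I", "F", "I", "I", "I"],
}
total = sum(len(v) for v in choices.values())           # judgements in all
finals = sum(v.count("F") for v in choices.values())    # times the final essay was picked
unanimous = [s for s, v in choices.items() if len(set(v)) == 1]
print(total, finals, unanimous)  # 109 61 ['H']
```

As the output shows, the final essay was picked in 61 of the 109 judgements, and Student H was the only student about whom all the teachers agreed.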
However, it may be instructive to look at the criteria that each teacher was using to assess the
assignments. During my interviews with the teachers, I invited them to elaborate on their choices. I have
included a summary of their comments in the following section.
TEACHERS’ COMMENTS
Teacher White talked a lot about the “structure” of the essays, and of their being “coherent” and “flowing
well”. At one point he explicitly said, “I’m giving more marks towards content and what they’re talking
about.” Later he remarked, “One thing I enjoyed was he (the writer) brought it into context at the end
talking about black people in America being shot by the police recently.”
In summary, it seems that Teacher White was most concerned with the macro aspects of the writing,
rather than with sentence level grammar or word choice.
Teacher Black did not mention grammar in his comments at all. Instead, he tended to look at how well
argued each answer was. He particularly liked when answers drew on the writer’s own experiences.
With regard to Student A, he said answer one “had more passion. I’d go for that.” Even though he thought
B.2 was well written with a clear problem/solution structure, he picked B.1 because the topic, “Okinawan
culture, is more interesting to me.” He added that B.1 also included some personal experience, which
he liked. Indeed, this was also Teacher Black’s reason for picking C.2 - the writer was drawing on his or her “personal experience” of seeing some elderly people gambling on Kadena airbase. Teacher Black
also liked the writer’s solution, “bringing the two groups together, the old and the young.”
He also thought E.1 was stronger “as they gave really specific examples of what could be done,” while in
E.2 it felt like the writer had no “direct experience” of the topic. In summary, it seems that Teacher Black
was most concerned with the macro aspects of the writing, rather than with sentence level grammar or
word choice.
Teacher Grey didn’t mention grammar or word choice when discussing his choices. He talked mostly
about the overall structure and content of each piece of writing. Thus A.2 was chosen because it had “a
smoother line of reasoning.” Meanwhile, B.1 and F.1 were preferred because they had “more personal
reflection.” Whilst both C.1 and C.2 “have personal reflection”, he opted for the latter because it went “into
more detail.” This was also the reason he chose E.2. His reasons for choosing D.2 were less analytic,
“it amused me with the point that the Japanese should increase the birth-rate to stimulate the economy.
In D.1 nothing jumped out at me in quite the same way.” Teacher Grey praised the structure of G.2, “it
has a bit of an introduction, defining the problem and then going on to talk about the solution….it has
a nice logical feel.” Similarly, I.2 was chosen because it was “organised into coherent paragraphs.” In
summary, it seems that Teacher Grey was most concerned with the macro aspects of the writing, rather
than with sentence level grammar or word choice.
There was nothing in Teacher Yellow’s comments to indicate that he judged the assignments primarily
according to their grammaticality. He commented a number of times on the length of the answers. He
seemed to prefer longer answers as they contained more information. In one instance he explained,
“In number 2 they have written a lot more….they have lots of examples to support their opinions.”
Similarly, in another place he says, “Number 2 is much shorter. The writer has expressed themselves
slightly more clearly (than in 1) ….but unfortunately they haven’t given further information.” Teacher
Yellow thought it important that students should give examples, especially those drawn from personal
experience, to support their arguments.
At another point, he talked about the quality of “the vocabulary and expressions,” although for him
grammar and word choice were not the most important criteria: “G.2 has better English but I think
G.1 is the better piece of writing.” He elaborated that “G.1 has expressed some of their own ideas
interestingly.” In summary, it seems that Teacher Yellow was most concerned with the macro aspects of
the writing, rather than with sentence level grammar or word choice.
Teacher Red rated a student’s writing highly when it referred to his or her own experiences. He said
the inclusion of personal stories made it “easier for the reader to connect with.” At one stage he said,
“I choose the second piece. It’s more interesting…the English is maybe more flawed but I get an idea
of what the person was thinking, what they were trying to say.” Teacher Red did not completely ignore
the students’ use of grammar - he chose H.1 because “there are fewer mistakes and the person knows
how to punctuate better” - but generally, grammar did not appear to be the most important criterion for
him. This is exemplified in his comments about student B, “I choose 1 - it’s a short piece of writing but the English is better. But the real reason is that it’s personalised….which always makes it interesting to
read.” Likewise, with C.2, “despite the mistakes … the ideas come through. Whereas in the first one the
mistakes stop your understanding.” In another case Teacher Red was having difficulty choosing between
I.1 and I.2 but opted for 1, “but maybe that’s just because I find racism a more interesting topic than
Japan’s aging population.” One assignment that was poorly organised clearly irritated Teacher Red. He
said it did not “answer the question - didn’t do what it set out to do.” In summary, it seems that Teacher Red was more concerned with content rather than form in the assignments he read. However, in a few
cases the form, the grammar, was so poor that he couldn’t discern what the content was supposed to be.
In such cases, he did judge the writing according to its grammatical accuracy.
Teacher Blue’s criteria were “a good interplay of detail, a particular statement that supports a more
general view of things and also the personal experience…of lesser consideration is the grammatical
element. Understanding would be the next criterion. So, three criteria.” He chose A.2 because it gave
“more attention to the root causes, to cause and effect.” Similarly, he thought B.2 was better because
“even though it speaks in generalities, it gives a more complex picture.” However, Teacher Blue was
not insensible to poor grammar. Of student C’s assignments he said, “both (1 and 2) have grammatical
problems and some word choice problems.” Ultimately he opted for C.1 because the “writer understands
the concept of racism.” He felt C.2 “was more fluid and there were more words used and different
words however, I don’t think the writer had an appreciation of the topic itself, which was the ageing
population.” With regards student D, the number of grammatical errors did seem to be the deciding
factor. “D.1 is written better, there are fewer grammatical errors and so it’s comprehensible. The
second one is not comprehensible.” When it came to student E, he preferred 1 because of its “personal
approach.” The student, it seemed, had brought the situation in Okinawa into his discussion about
racism. Teacher Blue admired F.2, even though it was shorter than F.1, because it was, “logical…it gives
several solutions to the problem of decreasing birth-rate.” Similarly, he chose H.1 because “it’s more
developed. The ideas are mentioned and then the writer goes on to give specific examples.” The second
essay he said lacked “that interplay of the general and the particular that I enjoy.” In summary, it seems
that Teacher Blue was more concerned with structure and content rather than form in the assignments
he read. However, in a few cases the form, the grammar, was so poor that he couldn’t discern what the
content was supposed to be. In such cases, he did judge the assignment according to its grammatical
accuracy.
Teacher Green’s answers were relatively short, which made it a little difficult to determine what his most
important criteria were, but in any case grammar did not appear to be among them. The inclusion of supporting facts
was something he valued. In the case of Student A, he chose 1 because “it’s not so opinionated…..there
is a little bit more information rather than opinion.” Likewise, he chose B.2 because B.1 was “kind of
anecdotal” whereas in B.2 “they say the statement and then they back it up with information.” Likewise,
D.1 “has more facts.” Teacher Green liked that both G.2 and E.2 posed “a question and answer it.” As
did H.1, which also connected the argument “to real life.” The only time he seemed to use different
criteria was in the case of Student C where he preferred 1 because “it’s easier to follow in terms of
grammar.” In summary, it seems that Teacher Green was more concerned with structure and content
rather than form in the assignments he read.
Teacher Brown says that he is normally a “stickler for grammar but grammar is not the end all.” He
thought C.1 was “clearer…it’s the better piece of writing” but ultimately he chose C.2 because it was
“more touching….I was quite moved.” Similarly, he liked A.2 because the “writer expands on what
they’re thinking.” When it came to student E, he felt 1 was “more explicit” and the writer knew more
about the topic than in 2. He praised G.2 and H.1 for both finding “a solution to the problem.” However,
in other cases Teacher Brown was more focused on the grammar. He disliked B.2 because “there were
a lot of grammar mistakes.” In summary, it seems that Teacher Brown was more concerned with overall
structure and content rather than form in the assignments he read.
Teacher Pink apparently used more than just one set of criteria when judging the essays, “A.1 is better,
the grammar seems a bit more complete, it flows more - the argument seems to flow better.” Similarly, with regard to student B, he felt both assignments had “good content…and make good points,” but in the
end he opted for 1 “because from a language point of view it feels much tighter….there are a few more
mistakes in the second one.” With regard to student C, he felt neither was a “great piece of writing” and
it was the reasoning that swayed him, “the second is much better in terms of content and argument.”
Likewise, E.1 won out over E.2 because it had “more concrete examples.” He thought F.1 was better
on many fronts, “more complete, more thorough, better English, better expression.” As was H.1, which
he described as being “a bit more logical, a bit more coherent” than H.2, and in addition “the English
is better.” In summary, it seems that Teacher Pink took into account overall structure, content, sentence
level grammar, and word choice when judging the assignments.
Teacher Orange said at the outset that he was mostly influenced by, “grammar or spelling errors.” In
the case of Student C he said, “1 is slightly better than 2. I’m just focusing on some minor grammatical
problems. You know the lack of capitalization starting some of the sentences.” He said that in general he
found “minor errors, grammatical errors, spelling errors… distracting.” However, Teacher Orange wasn’t
entirely insensible to content and praised F.1 because the writer had “thought more deeply about the
issue” compared to F.2. Similarly he commented that Student E had “clearly thought about the questions,
sincerely, thoughtfully and I’m attracted to that in both.” In summary, it seems that Teacher Orange was
more concerned with sentence level grammar than with overall structure and content, when it came to
judging the assignments.
Teacher Purple also said he attached a lot of importance to good grammar. He chose A.1 as it had “more
complex sentence structures….the verb usage was more correct, article usage was more correct.” He
gave the same reasons for choosing C.1, E.1, F.1, G.2, I.1. He apparently experienced a dilemma when
it came to student B as essay 1 had “better English” but 2 was longer and had a better structure, “they
have an introduction, supporting information, and a conclusion.” In the end, he went with B.1 because it
contained fewer grammar mistakes. In another instance, he praised H.1 for its use of subordinate clauses.
In summary, it seems that Teacher Purple was more concerned with sentence level grammar than with
overall structure and content, when it came to judging the assignments.
INTERPRETATION
We can see from the above comments that teachers had various criteria for assessing the written
assignments. Broadly speaking, we can divide them into groups. One group were most interested in
the overall structure (e.g. how students set out the problem and offered solutions), the quality of the
argument, and how writers related the issues to their own experiences. The second group were more
focused on grammar and lexical mistakes. Only two teachers, Teacher Orange and Teacher Purple,
clearly fall into the latter category, but we could also include Teacher Pink, as he perhaps considered
language to be at least as important as structure and argument.
If we look at the scores of these three teachers - Teacher Orange, Teacher Purple and Teacher Pink - a clear
pattern emerges, which can be seen in Figure 2. For one thing, they are in complete agreement about
seven of the students. Of those seven, they think that in six cases, the final piece of writing was better.
With regard to the three students about whom there was no consensus, in two cases, two of the three
teachers opted for the final piece of writing.
Figure 2. Three Teachers’ Choice of Students’ Best Essay

| Student | Teacher Purple | Teacher Orange | Teacher Pink |
|---------|----------------|----------------|--------------|
| A | Final | Final | Final |
| B | Final | Final | Final |
| C | Final | Final | Initial |
| D | Final | Final | Final |
| E | Final | Final | Final |
| F | Final | Final | Final |
| G | Initial | Initial | Initial |
| H | Final | Final | Final |
| I | Initial | Final | Final |
| J | Final | Initial | Initial |
Collectively, in eight of ten cases, the three teachers considered the final piece of writing to be superior,
largely because the grammar and word choice were better. On the face of it, this is exactly the result one
would expect, considering the kind of feedback I gave students during the course.
FEEDBACK TO STUDENTS
It might be apposite to explain here a little more about the support I gave students in 2015. It took
the form of what Ferris terms “indirect feedback” (Hyland and Hyland, 2006, p. 83), i.e. a series of
abbreviations that could be understood by referring to a key, which was given to all students. When
I read the first draft of a student assignment, if, for example, I came across an error involving article
use, I would write the abbreviation ART after the mistake. (Although I hadn’t known it when I devised
my system, Ferris et al. used a very similar key in one of their studies (2013).) When I finished the
corrections, the annotated first draft would be returned to the student, who would then consult their key
to try to understand their mistakes. As McDonough and Shaw say, the ultimate aim of such an approach
is to encourage students to take responsibility for “self-monitoring and self-correction” (2003, p. 167). The students had to rewrite their assignments, correcting (or attempting to correct) the highlighted
mistakes. They were incentivised to do so because only the corrected versions were graded.
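As a concrete illustration, such a correction key can be thought of as a simple lookup table. This is a sketch, not the actual key given to students: the abbreviations are the ones mentioned in this paper, the glosses are paraphrases, and the annotated sentence is invented for the example.

```python
import re

# A sketch of the indirect-feedback key as a lookup table. The abbreviations are
# those mentioned in the paper; the full key given to students is not reproduced here.
KEY = {
    "ART": "article use",
    "RP":  "rewrite the phrase",
    "WW":  "wrong word",
    "RW":  "remove a word",
    "SP":  "spelling",
    "WWO": "wrong word order",
    "PL":  "plurals",
    "P":   "passive",
}

def count_codes(annotated_draft):
    """Count how often each correction code appears in an annotated first draft."""
    counts = {code: 0 for code in KEY}
    for token in re.findall(r"[A-Z]+", annotated_draft):
        if token in counts:
            counts[token] += 1
    return counts

# An invented annotated sentence, with a code written after each mistake.
draft = "Japan has ART aging society RP and peoples WW leave SP the countryside."
print({code: n for code, n in count_codes(draft).items() if n})
# {'ART': 1, 'RP': 1, 'WW': 1, 'SP': 1}
```

Counting the codes in each draft in this way is also what makes it possible to tabulate errors per assignment, as in Figure 3.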
Of course, in many cases students were not successful in their attempts to correct mistakes. From my
point of view the important thing was that they tried. Even if they failed to correct a given mistake, the
hope was that by thinking about that particular word or piece of grammar, they would be more receptive
to taking on board the correct rule at some point in the future. As Lightbown and Spada point out, “If the
error is based on a developmental pattern, the correction may only be useful when the learner is ready for
it” (1999, p. 167). It should be noted that the pieces of writing shown to the teachers for this experiment
were the uncorrected versions; that is to say, they were the first versions submitted by the students.
It is gratifying that three teachers discerned an improvement in the students’ writing, in terms of
grammar and word choice, over the course of the semester, given that so much of my feedback was
directed towards these aspects. Indeed, this finding accords with the results of a 2006 study in which 55
pairs of student papers (essay 1 and essay 4) were compared. It was found that students made “statistically
significant reductions in their total number of errors…over the course of the semester” (Ferris, 2006,
p. 90).
MY 2015 RESEARCH
However, such a conclusion was at odds with my own previous research. In my 2016 paper, I wrote that
there was little discernible improvement in the students’ grammatical accuracy over the semester, as
can be seen from Figure 3. Indeed, it was this conclusion that led me to undertake the current research:
if students had not improved their grammar, I argued, perhaps they had improved in some other, less tangible way which teachers would intuitively be able to spot. However, as we have seen, none of the
teachers who judged the student writing according to non-grammatical criteria discerned any strong pattern of improvement across the semester.
Figure 3: CURRENT ISSUES ONLINE 2015: How Students’ Accuracy Changed From Week 3 To Week 13, Across A Range Of Items. (Bar chart showing, for each correction item, the number of students whose performance improved, worsened, or didn’t change.)
Perhaps this unexpected result arises from a flaw in the methodology employed in my 2015 research.
Arguably, all of the WCF I provided was useful for students who genuinely wanted to improve their
writing. However, in terms of measuring improvement across the semester, some of the feedback simply
muddied the waters. For example, as can be seen in Figure 3, I sometimes asked students to rewrite
a sentence or phrase. In some cases I did this because I simply couldn’t understand what the student
was trying to say. (In other cases, it was because the mistake did not fit into any of the other correction
categories.) In any case, 14 students received more of this feedback in their final assignment than in
their first, while only four students had less. So at first glance, it looks like the writing of those 14
students deteriorated over the semester. Of course, this is not necessarily true; the instruction to Rewrite
the Phrase (RP) was used to cover a wide range of errors. Hypothetically, a diligent student who did
not repeat any of the mistakes from previous assignments may still have found the feedback for their
final assignment covered with RP abbreviations because they had made different kinds of mistakes. In
other words, as a tool for measuring progress the abbreviation RP was completely useless (though it was
nevertheless a useful tool for helping students to write better). Similarly, there was no reason to expect
fewer Wrong Word (WW) abbreviations in the corrected final assignments, compared to the first. The
chances are that the mistakes were different in both assignments. This also holds true for Remove a Word
(RW), Spelling (SP) or Wrong Word Order (WWO) corrections.
If we removed these categories from Figure 3 and focused on a number of more discrete grammar items,
as in Figure 4, it would be seen that there was a discernible, if modest, improvement across the semester.
The most pronounced improvements were in regard to tense usage and words omitted from sentences.
Clearly in these two instances at least, students were becoming more careful in their writing. (The
glaring exceptions to this improving trend were with plurals (PL) and the passive (P). I was initially at
a loss to explain the former, until I went back and looked at the original data. I discovered that a single
student had eight PL mistakes highlighted in her final assignment (most students had around two),
compared to none in her first assignment. In such a small sample, this one outlier clearly had skewed the
results. With regard to the small increase in the number of mistakes with the passive, I can only comment
that it is not entirely surprising that students at this level (low-intermediate to intermediate) had not fully mastered such a complicated piece of grammar.)
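The reanalysis behind Figure 4 can be sketched as follows. Assuming the data take the form of per-student error counts for Week 3 and Week 13, the catch-all categories (RP, WW, RW, SP, WWO) are dropped and each remaining item is tallied. The numbers below are invented for illustration; the real counts are in the 2015 corpus.

```python
# Sketch of the Figure 4 reanalysis. The catch-all categories are excluded because
# a student can earn them for entirely different mistakes in each assignment.
EXCLUDED = {"RP", "WW", "RW", "SP", "WWO"}

def tally(pairs):
    """pairs: one (week3_errors, week13_errors) tuple per student."""
    improved = sum(w3 > w13 for w3, w13 in pairs)
    worsened = sum(w3 < w13 for w3, w13 in pairs)
    unchanged = len(pairs) - improved - worsened
    return improved, worsened, unchanged

# Invented example data: error counts for five students on two items.
data = {
    "T":  [(3, 1), (2, 0), (1, 1), (0, 2), (4, 2)],   # tense: mostly improved
    "RP": [(1, 3), (0, 2), (2, 2), (1, 1), (0, 4)],   # catch-all: dropped below
}
results = {item: tally(pairs) for item, pairs in data.items() if item not in EXCLUDED}
print(results)  # {'T': (3, 1, 1)}
```

With the catch-all categories removed, only the discrete grammar items contribute to the improved/worsened/unchanged tallies plotted in Figure 4.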
CONCLUSION
The research I wrote about in 2016 was prompted by Truscott’s assertions regarding WCF: “Substantial
research shows it to be ineffective and none shows it to be helpful in any interesting sense…it has
harmful effects” (1996). Like most teachers I knew, I had been giving students WCF as a matter of
course over many years. I naturally bridled at Truscott’s dismissal of the practice and set out to prove
him wrong. This turned out to be a little difficult as eight of the eleven teachers who took part in my
blinded research seemed to support his hypothesis: they had difficulty identifying which assignments
were written at the start and which at the end of the course. In other words, they detected little obvious
improvement between the first and final essays, even though the students had received copious amounts
of WCF throughout. However, when I examined these eight teachers’ comments more closely, I realised
that they were, by their own admission, not judging the assignments primarily according to grammar
or word choice - the elements which my WCF focussed on. Rather these teachers were interested in the more macro elements of writing - overall structure, how arguments were set up and answered, how students drew on their own experience. I am not implying that these professionals did anything wrong.
Indeed Hyland and Hyland’s research shows that teachers in a university pre-sessional E.F.L. course paid
most attention to students’ ideas, “with slightly less focus on language issues” (2006, p. 216).

Figure 4. CURRENT ISSUES ONLINE 2015: How students’ accuracy changed from Week 3 to Week 13 across a limited range of grammatical items. (Bar chart showing, for each item, the number of students whose performance improved, worsened, or didn’t change.)

However, as I,
rightly or wrongly, had not given the students any guidance on the macro aspects of writing, there is no
reason to suppose that they should have improved in this regard over the course of the semester. (Unless,
that is, one supports Truscott’s view that students improve naturally, without any kind of feedback.)
On the other hand, I hoped that the students’ grammar and word choice would show signs of
improvement because of the WCF they had received. And apparently it did, at least according to the
three teachers for whom such things were of primary importance. Their findings prompted me to revisit
the data I had collected for my 2016 research. At first glance it didn’t reveal any pattern of improvement
but when certain variables were discounted, it could be seen that certain aspects of the students’ grammar
had improved, in particular their use of tense and their choice of the correct part of speech. Generally, they
seemed to be taking more care with their writing by the end of the course.
Why had I focused almost exclusively on grammar (at the expense of more macro concerns) in my
WCF? Well, for one thing, it seemed that many students expect teachers to correct their grammatical mistakes (Leki, 2001, p. 20; Hyland and Hyland, 2006, p. 218). For another thing, I intuitively felt that
it would serve lower intermediate students better if they first concentrated on sentence level problems
before moving on to issues with paragraphs and overall structure. I couldn’t find any explicit research
to support my hunch but perhaps Hyland felt similarly when he wrote that in terms of WCF, “the part
played by prior experience, proficiency level (my italics), or various aspects of affect, for example, are
clearly worthy of further study” (2002, p. 198).
Of course, my research probably would not stand up to rigorous scientific analysis. For one thing,
the number of participants was too small to generalize the results. For another thing, the key was not
designed to measure progress across a semester; it was designed to help the students improve their
writing. When I began this research, if I had been thinking purely in terms of proving a hypothesis, the
experiment could have been much better designed. I could, for example, have focussed on a very limited
number of easily identifiable grammar mistakes, which would have been simple to tabulate and hence
compare in the first and final assignments. However, such an approach would have given precedence to
my research ahead of the students’ needs, something which most teachers I’m sure would be reluctant to
do.
References
Bradley, M. (2016) in Journal of Okinawa Christian Junior College, Vol. 44, pp. 101-119. Japan: Okinawa Christian Junior College.
Ferris, D. (2006) in Hyland, K. and Hyland, F., Feedback in Second Language Writing, pp. 81-104. Cambridge: CUP.
Ferris, D., Liu, H., Sinha, A., and Senna, M. (2013) in Journal of Second Language Writing, Vol. 22, pp. 307-329. Paris: Elsevier.
Hyland, K. (2002) Teaching and Researching Writing. London: Longman.
Hyland, K. and Hyland, F. (2006) in Hyland, K. and Hyland, F., Feedback in Second Language Writing, pp. 1-19. Cambridge: CUP.
Hyland, K. and Hyland, F. (2006) in Hyland, K. and Hyland, F., Feedback in Second Language Writing, pp. 206-224. Cambridge: CUP.
Leki, I. (2001) in Silva, T. and Matsuda, P.K., On Second Language Writing. New Jersey: Lawrence Erlbaum Associates.
Lightbown, P.M. and Spada, N. (1999) How Languages Are Learned. Oxford: OUP.
McDonough, J. and Shaw, C. (2003) Materials and Methods in ELT. Oxford: Blackwell.
Truscott, J. (1996) in Language Learning, Vol. 46:2, pp. 327-369. Paris: Elsevier.
Truscott, J. (1999) in Journal of Second Language Writing, Vol. 8:2, pp. 111-122. Paris: Elsevier.