A marked decline? The EEF’s review of the evidence on written marking

A review by David Didau of the EEF report – some interesting viewpoints but not a surprising conclusion…

Question: How important is it for teachers to provide written feedback on students’ work?

Answer: No one knows.

This is essentially the substance of the EEF’s long-awaited review on written marking.

The review begins with the following admission:

…the review found a striking disparity between the enormous amount of effort invested in marking books, and the very small number of robust studies that have been completed to date. While the evidence contains useful findings, it is simply not possible to provide definitive answers to all the questions teachers are rightly asking. [my emphasis]

But then they go and spoil it all by saying something stupid like:

Some findings do, however, emerge from the evidence that could aid school leaders and teachers aiming to create an effective, sustainable and time-efficient marking policy. These include that:

  • Careless mistakes should be marked differently to errors resulting from misunderstanding. The latter may be best addressed by providing hints or questions which lead pupils to underlying principles; the former by simply marking the mistake as incorrect, without giving the right answer
  • Awarding grades for every piece of work may reduce the impact of marking, particularly if pupils become preoccupied with grades at the expense of a consideration of teachers’ formative comments
  • The use of targets to make marking as specific and actionable as possible is likely to increase pupil progress
  • Pupils are unlikely to benefit from marking unless some time is set aside to enable pupils to consider and respond to marking
  • Some forms of marking, including acknowledgement marking, are unlikely to enhance pupil progress. A mantra might be that schools should mark less in terms of the number of pieces of work marked, but mark better.

The only one of these statements that can reasonably be concluded from the flimsy research base the review’s authors unearthed is the finding that awarding grades seems to undermine the effects of written feedback. All the rest is speculation at best and unexamined, biased assumption at worst.

Let’s consider each claim in turn.

1. “Careless mistakes should be marked differently to errors resulting from misunderstanding.”

This, in and of itself, is probably correct. I find the distinction between ‘errors’ (misconceptions) and ‘mistakes’ (typos & slip-ups) pleasing. Clearly, giving detailed written feedback on something students already know is a waste of time. The problem is how to distinguish between something a student doesn’t know and something a student doesn’t do. I’ve seen reams of work in which capital letters are missing but have encountered almost no students in mainstream secondary schools who do not conceptually understand the use and purpose of a capital letter. The fact they don’t use them isn’t down to ignorance, but habit. They have practised writing without capital letters and have, consequently, become superb at it: they do it effortlessly. The advice offered in the review is that teachers should “simply mark the mistake as incorrect, without giving the right answer.” There’s just no evidence for this. The only way to undo this habit is to make it more onerous for students to continue making the same mistake than not to. I’ve found it useful to refuse to mark work which students haven’t proofread: failure to spot mistakes which I know they know needs to result in some sort of consequence.

2. “Awarding grades for every piece of work may reduce the impact of marking” 

Lots of people are aware of Ruth Butler’s small-scale studies demonstrating the nugatory effects of grading, but these wouldn’t count for much on their own. Much more interesting is the research conducted in Sweden by Klapp et al.

During 12 years (1969 to 1982) Swedish municipalities decided themselves whether or not to grade their students and this natural setting makes it possible to investigate how grading affected students’ subsequent achievement. This natural setting caused some students in the 6th Grade in Sweden to obtain grades while others did not. This circumstance, in combination with the fact that a longitudinal cohort study included a large sample of students both with and without grades offers an opportunity to use a quasi-experimental longitudinal design in order to investigate how grades affect students’ later achievement.

The results are still somewhat equivocal, but it seems pretty clear that although grades might be useful (or even essential) for some purposes, they do seem to undermine many children’s academic performance.

My advice would be that if you really need to grade a piece of work, don’t then undermine your efforts by also writing feedback. Conversely, if you’ve spent time writing feedback, it’s probably not a good idea to also grade the piece of work.

3. “The use of targets to make marking as specific and actionable as possible is likely to increase pupil progress”

As the report says, “Very few studies appear to focus specifically on the impact of writing targets on work.” Unfortunately, instead of simply acknowledging this deficit and moving on, the review’s authors decide to extrapolate from research on other forms of feedback to draw their conclusions. In a review of the evidence on written marking this is odd to say the least. It’s certainly the case that findings from other areas of research suggest that further research is desirable, but how can we reasonably conclude anything more than that setting specific targets might be a good idea? Or it might not. This is just guesswork.

4. “Pupils are unlikely to benefit from marking unless some time is set aside to enable pupils to consider and respond to marking”

This is, I think, the most controversial of the review’s assertions. The only evidence which currently exists is student surveys of whether students like responding to feedback. Apparently they do, at least in Higher Education settings. Well, so what? Students like Calypso ice pops, watching The Next Step and Snapchatting each other inappropriate pictures. What students like is hardly a qualification for making education policy. And what HE students like tells us precious little about what school students need. Again, the conclusion drawn by the review ought to have been that it might be a good idea to encourage students to respond to feedback, but equally, it might not.

5. “Some forms of marking, including acknowledgement marking, are unlikely to enhance pupil progress”

Well, maybe. It might be the case that tick’n’flick has little impact on students’ progress, but there’s a possibility that it could provide much-needed motivation. Also, teachers receiving feedback from students may actually be more important than students receiving feedback from teachers. This marks a powerful change of perspective. John Hattie says in Visible Learning, “It was only when I discovered that feedback was most powerful when it is from the students to the teachers that I started to understand it better.” When we read students’ work we take feedback from them. We find out something about what they’re thinking. We shouldn’t be deceived into thinking that this is evidence of learning, but we should see it as useful information which gives us some indication about whether our teaching is having the effects we intend. Having taken feedback from our students, we are then in a better position to fine-tune our instruction, give whole-class feedback on common errors and misconceptions, and talk to individuals about their work at quiet points in a lesson.

The only really useful finding the report has to offer is that “The quality of existing evidence focused specifically on written marking is low.” Without proper research we’re operating in the dark, relying on guesswork and intuition. It could be that all the review’s recommendations are spot on. It could be the equivalent of encouraging teachers to use bloodletting to balance humours in their patients. We just don’t know enough to make reliable recommendations or draw meaningful conclusions. The authors are right to point this out as both surprising and concerning, and the call for further study is welcome. The pages of speculation and guesswork are not.

Richard Farrow sums the situation up here:

This report is a living, breathing, example of why you should NEVER only read the executive summary. But that aside, the report has no evidence about anything useful (to do with written marking) and should never have been published. In fact, it could have been one paragraph saying the following: “we can’t find anything to look at so we are saying to the research community that they MUST research this. In the meantime we will not be publishing a report on this, just in case school leaders take it out of context.”

IF this report is quoted at you in your workplace to make you do something you feel is daft, say the following: “there is no evidence to back up any conclusions you draw from the report. We are still waiting for decent studies on this area of research.”

The post A marked decline? The EEF’s review of the evidence on written marking appeared first on David Didau: The Learning Spy.