Teaching for distinction

A really interesting blog post from Tom Sherrington about teaching for distinction:

Teaching for Distinction @OldhamCollege

The most exciting job I've had since starting out with Teacherhead Consulting has been working with Oldham College. Principal Alun Francis approached me to explore whether there was scope for applying current thinking around teaching and learning, curriculum planning and my experience of delivering CPD in schools to the FE setting. He was keen to move away from the one-off CPD day, where the impact can be marginal. He was also keen to explore the idea of a 'powerful knowledge' curriculum in FE, not least because so many technical and vocational courses are moving to include examined components.

Working with Alun and Rachel Irving, the Head of Teaching and Learning,  I spent a few days in Oldham talking to a range of members of staff – tutors, Heads of Faculty, members of the senior leadership team.  I observed a few lessons and got a feel for the context.  FE is radically different to school in some ways:  there are huge plumbing workshops, rows of painting and decorating booths, hair and beauty salons, design studios.  Oldham even has the front half of an aeroplane parked outside. Some students are engaged on 100% work-based learning programmes where they receive visits from assessors.  The Maths and English departments are dealing with over 1000 students all taking the same GCSE resit course.   As well as the scale, the language is different: it’s all about learners, tutors and ‘quals’.

But the basic business of teaching students so that they succeed is the same. Students need structure, guidance, support, quality instruction, high expectations, feedback and chances to improve. They also need to acquire knowledge. Teaching for Distinction has a clear double meaning: we want students to reach Distinction in their BTECs and other qualifications, and we also want tutors to teach with distinction, using evidence-informed wisdom about effective practice to design and deliver a successful curriculum for all learners.

We designed the programme around some core texts – Doug Lemov's Teach Like a Champion, which the college was already using, and the excellent Didau/Rose psychology book, which captures so much of the research evidence in an accessible format. We've also used some firm favourite resources for CPD such as the Austin's Butterfly video, the Learning Scientists' six strategies, the Rosenshine Principles of Instruction and the Tharby/Allison Making Every Lesson Count flow diagram.

I will report back on our progress, but here is an outline of what we're doing. There are six teaching and learning modules:

[Slide: the six teaching and learning modules]

The programme is designed so that it follows best practice, blending external input from me with regular CPD sessions every fortnight or so in between.  This will allow each faculty to design its own tailored programme so that the common learning is interpreted in the context of the needs of learners in specific technical disciplines.  So far we’ve planned up to the end of February 2018 but it will continue beyond that:

[Slide: the programme schedule, planned up to the end of February 2018]

Part of the programme has been training for faculty leaders on running effective CPD sessions.  We’ve borrowed the structure from Dylan Wiliam’s ideas about teacher learning communities:

[Slide: the CPD session structure, borrowed from Dylan Wiliam's teacher learning communities]

To give you a flavour of the content, here are some of the unit outlines and the headline course overview:

[Screenshots: unit outlines and the headline course overview]

I’d like to say a huge thank you to Alun, Rachel, Nick, Roger and all the other members of the Oldham College team who have made me feel so welcome. The first sessions for faculty leaders went really well and I’m very excited about returning to deliver the first round of training for staff.

If you work in FE and would like to talk to me about working with you, please get in touch.

Here is the infographic produced for us by Oliver Caviglioli: Modules 1-3 infographic (v2).


Three Assessment Butterflies – A review from Michaela


Winston Churchill once said that ‘success is stumbling from failure to failure without losing enthusiasm.’

Looking back on assessment in our first year at Michaela, I can now see what I was blind to then: we stumbled and blundered. What mistakes did we make, and how did we stumble?

We spent hours marking. We spent ages inputting data. And we didn’t design assessments cumulatively.

  1. Marking

First mistake: we spent exorbitant amounts of time in the first year marking, in particular marking English and History essays and paragraphs. We wrote comments, we set targets, we tried individualised icons, we corrected misspellings, we corrected grammatical errors, we judged and scored written accuracy, we wrote and shared rubrics with pupils. We spent hours every week on this. Over the year, we must have spent hundreds of hours on it.

The hidden pitfall of marking is opportunity cost. Every hour that a teacher spends marking is an hour they can't spend on renewable resourcing: resourcing that endures for years. Marking a book is useful for one pupil, once; creating a knowledge organiser is useful for every pupil (and every teacher) that ever uses it. Marking is a hornet. Hornets are high-effort, low-impact; butterflies are high-impact, low-effort. Knowledge organisers are a butterfly; marking is a hornet. We had been blind to just how badly the hornet's nest of marking was stinging us. So we cut marking altogether: we no longer mark at all.

  2. Data

Our second mistake: we spent far too much time in the first few years on data input. We typed in multiple scores for pupils that we didn't use. Preoccupied by progress, we thought we needed as many numbers as we could get our hands on. But the simplistic equation of 'more data, better progress' didn't hold up under scrutiny. Every teacher typed in multiple scores for each assessment, which were then collated so we could analyse the breakdowns. We were deluged in data, but thirsting for insight. There was far too much data to possibly act on. My muddled thinking left us mired in mediocrity, and we had invested hundreds of hours for little long-term impact.

What we realised is this: data must serve teachers, rather than teachers serving data. Our axiom now is that we must only collect data that we use. There’s no point in drowning in data, or killing ourselves to input data that we don’t use.

  3. Design

Our third mistake was this: we had forgotten about forgetting. We designed end-of-unit assessments that tested what pupils had only just learned, and then congratulated ourselves when they did well whilst it was very fresh in the memory. We had pupils write essays just after they had finished the unit. We coached them to superb performances – but they were performances that they would not be able to repeat on that text in English or that period of History even a few weeks later. Certainly, months later, they wouldn't stand a chance. Just as I would flunk my Physics GCSE badly if you asked me to retake it tomorrow, so, just one year on, our pupils would flunk the exact assessment that they had aced one year earlier.

Looking back with hindsight, these three mistakes – on marking, data and design – helped us realise our two great blind spots in assessment: workload and memory. We didn’t design our assessments with pupils’ memory and teachers’ workload in mind.

We were creating unnecessary and unhelpful workload for teachers that prevented them focusing on what matters most. Marking and data were meant to improve teaching and assessment, but teaching and assessment had ended up being inhibited by them.

We were forgetting just how much our pupils were forgetting. Forgetting is a huge problem amongst pupils and a huge blind spot in teaching. If pupils have forgotten the Shakespeare play they were studying last year, can they really be said to have learned it properly? What if they can’t remember the causes or course of the war they studied last year in history? Learning is for nothing if it’s all forgotten.


The Battle of the Bridge

Assessment is the bridge between teaching and learning. There’s always a teaching-learning gap. Just because we’ve taught it, it doesn’t mean pupils have learned it. The best teachers close the teaching-learning gap so that their pupils learn – and remember rather than forget – what they are being taught. We’ve found the idea of assessment as a bridge to be a useful analogy for curriculum and exam design. Once you see assessment as a bridge, you can begin to ask new questions that generate new insights: what principles in teaching are equivalent to the laws of physics that underpin the engineering and construction of the bridge? How can we design and create a bridge that is built to endure? How can we create an assessment model that bridges the teaching-learning gap?

We've found three assessment solutions that have exciting potential. Here are the reasons I'm excited about them:

  • They have absolutely no cost.
  • They are low-effort for staff to create.
  • They have high impact on pupils' learning.
  • They are not tech-dependent at all.
  • They are based on decades of scientific research.
  • They can be immediately implemented by any teacher on Monday morning.
  • They have stood the test of time at Michaela over the last three years.

I anticipate we’ll still be using them in three, six and even ten years’ time, and beyond.

In short: no cost, low effort, high impact, research-based, long-term solutions.


Three of the most effective assessment tools we’ve found for closing the teaching-learning gap are daily recaps, weekly quizzes and knowledge exams.

Over 100 years of scientific research evidence suggests that the testing effect has a powerful impact on remembering and forgetting. If pupils are to remember and learn what we teach them in the subject curriculum, assessment must be cumulative and revisit curriculum content. The teaching-learning gap gets worse if pupils forget what they've learned. As cognitive science has shown, 'if nothing has been retained in long-term memory, nothing has been learned'. Assessment, by ensuring pupils revisit what they're learning, can help ensure they remember it.

Pupils forget very swiftly. We use daily recaps, weekly quizzes and biannual knowledge exams to boost pupils’ long-term memory retention and prevent forgetting.
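An aside the post itself doesn't spell out: the forgetting described here is classically modelled by the Ebbinghaus forgetting curve, in which retention decays exponentially over time,

R(t) = e^{-t/S}

where R(t) is the proportion retained a time t after study and S is the stability of the memory. Each successful retrieval – a daily recap, a weekly quiz, a knowledge exam – effectively increases S and flattens the curve, which is the spacing logic behind all three tools that follow.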


  1. Daily recaps

Daily recaps are a butterfly: low-effort, high-impact. Departments create recap questions for every single lesson. Every single lesson starts with a recap. They are easy to resource. They consolidate pupils' learning so they don't forget. Every day, pupils spend up to 20 minutes in each lesson applying what they've learned before. In English, for example, we spend those 20 minutes on grammar recaps, spelling recaps, vocabulary recaps and literature recaps (with questions on characters, themes, plots, devices and context). We do recaps on the unit they have been studying over the last few weeks. We do recaps on the previous unit and the previous year's units. This daily habit builds very strong retention and motivation: pupils feel motivated because they see how much they are remembering and how much more they are learning than ever before. All recaps are open questions, and weaker forms might be given clues. The recaps are always written; they are no-stakes, without any data being collected; they give instant feedback, as they are swiftly marked, corrected and improved by pupils themselves. We ask pupils afterwards: 'hands up who got 4 out of 5? Hands up who got 5 out of 5, 100%?' Pupils achieving 100% feel successful and motivated to work hard to revise.
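As a sketch of the rotation this implies – questions drawn from the current unit, the previous unit and previous years – here is one way a department might assemble a daily recap. The question banks, proportions and function names are illustrative assumptions, not Michaela's actual scheme:

```python
import random

# Illustrative question banks at increasing distance from today's lesson.
banks = {
    "current unit": ["q1", "q2", "q3", "q4"],
    "previous unit": ["q5", "q6"],
    "previous years": ["q7", "q8"],
}

def build_recap(n_questions=5):
    """Draw recap questions across all time horizons, so older
    material keeps being revisited rather than quietly forgotten."""
    pool = [(horizon, q) for horizon, qs in banks.items() for q in qs]
    return random.sample(pool, min(n_questions, len(pool)))

print(build_recap())
```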


  2. Weekly Quizzes

Weekly quizzes are a butterfly: low-effort on workload, high-impact on learning. Departments create quiz questions for every week in the school year. Every week there is a quiz in every subject. They are easy to resource. They challenge and test pupils’ understanding. They are mastery tests, where most pupils should be able to achieve a strong result.

We have dramatically, decisively simplified how teachers score them. Instead of marking every single question laboriously, teachers simply sort them into piles. They make swift judgement calls about whether each pupil’s quiz is a pass, excellent, or fail. Each judgement is a simple scan of the pupil’s quiz paper and a decision as to which of the three piles it should be in. Accuracy isn’t perfect, but nor does it need to be: there are diminishing returns to perfecting accuracy.

The data is then inputted in 30 seconds into a beautifully simple tracker. Any pupil failing often is red-flagged, so teachers can focus in lessons on pupils who are struggling. And that is the only data point that our teachers have to keep in mind: which pupils are struggling most?
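A minimal sketch of what that tracker might look like if it were code rather than a spreadsheet. The pupil names and the red-flag threshold are invented for illustration:

```python
from collections import defaultdict

GRADES = {"excellent", "pass", "fail"}
FLAG_THRESHOLD = 3  # illustrative: red-flag after three fails

results = defaultdict(list)  # pupil name -> list of quiz judgements

def record(pupil, grade):
    """Record one pile judgement (excellent/pass/fail) for one pupil."""
    assert grade in GRADES
    results[pupil].append(grade)

def red_flagged():
    """Pupils failing often, so teachers can focus on them in lessons."""
    return [pupil for pupil, grades in results.items()
            if grades.count("fail") >= FLAG_THRESHOLD]

record("Pupil A", "pass")
for _ in range(3):
    record("Pupil B", "fail")
print(red_flagged())  # ['Pupil B']
```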


  3. Knowledge Exams

Knowledge exams are another butterfly – high impact, low effort. What I love about our knowledge exams is that they are cumulative, so that pupils revise and remember what they’ve learned. We have exam weeks twice yearly, in January and July (not half-termly). We set GCSE-style exams for depth, and we set knowledge exams to test a much fuller breadth of the knowledge pupils have learned. Knowledge exams are 35-question exams that take 60 minutes to complete. They are beautifully simple: they are organised onto 1 sheet of A4 paper, and they can be answered by pupils on one double-sided piece of A4. The breadth we can achieve with these exams is staggering. By Year 9, we have 3 knowledge exams in History, Religion, Science and English alone; they organise 35 questions on what pupils learned in Year 7 and 35 questions on what pupils learned in Year 8, centred on those years’ knowledge organisers. Twice a year, pupils are challenged to revise and remember what they’ve learned over all the years they spent in secondary school. This means they answer 12 knowledge exams – over 400 questions in total across 4 subjects. I am willing to bet that many of our teachers could not beat even our Year 7 pupils on these exams across all subjects! Imagine more than 24 sides of A4 packed with answers from every pupil in the school. The humble knowledge exam is a great catcher of knowledge.
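The arithmetic behind those totals, restated as a quick sanity check (this assumes, as the figures imply, three exams per subject across the four subjects named):

```python
subjects = 4             # History, Religion, Science, English
exams_per_subject = 3    # by Year 9, three knowledge exams per subject
questions_per_exam = 35
sides_per_exam = 2       # one double-sided sheet of A4 per exam

exams = subjects * exams_per_subject       # 12 knowledge exams
questions = exams * questions_per_exam     # 420 questions ("over 400")
sides = exams * sides_per_exam             # 24 sides of A4 per pupil
print(exams, questions, sides)             # 12 420 24
```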

As for marking them? We simply sort them into three piles: excellent, pass and fail. We don’t even record the marks. Teachers just note the names of pupils who failed multiple knowledge exams so we know who’s struggled.

Knowledge exams solve the breadth-depth tradeoff in exams. They give pupils maximum practice with minimum marking burden on teachers.

Simplicity must cut through assessment complexity. We should practise what we preach on cognitive overload for teachers as well as pupils. Assessment resources must be renewable, replicable, sustainable, scalable, enduring, long-term.

And the impact of recaps, quizzes and knowledge exams? We've had several (very weak) Y8 or Y9 pupils miss an entire term through unavoidable long-term illness, only to return fully remembering what they had been taught the previous term and previous year. It's an early indicator that the assessment strategy is bridging the teaching-learning gap and overcoming the savage forgetting curve. The real test of its impact will be GCSE results in 2019, A-level results in 2021, and university access and graduation beyond.

Blind, still

The two blind spots we’ve discovered – memory and workload – provide us with ways of interrogating our teaching and assessment practice:

  • How much are pupils remembering?
  • Where are they forgetting?
  • Where are teachers overloaded?

And I still think that we at Michaela can do more and find better ways of creating assessments with memory and workload in mind. I'm sure our pupils are not yet remembering as much as we'd like them to. I had a conversation with Jonny Porter, our Head of Humanities, just this week, about ramping up the previous-unit daily recaps we do. In this sense, even at Michaela we still feel blind on the blind spot of memory – pupils are still forgetting some of what we are teaching, and we want them to remember what they are learning for the very long term. Our ambition is that they will still know what we've taught for years to come: five, ten, twenty years.

Every day, teachers and pupils at Michaela see Churchill’s words on the wall: ‘success is never final; failure never fatal; it’s the courage that counts.’ It takes courage to radically simplify assessment – and courage to continually confront our workload and memory blind spots.

Knowledge-based scheme of work

The final piece (part 3 – search the blog for the earlier parts) by Robert Peal about how to use summative assessment in a knowledge-based curriculum:

Planning a knowledge-based scheme of work. Part 3: Summative Assessment

Like many teachers, I have spent the last week marking end of year exams for Key Stage 3. Having put some thought into the design of these exams, I have – perhaps for the first time – found this to be an instructive and, dare I say it, enjoyable process.

For the sake of this blog post, I am going to focus on our Year 8 exam, covering Early Modern Britain and the Age of Encounters. All our KS3 assessments share a similar format, and you can view them here, with examples from Year 7, Year 8 and Year 9.

In the past, I have struggled to find a satisfactory format for end of year exams, falling back on the unimaginative (and unhelpful) practice of mirroring GCSE examinations. Reading Daisy Christodoulou’s Making Good Progress, and talking to colleagues at the Historical Association Conference in May, helped me narrow my focus. At WLFS, the construct we want to assess in KS3 history boils down to three outcomes (four in the case of Year 9). Do pupils have:

  1. an accurate chronological framework of the period studied?
  2. a broad knowledge of the period studied?
  3. the ability to construct well-evidenced historical arguments?
  4. the ability to comment on the purpose and usefulness of historical sources? (Year 9)

Our end of year exams now mirror those outcomes. In Years 7 and 8, the exam consists of three sections:

  • Section 1: Chronology test (5 marks)
  • Section 2: Multiple choice quiz (20 marks)
  • Section 3: Essay (25 marks)

Section 1: Chronology test

The chronology test for Year 8 involved linking 10 events with 10 dates. The events were chosen from a list of 25 dates included in the pupils’ revision guide, spanning from 1453 to 1721. We didn’t expect pupils to memorise all of the dates listed. But if they had a good understanding of the historical narrative, and knew some of the most important dates (such as 1588 and 1688), then they would – we hoped – be able to piece together the correct answer.

Pupils gained half a mark per correct answer. As a test item, the chronology test tended towards bifurcation: 46% scored 5 out of 5, but a large percentage clumped towards the bottom end. Next year, we need to do more to ensure a strong chronological understanding amongst all our pupils. Perhaps our pupils should memorise all 25 dates?
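A minimal sketch of the scoring scheme, assuming a simple event-to-date matching format; the events and the pupil's answers below are invented for illustration:

```python
# Illustrative answer key: event -> date, drawn from the 1453-1721 span.
key = {
    "Fall of Constantinople": 1453,
    "Defeat of the Spanish Armada": 1588,
    "Glorious Revolution": 1688,
}

def chronology_mark(pupil_answers):
    """Half a mark per correctly matched date (max 5 for 10 events)."""
    correct = sum(1 for event, date in pupil_answers.items()
                  if key.get(event) == date)
    return correct * 0.5

print(chronology_mark({"Defeat of the Spanish Armada": 1588,
                       "Glorious Revolution": 1689}))  # 0.5
```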

Section 2: Multiple choice quiz

This quizzing portion of the exam has been designed to assess the whole domain of the Year 8 curriculum, in a way that the essay question could not.

In Making Good Progress, Daisy recommends using multiple choice questions for formative assessment. Though a good idea in principle, I have found MCQs too time-consuming to create, and too cumbersome to mark, on an ongoing basis. However, for our summative end of year exam, the investment in creating the MCQs was time well spent.

Once pupils completed their exams, our department entered all of the pupil answers into a question-level analysis spreadsheet (see here), so that we could see which questions pupils struggled with, and which questions pupils breezed through. Daisy suggests this is useful for highlighting pupil misconceptions, which it was. But I did wonder whether the varying success rates for different questions were more dependent on the design of the question than on the quality of pupil understanding.

[Figure: MCQ question-level analysis]
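The spreadsheet's core calculation is simple enough to sketch. Here is a hedged illustration of per-question success rates; the answer key and pupil responses are invented, and in practice this lived in a spreadsheet rather than code:

```python
# One row of answers per pupil, one column per MCQ (illustrative data).
answer_key = ["a", "c", "b", "a"]
pupil_answers = [
    ["a", "c", "b", "d"],
    ["a", "b", "b", "a"],
    ["b", "c", "b", "a"],
]

# Success rate per question: the fraction of pupils answering correctly.
for q, correct in enumerate(answer_key, start=1):
    rate = sum(row[q - 1] == correct for row in pupil_answers) / len(pupil_answers)
    print(f"Q{q}: {rate:.0%}")
```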

For example, this was the most challenging question for our pupils.

4. Which Catholic martyr did Henry VIII execute for refusing to give up his religion?
a. Thomas More
b. Thomas Wolsey
c. Thomas Cromwell
d. Thomas Cranmer
Success rate: 39%

The low success rate clearly has a lot to do with the proximity of the distractors: parents at the end of the fifteenth century really liked the name 'Thomas'.

Question 4 tested an item of declarative knowledge, but the next most challenging question for our Year 8 pupils probed their understanding on a more conceptual level, in the way that Christodoulou argues MCQs are well equipped to do.

20. How was the power of Georgian Kings further limited by ‘Parliamentary government’?
a. The king was not allowed to be a Catholic
b. The king could only choose ministers who had the support of Parliament
c. The king could not start wars without Parliament’s permission
d. Parliament had the power to appoint and dismiss the king
Success rate: 44%

The question hinged on the word ‘further’, and required pupils to discriminate between the outcomes of the Glorious Revolution in 1688, and the outcomes of the development of Parliamentary Government under George I. At the other end of the scale, the question pupils found easiest did surprise me.

17. What title was Oliver Cromwell given to rule England in 1653?
a. Lord Protector
b. King
c. Prime Minister
d. Lord Chancellor
Success rate: 98%

I thought I was on to something, as many pupils had written about Cromwell as ‘King’ during the year. But by the time of the exam, not a single pupil chose that distractor. Three did choose ‘Lord Chancellor’. Again, the question with the second highest success rate was not one I thought particularly easy when writing it:

18. What did the Bill of Rights do?
a. secured the legal rights of Parliament and limited the monarch’s power
b. banned the monarchy, and established England as a Commonwealth
c. gave equal political rights to all people in England
d. united England and Scotland into a single Kingdom
Success rate: 94%

But our analysis shows this question was simply too easy, and the distractors too dissimilar. Perhaps the most helpful outcome of this question-level analysis has been to home in on which questions worked well, and which did not – allowing us to refine the writing of the MCQs in years to come.

Section 3: Essay

Lastly, we set a mini-essay, with clear instructions that pupils were to write three paragraphs: two sides of an argument and a conclusion. With around half an hour to complete the essay, this seemed like a reasonable demand.

Throughout the year, our Year 8 pupils wrote five essays. They were on Henry VIII and the Reformation; the Age of Encounters; the Later Tudors; the English Civil War; and the late Stuarts/early Georgians. We chose two essay questions from these five units, each based on the same enquiry as the earlier essay question. We were not interested in tripping up pupils with fiendishly difficult questions. Rather, we wanted straightforward essay questions that gave pupils the best chance of marshalling their knowledge to support a reasoned historical argument. The two questions pupils had the choice of answering were:

  1. ‘The invention of the Printing Press was the most important event that took place in Early Modern Europe.’ To what extent do you agree with this statement?
  2. ‘Charles I only had himself to blame for his execution in 1649’. To what extent do you agree with this statement?

To mark the essays, individual teachers grouped them according to whether they were A* to E in quality. We then met as a department, saw how consistent the judgements were, made some adjustments, and assigned a numerical mark out of 25 to each script. It was a low tech version of comparative judgement, which seemed to work pretty well.

The correlation between pupil outcomes in the multiple choice questions and pupil outcomes in the essay was 0.7. Most helpfully, this highlights for our department those pupils who understand what we study, but still struggle with written work.

[Figure: correlation between MCQ and essay outcomes]
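For what it's worth, the 0.7 figure is a standard Pearson correlation of the two mark sets, and is easy to recompute; the scores below are invented placeholders, not WLFS data:

```python
from statistics import correlation  # Pearson's r; Python 3.10+

mcq_scores = [18, 12, 15, 9, 20, 14]     # out of 20 (illustrative)
essay_scores = [21, 13, 17, 12, 22, 14]  # out of 25 (illustrative)

print(round(correlation(mcq_scores, essay_scores), 2))
```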

Next year, we will use a selection of this year’s essays as exemplification material for each grade band, replacing the need for a mark scheme.

So that you can see how WLFS pupils are getting on with a knowledge-based curriculum, here are the Year 8 exemplification essays we will use. Each grade band contains three exemplar essays. Having been written under timed conditions, during exam week, on an unknown question, the quality did take a dip compared with the essays pupils had written throughout the year. However, I was still pleased with the way in which pupils were able to organise their knowledge into convincing historical arguments.

There is still much to work on (particularly on the explicit teaching of different lines of argument – more to follow), but I am happy that our KS3 curriculum is now equipping pupils with a deep well of powerful knowledge to inform their historical thinking.

A*-grade exemplars

A-grade exemplars

B-grade exemplars

C-grade exemplars