I don’t normally comment on specific classroom practice, partly because there is already so much out there about it, and partly because I tend to be sceptical about broad claims made for or from any one person’s experiences. I’m also not sure that pupils should be used as guinea-pigs, though I suppose the research has to be done somewhere…
But I thought I’d bend the rule to discuss one particular experience from recent weeks, inasmuch as it may shed just a little light on a wider current debate. I’m certainly not claiming any originality for the technique – it’s hardly complex – and nor am I going to draw any hard-and-fast conclusions from what is after all one very small sample. I’m sure more sceptical readers will see many flaws in what I am about to describe, to say nothing of the philosophy underpinning it.
There seems to be a growing torrent of people questioning more and more of the erstwhile non-negotiable wisdoms of recent years, and this week it seems to be the turn of AfL (Assessment for Learning). As several writers point out, this has been a central plank of the drive to quantify learning, and as those writers are now also pointing out, for example here and here, the premises upon which such things are based seem shakier the harder you look at them.
As David Didau (a.k.a. Learning Spy) observes, AfL is a particularly sacred cow to slaughter. That may be so, but I have had my reservations about it all along. It strikes me that the main reason we ‘need’ (want?) to be able to measure learning has nothing to do with the process itself, and once again everything to do with validating and accounting for our professional systems. I’m not saying it isn’t helpful to know whether pupils are actually learning stuff, but the notion of being able to control and measure that process accurately enough to direct one’s planning lesson by lesson always struck me as daft – unless you are obsessively worried that those outcomes might not be ‘good enough’. As Didau and others are now pointing out, not only is learning invisible, but the relationship between it and the teaching process is not the mechanistic, directly causal one that AfL presupposes. The only perplexing thing about all this is the expression of apparent surprise and disappointment that this may be so. I think this whole edifice was just another case of wishful thinking on the part of the educational engineers.
So I offer the following experience as a small contribution to the debate. The activity concerned was most definitely not conducted with formal AfL in mind; in fact it took place before I became aware of this debate at all. All I was curious to know – in the roundest, most subjective terms possible – was what my pupils had retained in their memories from a sequence of about five or six lessons. I also wanted to create an opportunity to discuss with them the matter of personal attitudes and responsibility for learning.
It just so happens that my findings may be of some relevance.
For the first time in a number of years I have used the very simple technique of providing pupils with an A3 sheet of paper, and on the screen at the front a template with selected prompts from a recently-completed topic. All they have to do is write down as much as they can – purely from retained knowledge – about the various aspects of the topic just learned. I do not give pupils any warning of this as I want them to write without the boost of prior cramming, just to see precisely what may remain in their longer-term memories. As a sop, I allow them after some thirty minutes to choose to consult their exercise books – on the provisos that they change writing colour and that once the book is open it can’t be closed again. (Quite a few chose not to.) They are also told that ‘book knowledge’ scores credit at half the rate of ‘brain knowledge’. I actually look largely at the first section.
From a marking point of view, it is relatively straightforward, with L4 constituting reasonable description, L5 starting to offer explanation through to emerging analysis at L7 – all fairly broad-brush stuff, as it inevitably is in humanities. I have now done this task with a number of different classes from years seven to nine. Note however, that at no point did I mention targets, levels or expected outcomes to the pupils – I just left them to write unhindered by input from me…
What was almost more interesting than the actual results was the pupils’ initial reactions to the task. One should bear in mind that our ability range is positively skewed, but that many pupils perhaps have an excessive sense of entitlement resulting in sometimes-complacent attitudes to their work and high expectations of what teachers will do for them.
Many reactions were along the lines of “you can’t expect us to know all that!” and protests that they had been given no notice. These were somewhat allayed when I explained why we were doing the exercise and precisely how it would work. But there was still quite a lot of incredulity that I was expecting them actually to know significant amounts unaided. As planned, however, this provided ample opportunity for later discussions about why that was, and the implications for effort levels. It was also significant to note that most of the hardest-working pupils offered little protest and generally just got stuck in.
I was fairly hard-hearted with all requests to provide answers or significant further guidance; however, I did point out that getting started was often the hardest part, and they should just wade in with one of the questions they felt they could answer – hence the diagrammatic format. Indeed, once they started, it was gratifying to note that many pupils were indeed able to write quite a lot, often more than they expected. This in turn provided opportunities for positive feedback.
Conversely, there was a significant minority who were able to write very little; no prizes for guessing what the general attitudinal profile of those pupils tends to be. I’m afraid I left them to struggle and this later provided a useful opportunity to point out to the class as a whole the consequences of not making an effort in lessons. Who knows whether this will have a long-term benefit…
On marking the work, it was indeed pleasing to note good levels of knowledge, though less ability to explain than describe; that is of course consistent with cognitive development, but it makes me wonder whether some of these skills are less developed than they sometimes appear in more structured tasks. It was notable, however, that most of the grades were lower than the official module assessment, conducted about a week previously, which explicitly instructed pupils what they had to write to score a given level and gave them more material resources to draw on. What’s more, that task was done with minimal protest.
I think it may be indicative that taking away teachers’ ‘scaffolding’ of pupil tasks and asking them to rely purely on their own resources both elicited a very different reaction from the students and also had an impact on their apparent attainment. I will leave readers to decide for themselves which they think is the more representative and educationally-sound practice, but I know what I think. It may also be worth reiterating an emerging argument that emphasising ‘progress’ may even come at the expense of genuine learning.
A further clear distinction was that many less-able pupils made comments along the lines of “I remember doing it, I understood it at the time but I can’t remember it now” and “I have a problem remembering things”. This is more difficult to draw conclusions from; it has certainly made me reflect on whether I am doing enough to help such pupils retain what we cover – but also whether it is in fact a reasonable or necessary expectation that they will do so. Clearly it’s necessary for formal examinations, but they seemed to be referring not to subject-related limitations (such as “I didn’t understand it”) but to more fundamental cognitive issues. How much is it really possible to do about that, and how accountable should we therefore be for the outcomes?
I also think the reaction of the pupils to being expected actually to have retained any knowledge long-term was interesting. One always needs to allow for youthful hyperbole of course, but it makes me wonder whether this might show something about younger pupils’ attitudes to and understanding of the acquisition of long-term knowledge, versus short-term recall for the purposes of demonstrating ‘progress’ and hitting NC levels. Their appreciation that learning was about anything more than a transient focus on something seemed surprisingly weak. This may be why the demands of G.C.S.E. and ‘A’ level hit many perfectly capable pupils hard – perhaps they simply aren’t being prepared for the long-term retention of knowledge, a skill that must surely take years to develop. If this is the case, that lack of knowledge may also be hindering their ability to develop more complex critical skills.
If so, in real educational terms this expression of Learned Helplessness is probably doing far more damage than any inability accurately to measure what learning does go on. And what’s more, collectively it’s our fault.
What’s concerning about the questions now being asked about AfL is that the proposed solutions seem to involve tweaking systems, making them more complicated still, or even inventing completely new ones. Why we can’t just accept the fact that learning is a qualitative, oblique, almost unmeasurable process and stop worrying, I really don’t know. Formal exams will be a good enough proxy test – in the fullness of time. If it’s good enough for the Finns (and the Swiss) then it’s good enough for me.