Carrying that can?

Word came to me this week, via a roundabout route, of a discussion between some upper sixth students. The gist of it was that they rated teachers who “taught them to be better people” higher than those who “just teach them to pass exams”. It gave me a real lift to hear that I was on their list. Now where’s my Student Voice questionnaire…?

I can imagine that such a conversation would give some of our masters a full-blown case of the heebie-jeebies. Here were real students expressing unsolicited views on which teaching approach they found the most inspiring – and they preferred just about the most unmeasurable of indicators imaginable, while implying criticism of the line we (and they) have been fed for the past decade or more.

But professionals MUST be accountable (mustn’t they?) – otherwise we won’t know that they are using all those evening work-hours effectively, or ‘justifying’ the relatively modest amounts we pay them. The trouble is, how (and when) do you measure ‘being a better person’?

There presently seems to be a degree of debate abroad regarding the best way to assess teachers’ impact. But let us not deceive ourselves: the main reason for the target culture has very little to do with the recipients of education living fulfilled lives, and very much to do with accounting for the beans that the State has lavished on the system, usually in ways devised by people who seem to have no concept of the more intangible aspects of life.

Like (I believe) many teachers, I have no issue with the concept of being held accountable for my actions: there have been two instances in the last couple of weeks alone where making an incorrect decision could have had potentially serious consequences for the individuals involved, and it was therefore important that correct procedures were followed.

But it becomes difficult where accountability is required inappropriately for the situation concerned – for example by holding people responsible for things over which they have little or no control. Even in those two recent cases, the decisions made involved more judgement than anything else, and in both of them (lacking the benefit of hindsight) all we could do was what seemed best at the time. How reasonable would it have been to have held people responsible had those decisions later proved ill-advised? And how much more difficult is it where neither procedures nor outcomes are clear or agreed?

High Causal Density dictates that this is actually the situation in the majority of inter-personal encounters (i.e. there are too many relevant factors ever to know them fully) – and educational situations fall well and truly within that.  In other words, in any situation where there is more than one decision-maker, holding one of them uncompromisingly to account for the actions of all others is simply not reasonable. Yet that is precisely what educational accountability has attempted to do, via beliefs that 100% of student outcomes were the direct result of teacher effectiveness.

Even the students know that this is not the case.

The quest for accountability has also conveniently downplayed the problems of proxy indicators. A number of commentators are increasingly reminding us of the obvious: learning is not a measurable phenomenon, except perhaps in the most mechanical of senses. (I can only ask, “What kept you?”)

Therefore we have to rely on proxy indicators such as exam results, lesson grading, pupil progress measures and student satisfaction, but it has been conveniently forgotten that all of these are just as subject to the vagaries of causal density as any other interaction, even assuming they are reasonable proxies in the first place. There are so many reasons why learning did or didn’t occur (always assuming you know what you mean by it) that to attribute it 100% to one cause is plain foolish, not to mention an injustice to any individual involved.

Reluctantly, I accept that this demand on us is not going to go away – so how could it be improved? If it is being realised that the usual measures don’t actually tell us much that is useful, then what should we do instead?

Given the likelihood that learning is not imminently about to become any more measurable than it ever was, I think the answer is to use precisely the same measures that we always have: exam results, lesson outcomes and pupil perceptions, amongst others. Perhaps even teacher perceptions…? The real shift needs to come not in what we use, but in how we use it. The main principle is not to lose sight of the limitations of proxy measures, and to restrain the clamour for accountability (in other words, blame) that can cause such distortion. Further, we should ensure that this demand does not itself introduce more self-fulfilling distortions into the equation.

For example, exam results are not the same as learning – but they are a reasonable reflection of how that learning can be deployed in certain, very specific and highly artificial circumstances. There are very many factors that can affect a given exam outcome, and therefore results should only be used as a general guide to the effectiveness of the lessons that preceded them, or to a pupil’s ‘potential’, rather than as cast-iron evidence about specific causalities. What’s more, distorting this by using (questionable) targets to promote learning in advance is likely to backfire by modifying the behaviours of all those involved – as that overheard student conversation suggested. Exam results are nothing more than an academic temperature-taking exercise, only useful when used retrospectively and summatively, not prescriptively. The psychology of contingent rewards sees to that. This, however, requires more relaxed attitudes to the use of such data than we have recently seen – let alone to the consequences that have sometimes been attached to it in the name of so-called accountability.

Similar things can be said of pupil progress and student voice measurements. In both cases they can no doubt be useful – but only when taken with a very large pinch of salt, and with a suitably wide remit when it comes to considering mitigating circumstances. Pupil progress needs to be defined in an appropriately loose way, even though that may mean the judgements become more subjective in nature. It also needs to use appropriately calibrated time scales – of which 20 minutes is rarely likely to be one. If that group of students is to be believed, a whole lifetime might be more helpful. And we also need to remember that ‘progress’ can impair real learning, for example by substituting short-term knee-jerk responses for something more lasting, or by disposing teachers to move on too quickly for real understanding to take root.

More obviously, student voice feedback should take account of the myriad of personal circumstances and perceptions that may colour what students write or say in such situations. A recurring example is the use of ‘fun’ as a criterion – which, as a group of pupils sheepishly admitted to me this week, is often shorthand for something that avoids doing real work, and therefore hardly a reliable indicator of lesson quality. Pupils by definition do not have a full and mature understanding of the process that they are experiencing, and that should never be forgotten.

I don’t think we will ever experience in the U.K. the Finnish outlook that education is simply too intangible ever to be worth trying to measure; that would involve just too great a leap of culture for our quantitatively-driven masters who always Need Answers. Given that fact, we might accept that there is little wrong with the indicators that are currently used – and in some cases have been for decades. It is what we do with them that needs to change.
