The Intelligent Fridge

Once upon a time, a technologist in need of some extra cash decided to develop the intelligent fridge. S/he reckoned a device that could work out what was inside it and communicate this directly to the supermarket would be a sure seller. All the technology existed to make it a reality.

And so development began. An analysis of the problem identified the key elements required of the technology. Each item would require a chip or barcode so that the fridge could identify it; the barcode already existed. The scanning technology also existed to make this possible – until the technologist realised that users would need to scan each item as they took it out and replaced it, something that would be less than 100% reliable.

No matter: taking inspiration from airport x-ray scanners, the technologist devised a system whereby the fridge could scan the items within it at regular intervals – once it had worked out how to ‘see’ round items that were obscuring others. This was Progress. But another issue presented itself: while the fridge could now identify its contents, it could not ascertain their state of emptiness. How was it possible to know how much milk was left inside a bottle, or how many olives inside a jar? And given the varying consumption rates of different items, how could it work out when things actually needed replacing? Maybe the fridge needed to learn the item-by-item use-rates of its users.

Undeterred, the technologist deployed the weighing technology used in supermarket self-checkouts. The fridge would contain weight-sensitive pads that could sense small changes in the contents of the containers. There was only one problem: it meant that each item needed a distinct allocated place within the fridge and relied on the user to replace each item accurately each time.

No matter, the technologist reasoned, surely it is possible to combine these technologies so that the fridge recognises products and weighs each no matter where it is positioned. But this meant the installation of highly complex weighing systems that were not confused by the varying size of products or the infinity of subtle variations in the positions of those items.

The costs of manufacturing the technology started to soar. No problem, the technologist thought: time and mass production will reduce these costs. A few prototypes were built and put into homes for trial. They were an instant flop.

The technologist conducted user-surveys to find out why the fridges were not meeting needs. They were identifying, counting and weighing successfully, and the guinea-pigs had not had to pay the now vastly-inflated price of the fridges. Why were people not going home to find the supermarkets delivering precisely what their fridges knew they wanted?

The fridges, it seemed, were over-ordering, and people were being inundated. The technologist had resorted to using sell-by dates to work out what needed replacing, ignorant of the fact that these were often set by supermarkets to increase consumption rather than as a true indication of perishability.

The technologist built in a date-bias to compensate – only to find out that some goods were now perishing and turning the fridges green. Further adjustments were made, but still the users were finding that the fridge could not make the correct decisions. More consultation ensued.

The answer came as a bolt from the blue. As one user put it: “I don’t want the same stuff every week – and sometimes I change my mind at short notice. My fridge can’t know that”. A second user added, “The increased hassle outweighs the benefits”.

Following a conversation I had last week, the above may go some way to explaining why intelligent fridges have yet to make a major impact on our homes.

I have been reading a number of blog posts written by people who have attended various educational seminars and conferences that have taken place recently. As usual, I’m afraid, they both leave me cold and make me wonder what on earth they really have to do with teaching specific young people. Far be it from me to say that such conferences should not exist, nor that teachers should not use their personal time attending them – maybe the distinctly unscientific ‘inspiration factor’ is their real raison d’être… but I do wonder whether the outcomes really add much to the day-to-day job of teaching real people.

Parts of the profession show no sign of ending their faith that the future of education lies in the technical fix. I have not been to such conferences, so I accept that my view may be inaccurate – but I do take the trouble to read people’s reports, and they make me no more inclined to attend. As the fridge story shows, no amount of ‘science’ can overcome the simple fact that people are not logical, predictable (learning) machines, but unreliable, quixotic animals whose real needs no rational, technology-based system will ever fully fathom.

The only way to make it work will be to turn people themselves into machines – and that is something I will resist at every turn. Instead, I think I will continue to develop my understanding of the human species and its needs by my much-enjoyed pastime of people-watching. Come to think of it, maybe that’s where the real benefit of those conferences could lie after all…


You Take the High Road…

the path forks

The end of a year in which the contradictions within education became even more apparent – as did the inequalities between the paths one can take…

The balance is shifting towards traditional teaching.

Though my instinct has always been for traditional techniques, years of exposure to progressive doctrine had their effect, especially while one’s perceived success as a teacher palpably hung on its adoption. But things have begun to change: most importantly, a coherent rationale is emerging for traditional approaches. This is important because it counters the claim from progressives that traditionalism is little more than the confirmation-bias of a bunch of luddites.

But whether it will translate to anything more substantial in schools remains to be seen. From my own experience, the progressive message has gone distinctly quiet, but the alternatives are hardly being given coverage.

My own determination to adopt a more traditional approach was sustained. I am not claiming unequivocal success: as with all outcomes in education, it’s not as simple as that. But despite the difficulties encountered with pupils whose expectations were clearly of something else entirely, I can cautiously say that plenty did start to exhibit (and expect) more formal educational behaviours.

We need a clearer path for classroom teachers

One of the problems with traditional teaching has been the lack of career progression. Once one had mastered one’s classroom, there was little left to do except gradually turn into Mr. Chips – hardly a mark of success in a career-obsessed world. This, fundamentally, is the reason for the growth of Management – it provides a more acceptable and defined career path for teachers. But in doing so, it removes people from the core business.

Many of my teacher-friends in Switzerland exhibit little desire to take the management route: they seem happy developing their academic and pedagogical skills, and this seems far more acceptable than it is in Britain. I suspect that the flatter management structures and the relative lack of career snobbery make it easier. My closest friend in particular seemed perfectly happy until his recent retirement (and despite his doctorate) to develop his personal practice without the need for hierarchical validation; he is not alone.

In the U.K., remaining in the classroom is still seen as a dead-end that is becoming increasingly unattractive due to the growing pressure on classroom teachers from elsewhere. We need a more appealing second route – and it needs its own type of performance criteria.

Despite initiatives such as Advanced Skills Teachers, it is not easy to pin down good teaching in ways that make it short-term accountable – or rewardable – in a system dependent on tick-box criteria. But it may not be necessary either. So long as teachers’ incomes are not significantly eroded, people who follow this path may be less concerned about hierarchical prestige or financial reward in the first place. What is more important is preserving the autonomy for them to teach as they need.

It is quite possible for teachers to ‘plateau’ once they have mastered their classroom – but I increasingly think this is not the end of the matter. My reading over the past couple of years has yielded many insights into behavioural and philosophical matters that have enriched my understanding of what I do, materially influenced my professional behaviour and increased the effectiveness with which I respond to my pupils.

Little of this is outwardly observable, let alone box-tickable, and little of it needs to be implemented in an unremitting, doctrinaire way. It is more a matter of the person one becomes – and the ways in which this informs one’s personal practice. There is a pleasing solidity to the inner knowledge that, at last, one has reached a degree of professional depth and resilience that endures, no matter what ‘the system’ throws at you.

So just at a time when the future appeared to promise only ‘more of the same’, through the clouds new heights have become visible – and maybe therein lies a way to develop a more profound definition of what it means to be a classroom professional. It needs to become more possible and acceptable for people to pursue this route – and this means providing the means for development equal to those available to managers.

But…

You can’t go down both paths.

A vacancy arose for Head of Department, and at long last I felt confident that I could do the job and address the specific issues. But it became clear that I am too far down the Mr. Chips path and the role went to a young chap a couple of years in. I am sure he will learn (steeply) – but I doubt the wisdom of closing off such roles to those with the insight of years; time was when many heads of department were in the latter stages of their careers.

Maybe I am a late developer – but I know things now that would make for more considered decision-making, and the implementation of far sounder educational practices, than when I was younger. I think it was the unformed awareness of this that prevented me from making a more convincing case for promotion in my own early years. But external appearances count – even though, as Kahneman observes, brassy confidence may simply betray a lack of awareness of the limits of the possible. It seems as though one must choose at a stage of one’s career when these greater truths are still invisible.

There is still only one route open to the success-hungry teacher – and it leads away from the classroom. What is more, those left behind are ever more closely controlled by people who took it. By taking the path labelled ‘management’ one starts dining at entirely different tables – and one’s diet becomes that of effective management rather than effective teaching; they are not necessarily the same thing, even if those in charge seem to think otherwise. Thereafter, developing further as a teacher is either taken for granted – or of limited interest. Clearly, management is needed – but why is the path of pedagogy allowed to peter out in a thicket, while that of management leads on to ever richer pastures?

How will this lead to better education in the future?

TP will be taking its customary break over the summer; no doubt issues will arise that require comment – but normal service will resume in September.

Smacks of inconsistency

Back in the days of yore, when such things were still permissible, I might have wanted to investigate the intervention effect of smacking children. (This is not about the rights and wrongs of corporal punishment; I am using this rather politically-incorrect example simply because the relationship between cause and effect might be reasonably – though not perfectly – observable).

It would not be sufficient for me to ‘know’ (read ‘believe’) from experience that smacking children did indeed largely cause them to cry: in order to justify the intervention, I would need research-based evidence.

So I could set up a controlled study to investigate. I would need to establish that the smacks being administered were totally uniform in nature and circumstance (smacking machine, anyone?) so that variations in the strength of the smack could be factored out of the results. If I could quantify the strength of the smack (also difficult) I could then observe how many children did indeed cry as a result of a certain smack. But could I be absolutely sure that the crying was the result of the smack and not something coincidental and entirely unrelated?

I will ignore this possibility for the sake of pursuing a simple argument. I could conduct this research for as long as I felt necessary to collect a representative data set, after which I could perhaps arrive at something approaching an effect size. We might observe, for instance, that 95% of children did indeed cry when administered the standardised smack. I might decide that this was a sufficiently strong effect to justify the intervention.
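To make the arithmetic of such a trial concrete, here is a minimal sketch in Python. Everything in it is assumed purely for illustration – the 95% ‘true’ crying probability, the sample of 1,000 children and the fixed seed – and it does nothing more than show that an observed rate only ever approximates the underlying one:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical model: each child independently cries with probability p_cry.
p_cry = 0.95        # assumed "true" effect - not a measured figure
n_children = 1000   # size of the imagined trial

# Count how many simulated children cried when "smacked".
cried = sum(random.random() < p_cry for _ in range(n_children))
observed_rate = cried / n_children

# The observed rate approximates, but rarely exactly equals, p_cry.
print(f"Observed crying rate: {observed_rate:.3f}")
```

Re-running without the fixed seed gives a slightly different observed rate each time – the sampling noise that any real effect-size estimate inevitably carries.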

But even a high figure like 95% raises difficulties; the first one concerns certainty. As previously mentioned, there is a small but possible chance that some of the crying was not in fact caused directly by the smack, but by any one of a multitude of other, unknown factors. The timing of the crying might be entirely coincidental, or it might be indirectly causal, for example if the trauma or anxiety of the moment caused other emotional upwelling to result in crying where none would have resulted from the pain of the smack alone. We might also wonder whether the 5% of children who did not appear to cry did in fact go to their rooms late that night and sob their hearts out, unseen by anyone – it was just that our effect-measuring time frame was wrong.

In fact, that 5% creates all sorts of problems: in a class of twenty (if only…) it would mean that one child did not cry when smacked. This is enough to disprove the claim that ‘smacking (always) causes crying’. We then need to decide why this child did not cry, and whether this is sufficient to render the entire intervention/theory incorrect. The reasons could again be numerous, ranging from a higher pain threshold, to defiance, to being accustomed to smacking at home, to an acceptance that the smack was warranted, a stiff upper lip – or the exercise book inside his/her trousers. I might be able to discover these things, but there again, I might not – and quantifying them so as to factor them into my research could be exceptionally difficult.

Even if I decided that I couldn’t do this, but that 95% was good enough, I would then be faced with anticipating the future effect of the intervention; even if the 95% figure proved accurate, there is no way of knowing whether, on the next smacking event, it would be the same individual who did not cry, or a different one: there are too many unknown factors ever to be certain. Indeed, the 95% might also change, perhaps because that one individual experienced peer pressure to conform, or because his/her defiance had galvanised a few others – or perhaps simply because they were becoming habituated to the treatment. In fact, we have already manufactured a false dichotomy by framing the responses as ‘cried/didn’t cry’ – when there are numerous other possible consequences and effects of that smack.
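The point that the exception can shift between individuals, even while the aggregate rate holds steady, can also be sketched; again, every number here (a class of twenty, the 95% probability, five ‘smacking events’) is an assumption for illustration only:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

# Hypothetical model: a class of 20, each child assumed to cry with
# probability 0.95 at each "smacking event", independently of the others.
class_size = 20
events = 5
p_cry = 0.95

results = []  # which children did NOT cry at each event
for event in range(events):
    non_criers = [child for child in range(class_size)
                  if random.random() >= p_cry]
    results.append(non_criers)
    print(f"Event {event + 1}: non-criers = {non_criers}")

# The aggregate rate stays near 95%, yet which individual is the
# exception varies from event to event: the statistic cannot say who
# will fail to cry next time.
```

Even in this toy model, knowing the overall rate tells us nothing about which child standing in front of us will be the exception on any given occasion.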

We are also left with the dilemma of what to do with the 5% who don’t cry. Should we administer a sharper version of the same treatment – or will this be counter-productive? Or should we take a different approach entirely; if so, then what – and is it practicable to do so? How do we know? What if that 5% turns out to be a different individual at each iteration? What should we do then? Those cumulative five percents start eating into our effect size, and might suggest that our claimed 95% certainty is an over-estimate. Or are the reasons why our intervention is not working so completely out of our control that nothing we can do will work (whether we know it or not)?

Then there is the whole problem of whether we have accurately identified a suitable and desirable outcome for the proposed intervention – which, in my example above, we very probably have not. What’s more, we need to know that we can implement the same intervention again perfectly – or at least with only controllable variations – each time we use it. So is it possible to maintain absolutely the right intensity of smack, regardless of any other factors that might affect it? Can we be sure that circumstantial factors will not influence the effect in ways that render it ineffective? (Maybe children cry more willingly at Christmas, or on their birthdays, or if they are second children, or if they have emotional parents? Maybe the smacks vary according to the smacker’s mood, tiredness or liking of the child.) And can we be sure that the same reactions will be observed in an entirely different group of children, perhaps with different backgrounds or of different ages?

 

In the past few weeks, several people have said to me that they would rather rely on the results of theory and research rather than ‘hunches’ – or as I prefer to think of it, Masterful Experience. In one case, very few reasons were given for this, and my reaction was that this sounded pretty much like a hunch in its own right, a case of someone being blinded by apparent science without really feeling the need to ask too many questions.

A second suggested that individual experience is often misleading and subject to confirmation bias. This may not be wrong, though one might have hoped that a genuinely reflective, experienced professional would at least attempt to identify and allow for such things – so it is something of a hunch in its own right to assume that they rarely or never do.

I am not suggesting that careful research can never shed light on some things (though just how careful careful needs to be was shown by the potential flaw in the research quoted even by Dylan Wiliam in this exchange here (follow the comments)  – and this in a case where research might just have shown up something that was counter-intuitive). But if we take my rather absurd example above, the implications of properly researching even something as simple as a fairly-mechanical connection between smacking and crying are so huge that it is all but impossible to factor in everything that needs to be known, let alone quantify it accurately. The above example was chosen because of the relatively direct, immediate effects of the ‘intervention’ – so how much harder it must be when the whole process is cognitive-intellectual in nature – and when it is not even easy to agree on the existence or desirability of a given effect.

Research could no doubt tell us that the majority of children do indeed cry when smacked – but it cannot offer us complete certainty without over-stepping its confidence levels, or tell us why, or give us much guidance on what to do with the exceptions. It is, in fact, reduced to the role of generalisation – which is little that wide anecdotal experience of human behaviour could not tell us anyway. And even a high correlation neither guarantees the rule, nor tells us anything specific enough to know what to do with a specific individual at a specific moment. What if the one child standing in front of us right now is indeed the 5%? We will simply never know until after the event – and that is enough to make it impossible for us to act with any certainty; we are left with nothing more than our best guess.

Maybe one day I will be proven wrong – or maybe I already have been. By training and inclination I am neither a research scientist nor advanced statistician, and perhaps the foregoing does nothing so much as reveal my woeful ignorance. I apologise to any of that inclination who are currently tearing at their hair on my account. But even if I am wrong, what are we to make of the ethical implications of such powerful and complete mind-control? Do we really want to arrive at such a situation?

All I am offering is the honest attempt of a relative lay-person to scrutinise whether all that is presently being claimed for ‘research’ warrants further trust being put in it. We are so often required to take much of what we are told on trust – and too often this has been shown to be either unworkable or scientifically flawed. The foregoing represents, in my opinion, a conscientious attempt to reason through the claims being made by some for research. I cannot yet see that the objections have been addressed – and so, would it not be professionally irresponsible of me to act on such a hunch?

And I won’t even start to speculate what would happen if smacking were shown to have a large effect size on children’s learning…

________________________________________________________________________________

No children were harmed in the making of this blog post.

Global warming – bring it on!

Despite the lovely weather, I’ve been spending much of the last few days marking G.C.S.E. coursework, to get it out of the way before my wife also finishes for Easter, today. And yes, sadly I have been spending a fair bit of time scanning the blogs too. At least I can claim a greater purpose for that on this occasion, but more of that another time.

Having read a few blogs from teachers of Maths, English and other important subjects, I was quietly congratulating myself on teaching a relatively straightforward one like Geography. Compared with the abstraction of the various theories of how to teach children to read and so forth, Geography really seems quite unencumbered and plain sailing. After all, despite the sociology-type projects, it’s still largely about knowledge – and of something relatively tangible at that: the world about us. Unlike, say, a mathematician or linguist, I can easily stick a photo of something up on the screen at the start of the lesson and in effect say, “Understand that!”

Though contentious, Robert Plomin’s work on heritability seems to confirm this experience. His findings suggested that subjects such as the humanities have a lower heritable component than those that depend on more abstract principles such as the rules of grammar. It certainly seems plausible to me that tangible, real world phenomena – of which children might conceivably have some prior experience – give us a bit of a head start. Perhaps even over history, whose main vector – time – is equally abstract in some ways. I suppose the use of artefacts and source documents is one way round this. It also seems true that it is the abstractions, even in Geography, that children struggle with; that’s not really surprising.

I’ve also just finished reading Daisy Christodoulou’s book Seven Myths About Education, which I found to be an impressive justification of the call to teach knowledge, and which has had me thinking about the place of Fact in teaching.

So it would seem I have life relatively easy. But then it came to marking the coursework. Discretion prevents me saying anything too specific, but suffice it to say that the provisional marks fully support my knowledge of the pupils concerned over the past two years, in almost every case.

Thanks to the wonders of the options system, I have been teaching a ‘set’ whose performance regularly covered every grade from A* to U, all in the same lessons. This was consistent with my impression of these children based on years of prior experience of teaching exam classes. We all know that the advised method of dealing with this situation is to differentiate like mad, but this presented me with a problem that I’ll elaborate on in a moment. However, the situation was compounded by the fact that those same pupils had minimum target grades, with one exception, lying in the range A* to C – despite the fact that there is more chance of the rocks melting in the sun than of some of those targets being met.

Now what is one to do in such a situation? For a start, how does one differentiate facts? I know that facts, knowledge and understanding aren’t the same thing, but the point still stands: Geography depends on knowing (and applying) a lot of factual information. Either you know it or you don’t – for all that you can be selective over which facts, or how many, you introduce. It’s not like skills, which you can perform to varying degrees of competence. In the end, getting an A* grade depends on knowing a lot of stuff.

When it came to fieldwork, on which the controlled assessment depended, we all had a look at pretty much the same coastal features and either understood them or not. When it came to write-up time, I could only stipulate the same procedure for all – as, after all, that is the one they all had to follow. Surprise, surprise: those who followed it closely did indeed end up with high grades, while those who didn’t – or couldn’t – did not. Obviously, in controlled assessments, the scope for teacher intervention is NIL. And rightly so – this is, after all, the logical conclusion of my other exercise, when I sprang brain-only assessments unannounced on lower school pupils. An exam should indeed be a test solely of what the individual can do unassisted.

So I now have the prospect of a class, a significant number of whom have not hit their minimum target for the coursework component, and now have an uphill task ahead during the written exams if they are to meet their targets (and, of course, mine…). Do I tell them the grades or not? This despite the fact that virtually all have accounted for themselves as I expected from knowing them for two years or more. Just who is the better judge of these individuals – me, or a number-crunching machine? It strikes me that this could be an entirely manufactured problem, and one that, if I don’t manage it carefully, could discourage more pupils than it does the opposite.

There’s a further twist: when one is teaching a class of this sort, an issue that normally remains in the background jumps out to ambush you. With some pupils in the class very capable of getting A* grades, it would be deeply irresponsible not to provide them with the teaching they need to reach them. Despite differentiation (which as I said is not so easy when there’s a specified body of fact to cover), one inevitably ends up teaching ‘high’. In fact, given the target grades of the others, that is precisely the right strategy, as in theory, they aren’t so far behind in any case. This is what I did, even though the material was more complex than some of the pupils could sometimes handle. (Yes, the intervention did kick in at that point). If I ‘taught down’, not only did I risk those A* pupils, but I ended up teaching at a lower level than the targets said the other pupils could achieve too. What to do?

As I mentioned before, pupils often report that understanding geographical information during a lesson is not a problem – there may be a lot of it, but much is fairly straightforward, until you get into the underpinning theoretical principles (which we do need to do). Even many of the less able pupils proved able enough to complete decent class work, the main impediment for some indeed being their written skills. But there is also no getting away from that. What seems more of a problem is long-term retention, despite the usual testing regime, and when the immediate feedback from the student is positive, it makes judging the success of any strategy all the more difficult. This seems to be a fundamental cognitive problem, and one that maybe the teacher has limited ability to remedy. I think we have to accept that some students are never going to achieve high grades – and wishfully inflating their targets isn’t going to change that fact.

I’m still trying to formulate a specific conclusion to this dilemma, so this post will end a little open-ended. One thing I am not questioning is the value of teaching knowledge; Ms. Christodoulou’s book has served to reinforce my prior belief that only through knowledge can understanding and skills develop. (Or is that confirmation bias again?)

The niggling doubt in my mind at present concerns more the pernicious effects that learning targets can have on teaching – all the more so when they are of dubious accuracy or relevance. One thing it strikes me we should not do with targets is to tell those to whom they apply what they are.

Or maybe the only answer is more global warming, then perhaps those rocks will melt after all.

Mere anecdote?

I must admit, I find the idea of Edu-blog of the Year rather depressing. It’s a (vaguely) free world of course, but for me, blogging is not about reinforcing one’s credentials – let alone those of the educational establishment – but simply a chance to air and share a few thoughts on my years of experience. Isn’t that enough?

I think the second-most dispiriting aspect is the degree to which the debate in the edu-blogosphere revolves around ‘The Answer’ to education, people arguing the toss over the ideas of this or that educationalist, management strategy or political policy. Even the redoubtable Andrew Old gets himself embroiled in debates on the veracity or otherwise of theoretical claims, as his latest post shows. This has of course been going on for decades, well before the birth of blogs, but I think it has got worse the more accountable education has supposedly become. The ever-more frantic clutching at the straws of so-called educational ‘success’ (which normally just means good exam results) seems to be reinforcing the belief that there is, somewhere, a Holy Grail. It borders on the obsessive, and one might hope that educated individuals would realise just how unlikely this actually is.

The most dispiriting thing of all is the bloggers who earnestly cite their personal interests as variants on “reading and writing about education”. Dear me! Please get this in proportion! I know that education is important (and I know that I spend good time blogging too) but this most definitely is not my spare-time interest, more an extension of professional development. What kind of a life-example would this be setting to young people, if they could see?

Anyway, now I have that off my chest, here’s why I think a lot of it is more a symptom of O.H.T. syndrome (obsessive hobby-teacher) than anything more significant.

Last week, I delivered a CPD session that was, for the second year running, very well received by the (admittedly small) audience. The title was “Systems – and why they let us down”. The root of the session was precisely the one that underpins this blog:

There is no point in looking for The Answer.

There simply isn’t one. Or to be more precise, the answer is so complex that our puny human minds have no chance whatsoever of understanding it in any useful way.

This is not as defeatist as it may seem. Cause-and-Effect no doubt exists in social constructs such as education, as it does in other aspects of Existence. But as Duncan Watts wrote in Everything is Obvious (when you know the answer), human behavioural systems are simply so complex as to be unfathomable. He should know, having taken a physics and engineering doctorate before switching to social sciences – which he admits he found infinitely more complicated.

I began my CPD with a clip from this video:

The thing to note here is that the seemingly-repeating cycle of bottles passing through the machinery is actually nothing of the sort – each pass represents a unique new event, albeit cosmetically identical to the one that went before it. Humans have learned to control cycles that involve the mechanical processing of inanimate items very closely (though they still break down – just look at photocopiers). The mistake we make, however, is to think we can treat sentient beings in the same way. If you were to put, say, cats on that production line, the consequences would be catastrophic… Animate objects simply do not behave to order in the same way as inanimate ones. That isn’t to say there is no causality behind their (re)actions, but simply that it is too complex ever to know in a useable sense. The same cat might react differently on two consecutive passes through the machine, and the reasons why are too many ever to know accurately.

The same is true of humans. Even in the average classroom, we have between 25 and 30 individuals. While superficially knowable from their outward behaviours, the processes going on in those brains at any one time are, again, simply too complicated really to know; nor do they loop over time. Give pupils an identical stimulus on a Monday morning and a Friday afternoon and there is certainly no guarantee that the response will be the same, for all that certain crude patterns might be identifiable. Even in one lesson, the permutations of motives, intentions and preferences, let alone the interactions between them, are simply too many to count, let alone explain or predict. Admittedly, one might eventually start to identify crude patterns (a.k.a. getting to know your pupils), but even that is hardly a reliable predictor of what will make them learn on any particular day.

This is why it is impossible to derive accurate paradigms to explain what works in education. I struggle to think of even one from my three decades in education that has advanced the process more than slightly, in a way that has not later been discredited by people claiming the opposite. The supposed lack of rigour that so many people bust their guts trying to overcome is inherent in the activity.

For a start, there are too many variables we don’t normally consider – here is a list taken from reading for my CPD session:

  • Framing dilemma (the response depends on the parameters with which the action is defined)
  • Historical fallacy (the past is not a good predictor of the future, as time doesn’t loop)
  • Reliance on ‘common sense’ (which may be neither common nor sense) to predict behaviours
  • The general difficulty of predicting the behaviour of others
  • Difficulty of identifying the motives of others
  • Default assumptions that reinforce undesired behaviours
  • The choking effect of targets and rewards on motivation
  • The weakness of remote decision-making

Andrew Old is right to level the greatest criticism at those who claim the greatest rigour for their findings, and who should know much better – but many of us lesser souls are just as guilty of under-estimating the complexity of that in which we seek simplistic order. We do it even in a task such as devising a lesson plan – and we overcome it in the way people do all of the time: heuristically. Teaching a lesson is a bit like driving home afterwards: you know the tool you are going to use (which, being inanimate, you assume will work), you know your destination and you have an idea of the route. However, the actual act of travelling can at any moment be subverted by any one of many factors that are almost impossible to predict accurately, ranging from a breakdown to bad weather, from unexpected queues to simply crazy driving. In each case the only way of making progress is to respond to each event iteratively, in real time. Teaching is pretty much the same. It’s more like playing chess than doing Sudoku.

This is why so much educational theory and research is simply a waste of time. It’s not that we shouldn’t seek to understand processes that might improve teaching, but simply that many people seem to be looking in completely the wrong place.

We need not fear anecdote – but let’s rename it ‘Experience’. A teaching leader of my former acquaintance was adept at dismissing anything he disagreed with as “mere anecdote” – which he then went on (in his own eyes) to trump with his own ideas, most of which were also drawn from – yes – anecdote. The reason is this: successful teaching is mostly founded on anecdote – the lessons learned from real experiences that went before. Indeed, the truly valid corpus of collective knowledge in teaching is also based on accumulated anecdote. That many people attempt to pass it off as more than that does not change the fact. And it need not be otherwise: in the field of human interaction, prior experience (a.k.a. anecdote) is our best guide and a perfectly legitimate methodology – not so that we can mindlessly ape what we did before, but so that our understanding and judgement can be slowly refined. This in turn will better attune our instincts in future situations.

What’s more, if we are trying to establish education as a science in order to legitimise its professional status, it is worth bearing in mind that this is also a false idol. What defines other professions is not really their rigorous scientific frameworks, but the iterative and artful skill of the individuals who operate within them, be they barristers mounting a defence or doctors diagnosing a disease (medicine notoriously remains as much an art as a science).

Anecdote is also the best torpedo for the latest annoyingly-fashionable idea. This is not wantonly destructive, but simply the realisation that social reality is so complicated that all one has to do to sink such an idea is select an alternative, apparently contradictory (but equally valid) experience, grounded in different objectives or priorities. Fundamentally, who knows which is correct? That applies universally, so it is not a partisan point.

All this wouldn’t be so serious if it wasn’t actually heavily influencing how people teach, and what they expect it to achieve. From what I can see, virtually all the ‘research’ – and much of the theory – starts from one or another implicit value-judgement about what education is and what it should seek to achieve. Researchers build on this by imposing partisan and subjective views of what, for instance, constitutes ‘success’. But they rarely define such terms, and all one has to do to demolish their conclusions is to disagree with the initial premise (on whatever grounds, well-founded or otherwise); there really is no simple, universal objectivity about education to start from.

Worryingly, such a theory-heavy approach seems increasingly to be dictating how teachers respond to their pupils, and indeed how they plan and deliver their lessons. I will repeat: the reason this doesn’t work is that people are too complicated to systematise. Consequently it may even be that this over-reliance on dogma is actually hampering our abilities to respond closely to our pupils’ real moment-by-moment needs: it is no good fitting the person to the theory – it doesn’t work.

This may again sound like a very defeatist viewpoint; indeed I have even been accused of educational anarchism in the past, but nothing could be further from the truth. The fact that we can’t understand or predict these phenomena doesn’t mean we can’t deal with them. In fact, we would do a better job without all the ideology. We would be left with the simple need to respond to each and every moment as it arises, in whatever seems the best way possible at the time. That should keep us busy for now. Experience and empathy would be critical, as would a degree of opportunism honed with experience. The real skill would be in reading the currents and learning to capitalise on the helpful ones.

The irony is, I suspect this is actually what happens the majority of the time in classrooms anyway. It’s just that the theorists (and increasingly the profession) can’t see it. Or they feel that in some way such an apparently unstructured approach isn’t professional enough. But good teaching is still more of an art than a science, rightly so for an activity whose major domain is the realm of the psyche, and thank goodness for that – any general scientific law would remove much of what’s rewarding about it.

All that you really need to be a successful teacher is a liking for young people, the ability to empathise and communicate with them, and knowledge of what you’re covering and where you’re heading. Any abstractions we might need would be much more helpful if they were descriptive rather than prescriptive. Anything else is pseudo-scientific hogwash.

That is why my blog lays claim to be nothing more than the gathered reflections of a reasonably experienced teacher. In my dreams, people might find a little to use in them. If anything it is anti-theoretical: laying down any kind of Law (as opposed to anecdotal guidelines) is doomed to failure. As with driving home, the best we might reasonably expect from theory is a rough idea of the tools to use and the route you might consider taking. You might learn that it’s better to change route than sit in a jam – and never to lose sight of your destination (which is, of course, different for each of us). More than that and it simply becomes an unhelpful attempt to second-guess the unknowable.

And it’s also why all those Saturday morning edu-bloggers busy looking for the General Law of Successful Education really would, in my humble opinion, be better off spending their time doing something else. Like learning to surf the human-present, in the company of their partners, families or friends – and taking the lessons of real human interaction into the classroom.

I smell a rat.

To control rat infestation, French colonial rulers in Hanoi at the turn of the twentieth century passed a law: for every dead rat handed in to the authorities, the catcher would receive a reward. End result: more rats.

At a fee-paying school in Israel, teachers were exasperated with pupils being collected late by their parents. So they introduced a charge that was added to the school fees for every late collection. End result: late collections increased.

John Tomsett, a head teacher in York, blogged recently that the instigation of formal intervention tactics with exam classes resulted in a decline in pass rates. The removal of the interventions had the opposite effect.

In the first case, people bred rats specifically to hand in, in order to claim the reward – and presumably some escaped. In the second, parents now felt justified in turning up late, since they were paying to do so. In the third, Tomsett discovered that the pupils were becoming more and more reliant on teachers to do everything for them – what he calls ‘Learned Helplessness’ – and were consequently doing less revision themselves.

I am currently reading “The Art of Thinking Clearly” by Rolf Dobelli – an excellent study of cognitive biases and other ways in which reality double-crosses us. Apparently the first-named example, which comes from the book, is called the Incentive Super-Response Tendency. There are many others that provide much food for thought. So far, I have been particularly struck by the entries relating to the unsuccessful use of targets – but then, so would I be: Confirmation Bias is another double-cross!

As the saying goes, “Be careful what you wish for” – and be even more careful what effects the result of your latest ‘initiative’ will actually have.