Hamlet consists of 32,241 words and lasts precisely four hours and 48 minutes on stage.
The Mona Lisa consists of precisely 4,081 square centimetres of poplar panel and is 508 years old (depending on when you accept it was finished).
The above clearly shows that both items mentioned are Great Things.
Well, I’ll admit that I haven’t actually made those measurements myself, but that is rather beside the point. They are nonetheless measurable attributes of the great works of art concerned, and it would in principle be possible to go and verify them. I just don’t have the time right now.
And I have yet to meet anyone who thinks that those works’ greatness actually lies in their word count, performance length, surface area or age.
There are many kinds of knowledge, and they all have their uses, if applied wisely. In the sciences, we prefer to look for material properties and direct causality, where possible supported by quantitative evidence; that, we believe, is where ‘truth’ lies. But you cannot employ the same approach with a creative endeavour like a painting or a play or a piece of music. It is certainly possible to take measurements of these things, as shown above – but in no way does this capture what is really important about those works, the qualities for which we value them. In fact it may have the opposite effect. When considering this issue, I am always reminded of this clip, the first part of which sums up the matter admirably:
In recent days, there seems to have been a surge of discussion about evidence in education, and it is beginning to spook me somewhat. I do have some sympathy for the source of this development; after all, there have been several books published recently purporting to show what happens when unevidenced practices become the norm in teaching. This is where the whole of the constructivist-progressive agenda came from, and I am not in any way opposed to the backlash that seems to be gathering pace against it. All my experience suggests that many such criticisms are right.
The reason I am concerned is that I think there is a risk that ‘evidence’ will simply become the next Doctrine to sweep education like most fads seem to. In which case it presents some serious threats:
The first one is that it will effectively gag anyone who does not have access to ‘evidence’ of an acceptable type – and where that means research evidence, then that will be a great number of thoughtful, experienced people. There already seem to be a few people starting to take that line and it may lead to…
…the second, which is that it will, as a result, lead to yet another autocracy based on an arbitrary definition of what ‘acceptable’ evidence is.
And the third is that despite its (I hope) honest quest for reason, it will prove to be just as ill-founded as all those previous orthodoxies. The reason for this lies at the top of the page: there remains an irresolvable problem with the nature of ‘evidence’, namely that it only works if 1) you know (and have agreed) what you are looking for, against which to test efficacy, and 2) you have chosen information that is appropriate for the nature of the subject-matter or discipline you are considering.
I will deal particularly with these last two points, which seem to me to be the critical ones.
Evidence is used to support judgments about the efficacy or otherwise of a particular course of action. In order to judge this, you need to know what the desired outcomes are. Unless you are willing to go down the reductivist route of considering educational outcomes to be simply exam grades, this presents the age-old problem of trying to define what education is and what it is ‘for’. In my view there simply can be no single or simple answer to this. The definition that works best for me is ‘giving context to lives’ – and this is so broad and ill-defined as to make benchmarking against it meaningless. Yet to adopt anything more prescribed is to narrow the scope of what we do.
The other part of this problem is time frame. In order to judge ‘success’ you need to have a cut-off point in mind; the only cut-off point for the effect of education in people’s lives is death – after which it arguably doesn’t matter much anyway. So you can never say that something ‘worked’ or not until that person’s life is at an end. For example, it took me thirty years fully to retrieve and understand the precepts of economics – which I really didn’t understand as a sixth-former – so it might be argued that all this time later, that teaching actually ‘worked’ after all. It is reasonable to assume that evidence for that would never come to light.
It is also worth observing at this point that in many cases evidence for success depends on knowing the outcomes of opportunity-cost situations – and it simply isn’t ethical to use groups of pupils as experimental controls to see what happens to them when you deprive them of education, for instance. So this makes it all the harder – to the point of impossibility – to know what ‘worked’ and what didn’t, no matter what evidence we might have gathered. And of course we can never know what might have worked even better, because unlike in pure science, it is not even moderately reasonable to assume that what happened last time will happen next time.
My second key point concerns the appropriateness of the evidence and supporting data. Teaching simply does not work like a pure science, partly for the reasons just discussed; there is also the small matter of the huge causal density of human behaviour, which makes it all but impossible to identify what causes what with any confidence. I’m quite glad about that – I don’t want to live amongst a race of robots. This is why hard ‘evidence’ on educational matters can never really be had – and even where measurements can be taken, just as with Hamlet and the Mona Lisa, they give us little insight into the true value of those phenomena, which (like all creative endeavours) exist almost totally in the realm of unique and irreducible human interactions and experiences. For all its limitations, only qualitative understanding can capture that.
Consider the following extract from the book I am currently reading, Flow by Mihaly Csikszentmihalyi:
“Adolescents who never learnt to control their consciousness grow up to be adults without a discipline. They lack the complex skills that will help them survive in a competitive information-intensive environment. And what is more, they never learn how to enjoy living. They do not find the habit of finding challenges that bring out hidden potentials for growth.”
Just how much confidence can we have in this statement? There is no factual data supporting it, nor could there easily be, given the nature of the conclusions being drawn. We simply can never gather enough data to establish it beyond doubt. Yet I suspect that this quotation will nonetheless resonate with many teachers, and it is probably a fair reflection of reality. It derives purely from considered and exchanged experiences of people who may well have been sampling adolescent lives for considerable periods – but qualitatively rather than with numbers. It still remains a helpful piece of information, if used in the correct way – by which in part I mean remembering that we should not overdraw its conclusions. Given the diversity of life, we cannot say more than this – but that does not diminish its usefulness. In fact, it is far more useful to teachers than spurious data that really cannot tell us much of use, because such data are an inappropriate type of information for this field.
Much of the concept of ‘evidence-based practice’ comes from medicine, and there are many in education who would like to ape its approach. But:
- Pain cannot be measured except by a patient’s subjective report.
- Neither can changes in it.
- Different patients may opt for different treatments for the same ailment depending on their personal circumstances and preferences.
- The quality of care provided by medics can only be assessed qualitatively.
- Even drug trials typically claim only 95% confidence – drugs don’t work for all people, all the time, in the same way.
- The placebo effect can distort results, and is entirely qualitative.
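It is worth pausing on what that 95% figure actually tolerates. A minimal sketch, using entirely invented numbers (a hypothetical treatment that genuinely does nothing, a naive pass/fail criterion), shows that at the conventional 5% significance level a fair proportion of do-nothing treatments will still look like successes purely by chance:

```python
import random

random.seed(42)

def fake_trial(patients=100, success_threshold=59):
    # Hypothetical: the treatment does nothing, so each patient
    # recovers with the same 50% probability they would have anyway.
    recoveries = sum(random.random() < 0.5 for _ in range(patients))
    # Naive criterion: declare the treatment a "success" if recoveries
    # clear a bar set roughly at the 5% false-positive level.
    return recoveries >= success_threshold

# Run 1,000 trials of a treatment known to be useless.
false_positives = sum(fake_trial() for _ in range(1000))
print(f"{false_positives} of 1000 useless treatments looked effective")
```

On the order of a few dozen of the thousand trials will pass – which is exactly what ‘95% confidence’ permits, and why even well-evidenced medicine still leans on judgement.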
Much medical practice is a matter of judgement – there are plenty of people killed by afflictions that do not present with any ‘evidence’ at all, and others who have precious bits of their anatomy lopped off because the evidence suggested something dire that later proved not to be the case. And there are cases where a medic’s intuition proves to be the critical factor in diagnosis.
If we are worried about comparisons with medicine, we would do well to remember that it is not only medics’ technical expertise that defines their profession: in some ways it is much more about the qualitative decisions and judgments they make – and that is in many ways a far more skilled task. What’s more, even in the much more precise field of medicine, they don’t always get it right, either.
Evidence-based approaches may have some use in moving teaching forward – but we need to be extremely careful about what we admit as evidence. We also need to be very careful indeed when it comes to over-reliance on statistics. As Old Andrew wrote elsewhere, even a single example of something is enough to demolish a sweeping generalisation.
And in that sense, individual experience is valuable.