Sunday, 27 September 2015

The School Research Lead: Are you a positive deviant?

Are you - the School Research Lead - a positive deviant?  If not, you need to become one, or at least help others to become one.  The underlying premise of positive deviance is that in every organisation there are certain individuals whose uncommon practices and behaviours enable them to find better solutions to problems than colleagues who have access to the same resources.  So in schools, there are certain teachers whose uncommon teaching practices and behaviours enable them to find better solutions than colleagues who work in the same school and have access to the same resources.  School research leads will therefore need to develop evidence-informed behaviours and practices which bring about improved pupil outcomes in comparison with other colleagues within the school.

In this post we will look at the work of Atul Gawande and his 2007 book - Better: A Surgeon's Notes on Performance - and his recommendations on how to be a positive deviant.  Gawande is the author of a number of best-selling books, including The Checklist Manifesto and Being Mortal: Illness, Medicine, and What Matters in the End, and is a surgeon in a Boston hospital, a Professor at Harvard Medical School and a writer for the New Yorker.

So what does it look like to be a positive deviant?

Gawande has five suggestions for becoming a positive deviant:

1. Ask an unscripted question

On a day-to-day basis teaching involves hundreds of individual interactions with both pupils and colleagues.  Some of these interactions may be scripted, through the use of lesson plans, schemes of work or carefully planned meeting agendas.  With this in mind, one would hope that there would be plenty of time to ask unscripted questions to find out what's really going on for our pupils and colleagues.  Indeed, finding out what's really going on for our pupils is an essential part of a number of processes - spirals of inquiry and strategic inquiry - which should form part of the school research lead's repertoire.

2. Don't complain

Throughout my own career in education, there were plenty of things I complained about.  I'm sure that many of you will agree with me that when teachers get together to talk about the profession, the tendency is to talk about what is not working rather than what is.  However, as Gawande states: Resist it. It's boring, it doesn't solve anything, and it will get you down.  You don't have to be sunny about everything. Just be prepared with something else to discuss: an idea you read about, an interesting problem you came across .... just keep the conversation going.

3. Write something

Gawande states: It makes no difference whether you write five paragraphs for a blog, a paper for a professional journal, or a poem for a reading group.  Just write.  Gawande argues that you should never underestimate the impact writing has on both yourself and your world.  He found through his own writing that he maintained his sense of purpose as a medical practitioner, which might otherwise have been lost in the grind of day-to-day work.  He also argues that by writing you make yourself part of a larger world.  Indeed, in the context of teaching and the emerging College of Teaching, this may never have been so important.  From my own personal experience I would add: spend some time learning how to write better (and I know my own writing can be much improved) and read books such as On Writing Well by William Zinsser.

4. Change

Gawande argues that individuals respond to change in one of three ways.  There are early adopters, there are later adopters, and there are some who are permanent sceptics and never adopt new ideas at all.  Gawande argues that you should make yourself an early adopter, although he is not arguing that you should adopt every newfangled idea.  That said, you should be willing to reflect on your practice, identify what could be working better and find new ways of working.  As Gawande says: As successful as medicine is, it remains replete with uncertainties and failure.  This is what makes it human, at times painful, and also worthwhile. (p257)  For me, the same could so easily be said of education.

5. Count something

Count the number of students who find a particular topic difficult.  Count the number of students who are having difficulty meeting homework deadlines.  Count the number of Y7 pupils who have had difficulties in English and maths, and which feeder school they come from.  As Gawande states: If you count something interesting, you will learn something interesting (Gawande, 2007, p255).  So ignore David Didau's advice to resist, with all your might, the temptation to slap numbers on to your idea in an attempt to justify why it's good; this is cargo cult science.
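Counting need not be sophisticated. As a minimal sketch (the pupil records and feeder school names below are entirely invented for illustration), a few lines of Python are enough to tally Y7 difficulties by feeder school:

```python
# Illustrative only: invented records of Y7 pupils flagged as having
# difficulties in English or maths, tallied by feeder school.
from collections import Counter

pupils = [
    {"name": "A", "feeder": "Hillside", "difficulty": "English"},
    {"name": "B", "feeder": "Riverside", "difficulty": "maths"},
    {"name": "C", "feeder": "Hillside", "difficulty": "maths"},
    {"name": "D", "feeder": "Hillside", "difficulty": "English"},
    {"name": "E", "feeder": "Parkway", "difficulty": "English"},
]

# Counter tallies how many flagged pupils came from each feeder school
by_feeder = Counter(p["feeder"] for p in pupils)

for school, count in by_feeder.most_common():
    print(f"{school}: {count}")
```

A spreadsheet would do the same job; the point is simply that the counting itself is the easy part, and choosing something interesting to count is the hard part.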

But education is not medicine, I hear you say.

And I agree, education is not medicine, but that doesn't mean we cannot learn from medical practitioners and how they go about their work.  As is often said: a teacher from the late 19th century would recognise today's classrooms, but would a 19th-century surgeon recognise today's operating theatres?  In other words, over the last 100 years, which of the two professions has made the most progress?

Some final words

I'm going to end this post with Gawande's own closing words.  To be a school research lead or evidence-informed teacher who is a positive deviant ... find something new to try, something to change.  Count how often you succeed and how often you fail.  Write about it.  Ask people what they think.  See if you can keep the conversation going (p257).

Monday, 21 September 2015

The School Research Lead - The Evidence Says It's A Good Idea - Should You Do It?

So you've read the research papers and separated the wheat from the chaff.  You've identified the research papers where it's just old ideas recycled with new terminology; research papers where there are real doubts about the claims being made; research papers where the authors just might be onto something, but the evidence is incomplete.  This leaves research papers proposing changes which might just work, although you are not quite sure whether to adopt them or not.  Drawing once again upon Daniel Willingham's 2012 book - When Can You Trust the Experts? How to tell good science from bad in education - we will look at what can be done to help make a good and wise decision on whether to proceed with the change.  In doing so, we will consider the following:
  • the factors to take into account even if there is good evidence that the change is scientifically sound;
  • a check-list to be completed before adopting a change;
  • Willingham on education and evidence-based medicine.
Factors to take into account even if there is good evidence that the change is scientifically sound

Having read the research papers, you think you've come across an intervention or change for which there is a sound evidence base and which you think might be worth adopting.  However, just because the evidence base appears to be sound, that in itself is not a good enough reason to adopt the intervention or change.  There are a range of other factors which need to be taken into account before deciding whether or not to proceed.  Willingham identifies four factors which are worthy of consideration.
  • Implementing a Change is likely to incur a cost in time, energy and other resources.  Even if you believe that the promised benefits will accrue, you must weigh them against the anticipated costs.
  • Any Change you adopt brings opportunity costs.
  • A Change may work as described but may have negative side effects.
  • A Change could directly impact upon others.
A check-list to be completed before adopting a change

Willingham, drawing upon the work of Atul Gawande, has developed a 10-point check-list which is to be completed before adopting a change.
  1. The thing I'm hoping to change is ....
  2. The way I can see that things change (in other words, what I'm going to measure) is ...
  3. I've measured it before I start the Change, and the level is ....
  4. I'm also going to measure ....... It probably won't be helped by the Change, but you never know.
  5. The Change could have some negative effects.  I'm most suspicious that it might influence ..... To be confident about whether or not it does, I'm going to measure ......
  6. Here's how often I plan to collect the measurements, and the circumstances under which I'll do so: ......
  7. My plan to keep these data organised is ....
  8. The date by which I expect to see some benefit of the Change is ....
  9. The size of the benefit I expect to see on that date ....
  10. If I don't observe the expected benefit on that date, my plan is to ...... (Willingham, p 217)
At first glance, this may appear to be a daunting check-list.  On the other hand, as Woody Allen is often misquoted as saying, 'Eighty percent of success is showing up', so just asking these questions will get you most of the way towards success.

Willingham on education and evidence-based medicine

In writing a blog post, or any other publication, it's quite easy to 'cherry-pick' and only quote 'authorities' when they agree with you.  In a number of blog posts I have taken the stance that education has much to learn from evidence-based medicine, so it's important to note Willingham's reservations about making undue comparisons between education and medicine.  First, Willingham argues that in medicine there is a single goal shared by both doctors and patients, i.e. good health, whereas in education there is a far greater and more diverse range of goals, aims and objectives.  Second, the deterministic model of science is relevant to medicine - well-evidenced treatments for patients will probably work - whereas in education well-evidenced interventions may only possibly work.

Willingham goes on to argue that architecture serves as a better comparison than medicine.  Architects have to manage multiple competing needs - functionality, form, the environment.  Furthermore, their decisions are informed by the 'science' of building, though not determined by it, in that it sets the overall conditions for any design decision.  As such, educational research should set the boundaries within which teachers make decisions about teaching and learning.

My own view on this matter, for what it is worth, is that whilst accepting the limitations of applying a strict interpretation of evidence-based medicine within education, there is still a huge amount to be learnt from doctors, nurses and other health practitioners about how to make best use of the available evidence.  For me, applying the processes associated with evidence-based medicine is likely to lead to you being a more effective evidence-informed educational practitioner.

Some final words on this series of posts

This is the last of five posts inspired by When Can You Trust the Experts: How to tell good science from bad in education.  As Daniel Willingham himself argues, the four steps outlined will not make you an expert in any particular area, as the steps are themselves a heuristic, a short-cut or work-around to help you get round that lack of expertise.  However, just because gaining expertise is a difficult and time-consuming process does not mean that you cannot be informed about the changes and interventions that have a reasonable chance of working for your students, and identify those changes which are just BS and are not worthy of you and your pupils.

Sunday, 13 September 2015

The School Research Lead - Getting Better at Analysing What the Experts Write and Say

In previous posts I've written about the need to 'strip, flip and trace' when trying to tell good educational science from bad. In this post, once again drawing upon Daniel Willingham's 2012 book - When Can You Trust the Experts: How to tell good science from bad in education - I will look at a number of steps which evidence-informed teachers and school research leads can take to effectively analyse educational research. The rest of this post will be in three sections:
  • what is the role of experience in analysing research; 
  • Willingham's nine steps in analysing research; 
  • the notion of practical significance.

The role of experience

The true test of friendship is not when you agree with someone, it's when you disagree with them. I've got a huge amount of time and respect for Tom Bennett and for his work (along with Helen Galdin O'Shea) in developing the researchED movement. Unfortunately, in the following quote from his 2013 book - Teacher Proof - I think Tom gets it wrong.

… there are few things that educational science has brought to the classroom that could not already have been discerned by a competent teacher intent on teaching well after a few years of practice. If that sounds like a sad indictment of educational research, it is. I am astounded by the amount of research I come across that is either (a) demonstrably untrue or (b) patently obvious ... Here’s what I believe; this informs everything I have learned in teaching after a decade: Experience trumps theory every time. (Bennett 2013, 57-59)

Willingham argues that informal knowledge can mislead us in two ways: first, when we treat it with unwarranted certainty; second, when we misremember or misinterpret past experience. As Willingham explains:

... 'I know what happens in this sort of situation,' I think to myself. 'My daughter loves playing on the computer. She'll think this reading program is great!' I might be right about my experiences - my daughter loves the computer - but that experience happened to have been unusual; perhaps she loved the two programs that she used, but further experience will reveal that she doesn't love to fool around with other programs. Another reason my experience might lead me astray is that I misremember or misinterpret my past experience, possibly due to confirmation bias. Perhaps it's not that my daughter loves playing on the computer; actually, I'm the one who loves playing on the computer. So I interpret her occasional, reluctant forays onto the Internet as enthusiasm. (p186)

So if teachers cannot rely on their experience to provide guidance on how to proceed, then what are we to do?  Willingham helpfully identifies four steps which can be taken to help manage our experiences:

  • recognise that experience can be fallible but can also be insightful;
  • check out your experience with others - how does it relate to their experience or interpretations?;
  • think of the opposite of what your experience tells you. In other words, if you think of an explanation or possible outcome, try to think of the exact opposite and see whether that is reasonable;
  • actively look for daily examples of events which do not confirm past experience.

Willingham's Nine-Step Approach to Analysing Evidence

Having discussed the role of experience, it is now appropriate to look in more detail at the steps Willingham suggests you take to analyse evidence.  Before we do that, it is necessary to define two terms - the Change and the Persuader.

The Change refers to a new curriculum or teaching strategy or software package or school restructuring plan - generically anything that someone is urging you to try as a way to better educate kids.

The Persuader refer(s) to any person who is urging you to try the Change, whether it's a teacher, administrator, salesperson, or the President of the United States (Willingham, p136).

Willingham's nine steps to analyse evidence are summarised in Table 1.

Table 1 Actions to be taken when analysing evidence (Willingham, 2012 p 205)

Suggested action: Compare the Change's predicted effects to your experience, but bear in mind whether the outcomes you're thinking about are ambiguous, and ask other people whether they have the same impression.
Why: Your own accumulated experience may be valuable to you, but it is subject to misinterpretation and memory biases.

Suggested action: Evaluate whether or not the Change could be considered a breakthrough.
Why: If it seems revolutionary, it's probably wrong.  Unheralded breakthroughs are exceedingly rare in science.

Suggested action: Imagine the opposite of the outcomes for the Change that you predict.
Why: Sometimes when you imagine ways that an unexpected outcome could happen, it's easier to see that your expectation was short-sighted.  It's a way of counteracting confirmation bias.

Suggested action: Ensure that the evidence is not just a fancy label.
Why: We can be impressed by a technical-sounding term, but it may mean nothing more than an ordinary conversational term.

Suggested action: Ensure that bona fide evidence applies to the Change itself, not something related to the Change.
Why: Good evidence for a phenomenon related to the Change will sometimes be cited as if it proves the Change.

Suggested action: Ignore testimonials.
Why: The person believes that the Change worked, but he or she could easily be mistaken.  You can find someone to testify to just about anything.

Suggested action: Ask the Persuader for relevant research.
Why: It's a starting point to get research articles, and it's useful to know whether the Persuader is aware of the research.

Suggested action: Look up research on the Internet.
Why: The Persuader is not going to give you everything.

Suggested action: Evaluate what was measured, what was compared, how many kids were tested, and how much the Change helped.
Why: The first two items get at how relevant the research really is to your interests.  The second two items get at how important the results are.

We now need to turn to the role of practical significance in determining the usefulness of research evidence for your practice.

Practical significance

In reading research articles you will come across the terms statistical significance and effect size.  Coe (2002) argues there is a difference between significance and statistical significance. Statistical significance means that you are justified in thinking that the difference between the two groups is not just an accident of sampling. Effect size, on the other hand, is a way of measuring the extent of a difference between two groups (Higgins et al, 2013). If we combine both effect size and statistical significance, this helps give a sense of the practical significance of a change or intervention.

If the confidence interval includes zero, then the effect size would be considered not to have reached conventional statistical significance. The advantage of reporting effect size with a confidence interval is that it lets you judge the size of the effect first and then decide the meaning of conventional statistical significance. So a small study with an effect size of 0.8, but with a confidence interval which includes zero, might be more interesting educationally than a larger study with a negligible effect of 0.01, but which is statistically significant. (Higgins et al, p6)
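Higgins et al's point can be made concrete with a short calculation. The sketch below (the test scores are invented, and it uses the standard textbook formulas for Cohen's d and the approximate standard error of d, not figures from any study cited here) computes an effect size and its 95% confidence interval for two small groups:

```python
# Illustrative only: invented scores for a small "treatment" group
# (pupils receiving the Change) and a "control" group.
import math
from statistics import mean, stdev

def cohens_d(treatment, control):
    n1, n2 = len(treatment), len(control)
    # Pooled standard deviation of the two groups (stdev is the
    # sample standard deviation, i.e. n-1 denominator)
    pooled_sd = math.sqrt(((n1 - 1) * stdev(treatment) ** 2 +
                           (n2 - 1) * stdev(control) ** 2) / (n1 + n2 - 2))
    d = (mean(treatment) - mean(control)) / pooled_sd
    # Approximate standard error of d, hence an approximate 95% CI
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, (d - 1.96 * se, d + 1.96 * se)

treatment = [62, 70, 68, 75, 66, 71, 64, 73]
control = [60, 65, 63, 68, 61, 66, 59, 64]

d, (lo, hi) = cohens_d(treatment, control)
print(f"d = {d:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

With only eight pupils per group the interval is very wide, which is exactly the Higgins et al point: the size of the effect and the precision with which it has been estimated are two different questions, and both matter for practical significance.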

In other words, as Willingham states, practical significance refers to whether or not that difference is something you care about (p203). And, as such, it requires you, the reader, to make a judgement call. Making judgement calls about research evidence which you are not sure about is never easy. However, Willingham suggests three approaches to tackle the issue:
  • Make a mental note that you think the research may be of practical significance.
  • If you have the opportunity, raise the matter with the Persuader.
  • Ask how the practical significance of the Change relates to your goals and what you are trying to achieve. Is the improvement being offered consistent with your objectives and the resources available to achieve them?
The next step

Having flipped, stripped, traced and analysed the evidence, the next step is to consider whether the change should be adopted, and that will be the focus of a forthcoming post.


Bennett, T. (2013) Teacher Proof: Why research in education doesn't always mean what it claims, and what you can do about it. London: Routledge.
Coe, R. (2002) It's the Effect Size, Stupid: What effect size is and why it is important. Paper presented at the Annual Conference of the British Educational Research Association, University of Exeter, England, 12-14 September 2002.
Higgins, S., Katsipataki, M., Kokotsaki, Coe, R., Elliot Major, L. and Coleman, R. (2013) The Sutton Trust-Education Endowment Foundation Teaching and Learning Toolkit: Technical Appendices.
Willingham, D. (2012) When Can You Trust the Experts: How to tell good science from bad in education. San Francisco: Jossey-Bass.

Friday, 4 September 2015

researchED London 2015 and knowing when to trust the experts

In 19th-century North America, salesmen travelled across the continent promising wonder cures for all manner of ailments.  However, once the 'miracle cure' was purchased and the salesman was on to his next town, his customers soon found that they had been hoodwinked and the miracle cure was nothing of the kind.  Snake oil was one of the most commonly sold miracle cures, and the term 'snake oil' is still used today to describe so-called miracle cures.  Indeed, in 21st-century England, there are still more than occasional sightings in schools of snake-oil salesmen, this time in the form of consultants peddling the latest intervention which will transform pupil learning.  And despite Tom Bennett and Helen Galdin O'Shea's best efforts, one or two may have sneaked into researchED London 2015.  I really hope they haven't - but just in case they have - I've written this post.

So what are we to do?  Fortunately, Daniel Willingham, in his 2012 book When Can You Trust the Experts: How to tell good science from bad in education, provides us with guidance on how to work out whether to believe what a so-called expert (consultant, ex-headteacher, educational researcher) says about a subject just because of his or her authority.  In particular, Willingham identifies a number of ways you can go wrong when you trust an authority.  But first, let's look at the structure of the argument we make when we believe an authority.

What happens when we believe an authority

Adapting Willingham (p170): say you - the researchED delegate - don't understand the science behind an idea or claim made by an educational researcher, expert or researchED speaker.  But you believe that the educational researcher does.  You know that the educational researcher is saying the research evidence supports the idea or claim being made.  So you - the researchED delegate - without understanding the underpinning research, trust that the evidence supports the claim.  In other words, because the educational researcher or researchED speaker is an expert on the evidence, when the researcher says something about the evidence, you - the researchED delegate - are more likely to believe it.

However, there are a number of situations when this argument can go wrong - again adapted from Willingham.
  • What we take to be signs of authority - academic pre-eminence, experience, a Twitter profile or an education column - turn out not to be very reliable, and the person is not, in fact, scientifically knowledgeable.
  • We might arrive at a false belief because we misunderstand the position taken by the academic, tweeter, consultant or columnist.
  • The 'authority' might be knowledgeable, but err by taking up a position on a topic outside his or her area of expertise.
  • Two equally credible authorities might disagree on an issue, leaving it unclear what to do and how to proceed. (adapted from Willingham, p178)
The School Research Lead's dilemma

This can best be summarised by F. Scott Fitzgerald, who said: 'The test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time and still retain the ability to function.'  In this case the two opposing ideas are: I do not possess the expertise of an educational researcher, so I need to rely on those who do; on the other hand, just because someone appears to be a credible expert, there are plenty of reasons not to believe such experts and authorities.  So what are we to do?  Unfortunately, and there's no other way round it, you the researchED delegate need to evaluate the strength of the evidence yourself.  This does not mean you have to become an instant expert in either qualitative or quantitative methods, but it does mean you have to have a list of questions which help you unpack and understand educational research.  And in the case of researchED 2015, make sure any speaker you listen to references a paper, book or article where you can really unpick and analyse their argument.  If they haven't got one, then it's probably best to move on to someone who has.

Willingham, D. (2012) When Can You Trust the Experts: How to tell good science from bad in education. San Francisco: Jossey-Bass.