Thursday, 31 December 2015

Evidence-informed practice: Some common misconceptions - Part Two

In my recent blog post for BERA I identified three of the most common misconceptions about evidence-based/informed practice. In this post, adapting the work of Barends et al (2014) for a school context, I will highlight a further five misconceptions which get in the way of effective evidence-informed practice within schools. In doing so, I hope this will contribute to: better outcomes for pupils; improved opportunities for teacher professional learning; and the better use of increasingly scarce financial resources. So let's now turn to these additional misconceptions.

Misconception 4: Evidence-based practice is all about numbers and statistics.

Evidence-based practice involves seeking out and using the best available evidence from multiple sources. It is not exclusively about numbers, quantitative data or randomised controlled trials, although many practice decisions involve figures of some sort. You do not need to become a statistician to undertake evidence-based practice, but it does help to have an understanding of basic statistical concepts that are useful for critically evaluating some types of evidence. The principles behind concepts such as sample size, statistical versus practical significance, confidence intervals and effect sizes can be understood without any mathematics.
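For those who do want to see the mechanics, these concepts can be illustrated in a few lines of code. The sketch below uses entirely made-up scores for two hypothetical classes and computes an effect size (Cohen's d) and an approximate 95% confidence interval for the difference in class means, using the normal-approximation critical value of 1.96 for simplicity; the underlying point is that a difference can be statistically detectable yet still modest in practical terms.

```python
import math
import statistics

# Hypothetical end-of-term scores for two classes: one taught with a new
# intervention, one without. All numbers are illustrative.
intervention = [68, 72, 75, 70, 74, 71, 69, 73, 76, 72]
comparison = [65, 70, 68, 66, 71, 67, 69, 64, 70, 68]

n1, n2 = len(intervention), len(comparison)
m1, m2 = statistics.mean(intervention), statistics.mean(comparison)
s1, s2 = statistics.stdev(intervention), statistics.stdev(comparison)

# Effect size (Cohen's d): the difference in means expressed in units of
# the pooled standard deviation, so results are comparable across studies.
pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
cohens_d = (m1 - m2) / pooled_sd

# Approximate 95% confidence interval for the difference in means
# (normal approximation; a t-distribution would be more exact for small n).
se_diff = math.sqrt(s1**2 / n1 + s2**2 / n2)
ci_low = (m1 - m2) - 1.96 * se_diff
ci_high = (m1 - m2) + 1.96 * se_diff

print(f"difference in means: {m1 - m2:.2f}")
print(f"effect size (Cohen's d): {cohens_d:.2f}")
print(f"95% CI for the difference: ({ci_low:.2f}, {ci_high:.2f})")
```

The effect size, not the raw difference, is what allows a practitioner to ask whether a result is large enough to matter in the classroom.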

Misconception 5: School Leaders need to make decisions quickly and don’t have time for evidence-based practice.

Sometimes evidence-based practice is about taking a moment to reflect on how far the evidence you have can be trusted. More often it is about preparing yourself (and your school) to make key decisions well – by identifying the best available evidence you need, preferably before you need it. Some management decisions do need to be taken quickly, but even split-second decisions require trustworthy evidence. Making a good, fast decision about when to evacuate a school or how to deal with a critical incident requires up-to-date knowledge of emergency procedures and reliable measures providing trustworthy evidence. When important decisions need to be made quickly, an evidence-based practitioner anticipates the kinds of evidence that quality decisions require. The need to make an immediate decision is generally the exception rather than the rule. The vast majority of school management decisions are made over much longer time periods – sometimes weeks or even months – and often require the consideration of legal, financial, strategic, logistical or other school issues, which all takes time. This provides plenty of opportunities to collect and critically evaluate evidence about the nature of the problem and, if there is a problem, the decision most likely to produce the desired outcome. For evidence-based practice, time is not normally a deal breaker.

Misconception 6: Each school is unique, so the usefulness of scientific evidence is limited.

One objection practitioners have to using research evidence is the belief that their school or classroom is unique, suggesting that research findings will simply not apply. Although it is true that schools and classrooms do differ, they also tend to face very similar issues, sometimes repeatedly, and often respond to them in similar ways. Peter Drucker, a seminal management thinker, was perhaps the first to assert that most management issues are ‘repetitions of familiar problems cloaked in the guise of uniqueness’. The truth of the matter is that it is commonplace for schools to have myths and stories about their own uniqueness. In reality they tend to be neither exactly alike nor unique, but somewhere in between. Evidence-based practitioners need to be flexible enough to take any such similar-yet-different qualities into account. A thoughtful school leader, for instance, might use financial incentives for business development staff but reward knowledge workers (teachers) with opportunities for development or personally interesting projects, knowing that financial incentives tend to lower the performance of knowledge workers while increasing the performance of less-skilled workers.

Misconception 7: If you do not have high-quality evidence, you cannot do anything.

Sometimes there is very little or no quality evidence available. This may be the case with a new management practice or the implementation of new technologies. In some areas the school’s context changes rapidly, which can limit the relevance and applicability of scientific and experiential evidence derived in a context different from that of today. In those cases, the evidence-based school leader has no option but to work with the limited evidence at hand and supplement it through learning by doing. This means pilot testing and treating any course of action as a prototype: systematically assessing the outcomes of the decisions we take through a process of constant experimentation, punctuated by critical reflection and evaluation of which things work and which do not.

Misconception 8: Good-quality evidence gives you the answer to the problem.

Evidence is not an answer. It does not speak for itself. To make sense of evidence, we need an understanding of the context and a critical mindset. You might take a test and find out you scored 10 points, but if you don’t know the average or the total possible score, it’s hard to determine whether you did well or not. You may also want to know what doing well on the test actually means. Does it indicate or predict anything important to you and in your context? And why? Your score on the test is meaningless without this additional information. At the same time, evidence is never conclusive. It does not prove things, which means that no piece of evidence can be viewed as a universal or timeless truth. In most cases evidence comes with a large degree of uncertainty. Evidence-based school leaders therefore make decisions not based on conclusive, solid, up-to-date information, but on probabilities, indications and tentative conclusions. Evidence does not tell you what to decide, but it does help you to make a better-informed decision.
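The point about context can be made concrete with a small sketch (the cohort scores below are invented for illustration): the same raw score of 10 only acquires meaning once it is placed within the distribution of scores it came from.

```python
import statistics

# Hypothetical scores for the whole cohort on the same test; your score was 10.
cohort_scores = [4, 6, 7, 8, 8, 9, 10, 11, 12, 15]
my_score = 10

mean = statistics.mean(cohort_scores)
sd = statistics.stdev(cohort_scores)

# A z-score locates the raw score within the cohort's distribution:
# 0 means average, positive means above average.
z = (my_score - mean) / sd

# Percentile rank: the share of the cohort scoring below you.
percentile = 100 * sum(s < my_score for s in cohort_scores) / len(cohort_scores)

print(f"cohort mean {mean:.1f}, sd {sd:.1f}")
print(f"z-score {z:.2f}, roughly the {percentile:.0f}th percentile")
```

Even then, the standardised score only tells you where you sit relative to this cohort; whether the test predicts anything that matters in your context is a separate question the numbers alone cannot answer.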

Some final words

Let's be clear: the jury is still out on the effectiveness of evidence-informed practice within schools, although there are reported benefits to practitioners in engaging in evidence-informed practice (Supovitz, 2015). Fortunately, there are a number of EEF-funded studies – for example, The Rise Project: Evidence-Informed School Improvement; Research into Practice: Evidence-Informed CPD; Research Learning Communities; and Evidence for the Frontline – which are considering the impact of evidence-informed practices on teacher development and pupil outcomes, and these projects will begin to report from 2016 onwards. Nevertheless, we are still waiting for the research evidence on school-based evidence-informed practice, and until that evidence becomes available it is best to proceed with caution and to learn by doing.


Barends, E., Rousseau, D. M., & Briner, R. B. (2014). Evidence-Based Management, The Basic Principles. In Search of Evidence, Center for Evidence-Based Management
Supovitz, J. (2015). Teacher Data Use for Improving Teaching and Learning. In Brown, C. (ed)

Sunday, 20 December 2015

Evidence-based practice and some common misconceptions

Over the last 12 months I have been lucky enough to have the opportunity to read, reflect and write about evidence-based practice and education. The more I have read about the application of evidence-based practice to education, the clearer it has become that there are a number of common misconceptions about evidence-informed practice in circulation. In my view it’s important to eliminate these misconceptions for three reasons: first, these misconceptions prevent practitioners making the most of the potential of evidence-based practice to improve outcomes for pupils; second, by conflating evidence-based practice with research, teachers are being mistakenly encouraged to be ‘researchers’ rather than ‘evidence-based practitioners’ seeking to improve their practice; third, not making good use of evidence-based practice can lead to increasingly scarce resources being wasted.
But before looking in more detail at the most common misconceptions associated with evidence-based practice, it seems appropriate to define the term ‘evidence-based practice’. To help with this task I will use the definition of evidence-based practice put forward by Barends, Rousseau and Briner[1] (2014).

So what are some of the most common misconceptions associated with evidence-based practice within education?

Misconception 1 – Evidence-based practice ignores the expertise and knowledge of teachers and head-teachers.

As Barends et al (2014) clearly state, this misconception is contradicted by the definition of evidence-based practice. Evidence-based practice is about practitioners combining their experience with other sources of evidence in order to make decisions which they hope will bring about better outcomes. This does not mean that research evidence of ‘what works’ automatically overrides the practitioner’s experience, although that expertise needs to be both reliable and valid. In other words, practitioners need to be aware of the limitations of their own experience in the decision-making process, particularly if that experience is not relevant to the task at hand.

Misconception 2 – Evidence-based practice is the same as research-informed practice.

As can be seen from the definition, research-informed practice is a subset of evidence-based practice. Evidence-based practice involves drawing upon evidence from a range of sources, be it ‘academic research’, practitioner experience, organisational/school data or the views of stakeholders. Indeed, there are real risks associated with the term research-informed practice, as it creates the impression that research evidence is more valuable than other forms of evidence. In doing so, it also helps create the conditions whereby critics of EBP can argue that research is being used to tell practitioners what to do.

Misconception 3 – Evidence-based practice involves teachers undertaking research.

Evidence-based practice is not about attempting to create new and generalisable knowledge through the conduct of research. Rather, EBP is about using the best available current evidence to make decisions in order to improve pupil outcomes. That does not mean that teachers do not engage in some form of inquiry or evaluation of their impact on pupil outcomes, but the emphasis is on the improvement of both the teacher’s capacity and capability and pupil outcomes. In other words, EBP is here to help teachers improve, rather than prove. That said, if teachers are to improve as evidence-based practitioners, then a necessary but not sufficient condition will be for teachers to become research literate.

So what are the implications of this analysis for the educational academy?

This discussion suggests a number of implications for HEIs. First, there’s a significant body of academic literature and research on evidence-based practice which appears not to be known to many members of the ‘educational academy’, and this is resulting in ill-informed discussion and debate about the merits of evidence-based practice. Second, one of the roles of HEIs should be to help provide teachers with both conceptual clarity and accurate, accessible and relevant summaries of the relevant literature. Third, a major role for HEIs should be helping teachers acquire the knowledge and skills to be both research literate and critical consumers of research, and that is not the same as training teachers to be researchers. And finally, if HEIs want to help teachers become better evidence-informed practitioners – please open your libraries to local teachers, and let them access the research evidence.

Barends, E., Rousseau, D. M., & Briner, R. B. (2014). Evidence-Based Management, The Basic Principles. In Search of Evidence, Center for Evidence-Based Management

[1] This definition is partly adapted from the Sicily statement of evidence-based practice: Dawes, M., Summerskill, W., Glasziou, P., Cartabellotta, A., Martin, J., Hopayian, K., Porzsolt, F., Burls, A., Osborne, J. (2005). Sicily statement on evidence-based practice. BMC Medical Education, Vol. 5 (1).

This post first appeared on the BERA website in December 2015 

Sunday, 13 December 2015

All I want for Christmas is to find out 'what works?'

There's an old, old joke. A son asks his father if he would buy him an Action Man for Christmas. Christmas morning arrives, and the son goes to the Christmas tree and finds a beautifully wrapped present, which he hopes is his long-wished-for Action Man. The boy quickly rips off the wrapping paper and to his horror finds that his present is just an empty shoe-box. The boy turns to his father and says: 'Dad, what's this?' And the father replies, 'It's an Action Man deserter.'

So what's this got to do with finding out 'what works' in education and applying it within your school or classroom? Well, for me, so much of the discussion around 'what works' in education is missing any meaningful discussion of methodology. So if we want to try and fill this methodological gap, then a useful place to start is 'scientific realism', as it provides us with a framework for moving beyond 'what works?'.

The 'scientific realism' approach has its origins in the work of Pawson and Tilley and their 1997 book 'Realistic Evaluation', which articulates a number of assumptions about the nature of reality, how cause and effect works and the implications for carrying out evaluations. Westhorp (2014) provides a very useful summary of five key ideas underpinning realist evaluation.

1.  Realism asserts that both the material and the social worlds are 'real', at least in the sense that anything that can have real effects is itself real.
2.  Realism acknowledges that all enquiry and observation are shaped and filtered through the human brain and that there is, therefore, no such thing as 'final' truth or knowledge.
3.  Realism argues that all social systems are open systems.
4.  Realism offers a particular understanding of how causation works, i.e. causal outcomes follow from mechanisms acting in context.
5.  Realism provides a specific way of thinking about 'context' - whether a mechanism 'fires' depends upon the context. (Westhorp, 2014, pp. 4-6)

Now, by adopting 'scientific realism', the question is no longer 'what works?' but instead becomes 'how and why does this work and/or not work, for whom, to what extent, in what respects, in what contexts and over what period?'. And by asking these questions, we are more likely to get answers to the questions that matter, i.e. what's likely to work in my school, in my classes and with which students.

This short blogpost cannot hope to cover in any depth the arguments for and against realist evaluation. So instead of asking for a present and getting the metaphorical well-wrapped empty shoe-box - which looks good on the outside but with nothing on the inside - ask for a copy of Pawson and Tilley's 'Realistic Evaluation'. It won't disappoint.

Monday, 7 December 2015

I'm 'dreaming' of a White Christmas and a school I want to work in

If you are interested in developing a great school workplace which will contribute to reducing teacher workload, stress and turnover, then this post is for you. In particular, if you are a senior or middle leader and want to know what staff might want from the ideal workplace, then this post may help you give your colleagues the best Christmas they have ever had: the prospect of a great place to work. To help create this ideal workplace, this post will draw upon Rob Goffee and Gareth Jones's recently published book: Why Should Anyone Work Here? What it takes to create an authentic organisation.

So what do you need to do to make sure someone wants to work in your school?

Using their research, Goffee and Jones have come up with six imperatives for creating the authentic organisation (school), which form the useful mnemonic DREAMS:
  • Difference: I want to work in a school where I can be myself, where I can express the ways in which I'm different and how I see things differently.
  • Radical honesty: I want to know what's really going on in my school.
  • Extra value: I want to work in a school that magnifies my strengths and adds extra value for me and my personal development.
  • Authenticity: I want to work in a school that I'm proud of, one that truly stands for something.
  • Meaning: I want to work in a school where my day-to-day work is meaningful; and
  • Simple rules: I do not want to be hindered by stupid rules, or rules that apply to some colleagues and not others. (amended from Goffee and Jones, 2015, p. 12)
So what are the implications for senior and middle school leaders?

For purposes of discussion, if we set aside arguments about whether research conducted in a business environment can be used in school settings, the six imperatives identified by Goffee and Jones can provide a framework for thinking about how things could be different in schools, in ways which both reduce the number of teachers leaving the profession and increase the number of high-quality new entrants. So here goes:
  • Let teachers express their individuality - acknowledge that there is more than one way of doing something. One lesson plan or scheme of work format may work for some colleagues, but not for all. What matters is whether there is effective planning and pupils are learning. Not every lesson needs three objectives shared with pupils at the start. Remember, a film has a beginning, a middle and an end, but not necessarily in that order.
  • Communicate with staff - let them know what's what regarding examination results. Is improvement the product of 'gamification' or genuine improvement? Be Janus-faced - don't spin the message on the inside; outside the school, yes, the external reputation has to be managed, but not inside the school. If there are performance issues, develop the skills to have genuine and open difficult conversations with colleagues.
  • Invest in staff CPD - find ways to make it a regular event which is planned and structured and makes a difference to the individual teacher. Start by assuming whole-school approaches to CPD are probably wrong - they may not be - but don't start there. Help colleagues build on their strengths and don't expose weaknesses - do the things that work, stop doing the things that don't, and do something else instead.
  • Identify what your school can be proud of - why the school makes a difference to pupils' lives and how individual members of staff can be genuinely proud of the contribution they have made. What is it about your school and learning environment that would make you want to send your own children to your school?
  • Meaning - make sure that what you do makes sense and has meaning and does not rest on some flimsy justification - focus on doing what's right rather than doing things right. Work on being able to demonstrate impact - ensure teachers have the tools to show they have made a difference - help teachers know thy impact.
  • Simple rules - look at school rules and policies, and come up with something better instead. Look at assessment policies and ask the question: how will this practice help both pupils improve and my understanding of them? If it can do neither, scrap it.
So what are the challenges in creating the school of your DREAMS?

Goffee and Jones identify a number of generic challenges to creating the school of your DREAMS:
  • Remember diversity is not the same as difference. Gender, ethnicity, race and religion are hugely, hugely important. But when we talk about difference we are talking about making sure your school can cope with differences in dispositions, perspectives and ways of thinking.
  • Communication is not the same as the weekly school briefing or newsletter, and it shouldn't be about power and controlling the flow of information. The rule should be: if in doubt, share the information.
  • Extra value is not just about supporting teachers, it's about helping everyone who works in the school to become better at their jobs - it's about supporting caretakers, site managers, cleaners, administrators and teaching assistants to become the best they can be.
  • Authenticity is not about constantly re-writing mission and values statements every school year. It's about making sure the school is genuinely connected to its past, and that headteachers behave in a manner consistent with the espoused values. Furthermore, it's about providing teachers with high levels of trust, with those teachers responding positively to that trust.
  • Meaning is not about trying to generate teacher engagement - it's about generating community, connections and a cause. Colleagues need to know why their work matters to others. A contribution to a termly report to Governors does not create a meaningful job.
  • Simple rules - remember this is not about unfettered teacher autonomy, which will just create another set of problems and probably more rules. Rules should be created that make sense to colleagues, and which give them the appropriate discretion to do their jobs.
And finally....

This is not the type of 'research' I normally report on in this blog. Although Goffee and Jones's work is evidence-based - with the rules being derived from interviews, observational data and company workshops - how that evidence is reported in the book is, in my view, inconsistent with the work being described as research. That said, given the challenges of both teacher retention and recruitment, it seemed too worthwhile not to share.

Tuesday, 1 December 2015

The Seven Alternatives to Evidence-Based Teaching - A Christmas Tale

As we approach the end of the autumn term and the month of December, it seems time for a more upbeat and hopefully humorous blog - although one which, I trust, still gets across a serious message. Shamelessly borrowing from the work of Isaacs and Fitzgerald (1999), who wrote a tongue-in-cheek article on the alternatives to evidence-based medicine, I'll use the seven 'evidence-based medicine personality types' they identified to classify types of evidence-based teachers within school staffrooms.

Eminence-based teaching - often associated with seniority, the amount of grey hair and a dismissive attitude to research evidence - which results in the same mistakes being made year after year, but with increasing levels of confidence.

Vehemence-based teaching - sheer loudness and repetition is used as a substitute for evidence, exhibited by colleagues' incessant posting and hectoring on social media, which wears down colleagues' resistance to these loud but ultimately unfounded ideas.

Eloquence-based teaching - these colleagues look the part: smooth, cool and hip - the articulateness and eloquence of their 2,000-word blogposts hiding a complete lack of evidence supporting their well-written and well-argued, but ultimately deeply superficial, point of view.

Providence-based teaching - these colleagues have no idea of what to do next to help disadvantaged pupils heading for a life on benefits - and their inaction merely increases the certainty of an increased welfare bill.

Diffidence-based teaching - these are colleagues who see a problem but do nothing, making no attempt to generate a well-formulated and answerable question which might help solve the problem at hand.

Nervousness-based teaching - these are colleagues who are constantly worried about the next lesson observation or OfSTED inspection - and who are testing, oops, I mean assessing, students at every opportunity to demonstrate the teacher's commitment to both marking and stress-testing the 'bag for life'.

Confidence-based teaching - reserved for headteachers and other members of the senior leadership team who just walk in and door handle a lesson.

These personality types can be scientifically identified by the use of markers, measuring devices and the appropriate unit of measurement.

Again borrowing shamelessly from Isaacs and Fitzgerald, the following table should help you identify the various personality types to be found in your staffroom (and especially last weekend on Twitter).

Table 1: Basis of teaching practice

Basis for teaching decisions | Measuring device | Unit of measurement
Evidence | Has read Visible Learning by John Hattie | Number of Hattie references in normal conversations
Eminence | Radiance of white hair | Optical density
Vehemence | Level of stridency | Number of tweets and unfollows
Eloquence | Still wears a suit and tie | Adhesion score
Providence | Level of expertise | Number of bottom sets taught
Diffidence | Level of gloom; coffee consumption | Number of sighs per coffee
Nervousness | 'Requires Improvement' phobia; every conceivable measure of progress | Number of nights per week seen taking home the 'bag for life' full of marking
Confidence | – | No sweat

And me

Over the course of my career I have clearly had multiple personalities: I write blogs about evidence-informed practice; what's left of my hair is increasingly grey - no, is grey; I constantly tweet; to my everlasting shame there were some students whom I did not have the skills or wherewithal to help; I could be miserable; I lived in fear of OfSTED even though I lived and worked in a jurisdiction where OfSTED did not prevail; and I was a senior leader who at times just got away with a sharp suit, matching shirt and tie, and highly polished shoes. And I can provide evidence for all of them!


Isaacs, D., & Fitzgerald, D. (1999). Seven alternatives to evidence based medicine. BMJ, 319(7225), 1618.

Sunday, 29 November 2015

Is it any good? Merit is not the only fruit

If you are a school research lead, practitioner inquirer or someone interested in what works in schools, this post is for you. In this post, I will use Stufflebeam and Coryn's (2014) extended definition of evaluation to help examine some of the values associated with in-school evaluation. I will then go on to consider some of the tasks associated with evaluations and the operational implications for the school research lead.
So what do we mean by an extended definition of evaluation?

A useful place to start is Stufflebeam and Coryn's  (2014) extended definition of evaluation, which they describe as:

… the systematic process of delineating, obtaining, reporting, and applying descriptive and judgmental information about some subject’s merit, worth, probity, feasibility, safety, significance, and/or equity. (p. 14)

Stufflebeam and Coryn acknowledge that they might have included other values; when conducting an evaluation, discussions will need to take place as to whether other values, relevant to the context, should also be included. That said, it is likely that most evaluations will include some, if not all, of the seven values listed.

So what are the extended values?

In a recent post, I discussed the distinction between 'merit' and 'worth', so will only briefly revisit them now.
  • Merit - the intrinsic quality or excellence of the service/programme/project/innovation without reference to costs.
  • Worth - the quality of the item, taking into account both context and costs.
More detailed descriptions of the other five extended values are detailed below.
  • Probity - has the activity/programme under review been conducted honestly and with due regard to ethical considerations such as integrity and ethical behaviour? As such, evaluators should check a programme's uncompromising adherence to the highest moral standards and err on the side of too much consideration of probity.
  • Feasibility – does the service/programme consume more resources than are available, or are there political considerations which make the activity undeliverable? As such, an evaluator's decision may justify the non-continuation of a programme.
  • Safety – are those engaging with the service/programme vulnerable to harm, for example physical or psychological? This value is applicable to evaluations in all fields.
  • Significance – what is the potential of the service/programme and its importance within a given context? Sometimes programmes are only relevant in the short term, or only have local interest. On the other hand, some programmes have a relevance far beyond the evaluation setting. As such, a key question for the evaluator is whether the project/service is scalable and whether it will work in other, different settings.
  • Equity – this can include: provision for all; access for all; equal participation; and impact on different groups. Kellaghan, cited by Stufflebeam and Coryn, argues that there are seven indicators of the existence of equity:
    1. A society's public educational services will be provided for all people
    2. People from all segments of the society will have equal access to the services
    3. There will be close to equal participation by all groups in the use of the services
    4. Levels of attainment - for example, years in the education system - will be substantially the same for different groups
    5. Levels of proficiency in achieving all of the education system's objectives will be equivalent for different groups
    6. Levels of aspiration for life pursuits will be similar across societal groups
    7. The education system will make similar impacts on improving the life accomplishments of all segments of the population (especially ethnic, gender and socio-economic groups) that the education system serves. (Stufflebeam and Coryn, p. 14)
So how do we operationalise evaluation?

Stufflebeam and Coryn characterise the work of evaluators under four main headings.
  • Delineating – determining key questions, audiences, values, criteria, information sources and, where appropriate, budget
  • Obtaining – obtaining, aggregating and analysing relevant information
  • Reporting – providing the sponsor, other audiences and stakeholders with feedback about the outcomes of the evaluation
  • Applying – assisting the evaluation sponsor in applying the findings of the evaluation
The final feature of Stufflebeam and Coryn’s definition of evaluation to be considered concerns the nature of the information included in the evaluation:
  • Descriptive information – this should provide a range of factual statements that ‘objectively’ describe the programme/service. This could include: to whom the service is offered; when it was offered; how many people engaged with the provision; how it was resourced – human, physical and financial; and the cost of the provision.
  • Judgmental information – this includes the views of those involved in the service/innovation/provision on the ‘quality’ of the service. This should involve judging the provision against a set of values and criteria.
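One way to keep the four tasks and the two types of information in view is to treat an evaluation plan as a simple structured checklist. The sketch below is one possible way of organising such a plan, not a prescribed format; the programme name and every entry in it are purely illustrative.

```python
# A minimal sketch of an evaluation plan structured around Stufflebeam and
# Coryn's four tasks (delineating, obtaining, reporting, applying).
# All programme details are hypothetical.
evaluation_plan = {
    "delineating": {
        "key_questions": ["Did the reading intervention work, and for whom?"],
        "audiences": ["senior leadership team", "governors"],
        "values": ["merit", "worth", "equity"],  # from the extended definition
    },
    "obtaining": {
        # Factual statements that 'objectively' describe the programme.
        "descriptive_information": [
            "offered to all Year 7 pupils",
            "ran for one term, two sessions per week",
        ],
        # Views to be judged against agreed values and criteria.
        "judgmental_information": [
            "teacher views on the quality of delivery",
            "pupil views on usefulness",
        ],
    },
    "reporting": {"feedback_to": ["sponsor", "stakeholders"]},
    "applying": {"support": "help the sponsor act on the findings"},
}

# A basic completeness check: every task heading should be present,
# and the delineating stage should name at least one audience.
for task in ("delineating", "obtaining", "reporting", "applying"):
    assert task in evaluation_plan, f"missing task: {task}"
assert evaluation_plan["delineating"]["audiences"], "an evaluation needs an audience"
print("evaluation plan covers all four tasks")
```

The value of writing the plan down in this way is simply that gaps - a missing audience, no judgmental information, no thought given to applying the findings - become visible before the evaluation starts rather than after it ends.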
So what are the implications for the school research lead?

For me there are several implications for the school research lead and senior leaders within a school.
  • If the role of the school research lead involves helping colleagues to try to work out what works, this should be seen as a necessary but not sufficient condition for evaluations.
  • The evaluative questions need to include: what works, for whom, to what extent, in what context, etc.
  • Evaluation will require the application of values to the work of colleagues, which will require the evaluator to be particularly skilled in managing the internal politics of a school.
  • If a school is conducting a range of practitioner inquiries, it's important to try to ensure that all types of pupils are 'covered' by the inquiries.
  • Be clear whether the evaluation is about bringing improvement or providing an overall judgement of the programme/evaluand.
  • Informal evaluations may be useful in generating discussion, but are unlikely to provide sufficiently rigorous evidence to justify scaling up informally evaluated programmes within a school. However, they may be useful within a formative evaluative context.
  • Formal and detailed evaluations are necessary where the outcome of the evaluation is likely to involve a critical operational decision within the school.
Some final words

Although this post has focussed primarily on school evaluation and has put the 'school research lead' in the foreground of the evaluative process, being a skilled evaluator is a responsibility of every teacher within a school. Indeed, applying the values of the extended definition to current practices may provide a very stimulating 'provocation' which results in changes in practice.


Stufflebeam, D. L. & Coryn, C. (2014). Evaluation Theory, Models & Applications (second edition). Jossey-Bass, San Francisco.