Friday 29 April 2016

Do you need to use a STICC to communicate?

If you are a teacher, school research lead or headteacher seeking to improve your decision-making, then this post is for you. In this post – the first of two on this topic – I will refer to the work of Hargreaves and Fullan (2012) on decisional capital. I will then go on to examine different types of problems – be they critical, tame or wicked – and begin to explore the implications for decision-making. I will then briefly examine the STICC communication and decision-making protocol and apply it to a school example. A subsequent post will look at work in the health professions and the competences required for shared decision-making with patients. I will then consider the implications of this model of shared decision-making for both evidence-based practice and decision-making within schools.

What is decisional capital? 

Hargreaves and Fullan (2012) argue that the professional capital of a school is made up of three forms of capital: human, social and decisional. Hargreaves and Fullan go on to argue that the very essence of being a professional is the ability to make discretionary judgements. Building upon the idea of decisional capital from case law, Hargreaves and Fullan define decisional capital as: the capital that professionals acquire and accumulate through structured and unstructured experience, practice and reflection – capital that enables them to make wise judgments in circumstances where there is no fixed rule or incontrovertible evidence to guide them. Decisional capital is enhanced by drawing on the insights and experiences of colleagues in forming judgements over many occasions. In other words, in teaching and other professions social capital is actually an integral part of decisional capital, as well as an addition to it. (p93)

To develop decisional capital, Hargreaves and Fullan argue that it is important for teachers and headteachers to practise their decision-making skills. Hargreaves and Fullan cite the 10,000-hour rule for becoming expert or proficient, which gained prominence through the work of Malcolm Gladwell and his 2008 book Outliers. Gladwell draws attention to the work of Anders Ericsson and his colleagues, who argue that even the most gifted performers require 10,000 hours of deliberate practice. Indeed, Hargreaves and Fullan argue that it may take a teacher or headteacher eight to ten years of experience to become a 'maestro'. Nevertheless, it is important to note that recent research by Hambrick et al (2014) reported that only a third of the differences in performance, in elite music and chess, can be explained by accumulated differences in deliberate practice.

Hargreaves and Fullan go on to argue that: Decisional capital is also sharpened when it is mediated through interaction with colleagues (social capital). The decisions get better and better. High yield strategies become more precise and more embedded when they are developed and deployed in terms that are constantly refining and interpreting them. At the same time, poor judgments and ineffective practices get discarded along the way. And when clear evidence is lacking or conflicting, accumulated experience carries much more weight than idiosyncratic experience or little experience at all. (p96)

However, whilst Hargreaves and Fullan identify the importance of decisional capital, they choose not to explore the different types of problems which decision-makers face, and which may require different approaches to the accumulation of decisional capital. The next section will begin to explore a classification of different types of problems, whilst at the same time offering a protocol for engaging with colleagues in the communication and decision-making process.

Different types of problems and associated decision-making processes

There is no doubt that different types of teaching, learning and managerial problems require different forms of decision-making. Grint (2007) describes how problems can be classified as critical, tame or wicked, with each type of problem lending itself to a different type of decision-making process. Critical problems and emergencies may require leaders to provide immediate answers. Tame problems – which may have been seen before – may require more managerial, process-led responses. Wicked problems – which may not have been seen before, have no stopping point or definition of success, and for which different stakeholders may have various definitions of success – may require more collaborative and collegial approaches. The relationship between increasing uncertainty about the solution to a problem and the increasing requirement for collaboration is illustrated in the following figure.

[Figure: increasing uncertainty about the solution to a problem plotted against the increasing requirement for collaboration. Taken from Grint (2008), page 16]

Increasing decisional capital through a communication protocol – the STICC model

One of the challenges of decision-making is ensuring that people know what they are doing and why they are doing it, and this is especially true in 'crisis situations' where the decision-maker may not have all the information at hand. Weick and Sutcliffe (2007) cite the work of Klein (1999), who developed in conjunction with emergency services – such as forest-fire fighters – a five-step briefing protocol called STICC. Weick and Sutcliffe (2007) describe the protocol as follows:
  • Situation: Here's what I think we face
  • Task: Here's what I think we should do
  • Intent: Here's why I think this is what we should do
  • Concern: Here's what we should keep our eye on because if that changes, we're in a whole new situation
  • Calibrate: Now talk to me. Tell me if you don't understand, cannot do it, or see something I do not. (Weick and Sutcliffe, p. 156)
The value of this model is not limited to 'crises': it can be used for briefing purposes before decisions are made. Below is a brief sketch of the structure, followed by a worked example of the STICC protocol in use.
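For readers who like to see the structure made explicit, here is a minimal sketch in Python of how a STICC briefing might be captured as a simple data structure. The class and field names are my own illustration, not anything from Klein or Weick and Sutcliffe:

```python
from dataclasses import dataclass, fields

@dataclass
class STICCBriefing:
    # The five elements of Klein's STICC protocol, as described above
    situation: str  # Here's what I think we face
    task: str       # Here's what I think we should do
    intent: str     # Here's why I think this is what we should do
    concern: str    # Here's what we should keep our eye on
    calibrate: str  # Now talk to me - the invitation for feedback

    def as_text(self) -> str:
        # Render the briefing in the order the protocol prescribes
        return "\n".join(
            f"{f.name.capitalize()}: {getattr(self, f.name)}"
            for f in fields(self)
        )
```

Writing a briefing this way forces the author to fill in all five elements, including the often-forgotten invitation for feedback.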

XYZ School and the STICC protocol

XYZ School is a Teaching School which has 15 whole-school initiatives detailed within its school development plan. The school had its last OfSTED visit in 2009, where it was graded as Outstanding, and the senior team – and in particular the Headteacher – is determined that the school leaves no stone unturned as it continually seeks to maintain its reputation – locally, regionally and nationally. As a result, the school is constantly taking on new initiatives, which are loosely connected and often not supported by robust evaluation evidence.

However, the senior leadership team are aware that this number of initiatives is placing high levels of stress on the teaching staff, and this is now being reflected in abnormally high levels of absence and the first signs of increasing staff turnover, particularly among younger and newly qualified staff.

This is how the STICC model could be used to brief the Headteacher:

Situation: Here's what I think we face

Headteacher, we are beginning to see both high levels of staff absence and increasing staff turnover, which would appear to be linked to the number of initiatives and innovations the school is pursuing.

Task: Here's what I think we should do

We should work with the Governing Body and the Heads of Department to identify no more than five initiatives to be carried forward into the next academic year. In addition, no further initiatives should be incorporated into the school development plan unless there is substantive evaluative evidence supporting their effectiveness and a clear school need to be met.

Intent: Here's why I think this is what we should do

The reason I am making this recommendation is that I am concerned about the impact of the large number of initiatives on staff absence, morale and culture, which may lead to us losing some of our very best staff, particularly in shortage subject areas. In addition, by taking on too many initiatives we are at risk of spreading ourselves too thinly and not doing things as well as we should.

Concern: Here's what we should keep our eye on because if that changes, we're in a whole new situation

Of course, we will need to look out for new initiatives that we will need to take on in response to any changes in legislation or any new problems or issues that emerge in the school. We will also need to work with colleagues so that we can demonstrate the rationale for the reduction in the number of initiatives, and we will need to support colleagues who 'own' initiatives which are no longer being taken forward.

Calibrate: Now talk to me. Tell me if you don't understand, cannot do it, or see something I do not

Tell me if my analysis is incorrect. Are there things I have missed out or got wrong? Are there areas of agreement? Am I recommending something we can't or shouldn't do?

The value of this protocol is several-fold. First, it provides an easily remembered structure for explaining a situation, whilst at the same time providing an opportunity for feedback from colleagues. Second, it provides a mechanism for checking whether what is being suggested is feasible. Third, it gives team members permission to raise issues which may have been omitted. Finally, it provides a useful structure for developing leaders, so they can gain deliberate practice in briefing colleagues.

Some final words

This post has argued that robust communication and decision-making protocols – which have their origins outside of education – can be useful in helping school leaders increase decisional capital. In my next post, I will explore how school leaders can benefit from decision-making protocols developed in the health professions – specifically, shared decision-making with patients – which may be of value in making shared decisions with colleagues.

References

Grint, K. (2007). Leadership, Management and Command: Rethinking D-Day. Springer.

Hambrick, D. Z., Oswald, F. L., Altmann, E. M., Meinz, E. J., Gobet, F., & Campitelli, G. (2014). Deliberate practice: Is that all it takes to become an expert? Intelligence, 45, 34-45.

Hargreaves, A., & Fullan, M. (2012). Professional Capital: Transforming Teaching in Every School. Abingdon: Routledge.

Klein, G. (1999). Sources of Power: How People Make Decisions. MIT Press.

Weick, K. E., & Sutcliffe, K. M. (2007). Managing the Unexpected: Resilient Performance in an Age of Uncertainty. Jossey-Bass.

Friday 22 April 2016

Is it time to call time on Hattie and the significance of the 0.4 effect size?

If you are an evidence-based practitioner, school research lead or headteacher interested in being able to interpret effect sizes, then this post is for you. In this post I will: first, briefly show how effect sizes are calculated; second, identify some of the most common interpretations of the size of effect sizes; third, summarise Robert Slavin's recent post on what we mean by a large effect size; and finally, consider the implications of the discussion for those interested in supporting evidence-based practice within schools and for the work of John Hattie.

What do we mean by effect size?

Put quite simply, an effect size is a way of quantifying the difference between two groups, and it is calculated as follows.

Effect size = (mean of the experimental group - mean of the control group) / standard deviation
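As a rough illustration, here is a minimal sketch in Python. One assumption on my part: it divides by the pooled standard deviation, which is one common convention; authors vary in which standard deviation they use as the denominator.

```python
from statistics import mean, stdev

def effect_size(experimental, control):
    # Difference in group means divided by the pooled standard deviation
    # (Cohen's d). Other denominators, e.g. the control group's SD alone,
    # also appear in the literature.
    n1, n2 = len(experimental), len(control)
    s1, s2 = stdev(experimental), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(experimental) - mean(control)) / pooled_sd

# Hypothetical test scores for a small intervention group and control group
print(round(effect_size([65, 70, 72, 68, 75], [60, 66, 63, 69, 64]), 2))  # -> 1.56
```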

How can we interpret effect sizes?

The most well-known interpretation has been put forward by John Hattie in Visible Learning, who, having reviewed over 800 meta-analyses, argues that the average effect size of a range of educational strategies is 0.4. On the basis of this, Hattie argues that teachers should select those strategies that have an above-average effect on pupil and student outcomes.

Secondly, we could turn to the EEF's DIY Evaluation Guide, written by Rob Coe and Stuart Kime, where on pages 17 and 18 they provide some guidance on the interpretation of effect sizes (-0.01 to 0.18 low, 0.19 to 0.44 moderate, 0.45 to 0.69 high, 0.7+ very high), with effect sizes also being converted into months of progress.
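Expressed as a simple lookup, using only the band boundaries quoted above (the function name is my own; this is a sketch, not anything published by the EEF):

```python
def eef_band(effect_size):
    # Bands as quoted above from the EEF DIY Evaluation Guide (pp. 17-18)
    if effect_size >= 0.7:
        return "very high"
    if effect_size >= 0.45:
        return "high"
    if effect_size >= 0.19:
        return "moderate"
    if effect_size >= -0.01:
        return "low"
    return "below the lowest band in the guide"

print(eef_band(0.6))  # -> "high"
```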

Alternatively, if you are interested in the relationship between effect sizes and GCSE grades, you could turn to Coe (2002), who notes that the distributions of GCSE grades in compulsory subjects (i.e. Maths and English) have standard deviations of between 1.5 and 1.8 grades. As such, an improvement of one GCSE grade represents an effect size of 0.5 to 0.7, so a teaching intervention which led to an effect size of 0.6 would lead to each pupil improving by approximately one GCSE grade.
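This conversion is just the effect size formula rearranged: the grade improvement is roughly the effect size multiplied by the standard deviation expressed in grades. A quick check of the figures quoted above:

```python
# Grade improvement implied by an effect size, given the SD of GCSE
# grades in a subject (1.5 to 1.8 grades, per Coe 2002 as quoted above)
effect = 0.6
for sd_in_grades in (1.5, 1.8):
    print(f"SD of {sd_in_grades} grades -> {effect * sd_in_grades:.2f} grades")
# SD of 1.5 grades -> 0.90 grades
# SD of 1.8 grades -> 1.08 grades, i.e. approximately one grade either way
```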

How large is an effect size? - a recent analysis

Slavin (2016) recently published an analysis of effect sizes which challenges these interpretations of what counts as a large effect size. Slavin argues that what counts as a large effect size depends on two factors: sample size, and how students were assigned to treatment or control groups (randomly or through a matching process). This conclusion is based on a review of twelve meta-analyses and the 611 studies which met the rigorous standards necessary for inclusion in the Johns Hopkins University School of Education's Best Evidence Encyclopedia. The results of this analysis are as follows:


| | Small samples (fewer than 250 participants) | Large samples (250 or more participants) |
| --- | --- | --- |
| Matched | +0.32 (215 studies) | +0.17 (209 studies) |
| Random | +0.22 (100 studies) | +0.11 (87 studies) |

One way of interpreting the above table is to say that, for matched studies (424 in total), the average effect size for studies with fewer than 250 participants (+0.32) is nearly twice that of studies with 250 or more participants (+0.17). Similarly, small studies using random assignment are likely to generate an effect size (+0.22) which is twice that of larger studies (+0.11).

So what are the implications of Slavin's analysis for the evidence-based practitioner?

First, Slavin argues that within Hattie's Visible Learning there are a large number of studies which do not meet the requirements of the Best Evidence Encyclopedia, and which should not be included in any calculation of the effectiveness or otherwise of educational interventions.

Second, having removed insufficiently rigorous studies from the calculation of Hattie's league table of effect sizes, the league table should be sub-divided into four separate tables, depending upon the size of the sample (large or small) and the method of assignment (random or matched).

Third, the 0.4 hinge point which Hattie suggests teachers and headteachers use to identify strategies with proven effectiveness is in all likelihood incorrect and should not be used as a screening mechanism for identifying strategies to introduce into a school. Indeed, Slavin's work suggests the need for multiple hinge points.

Fourth, the EEF table used for the interpretation of effect sizes needs to be re-calibrated to reflect the impact of sample size and random assignment/matching on average effect sizes. In other words, what counts as a large effect size would now appear to be smaller, as effect sizes are unlikely to be as large as anticipated, particularly in large multi-school studies involving more than 250 pupils. This is particularly urgent, as a number of schools are likely to be using the EEF DIY evaluation guide to inform their practice, and the current guidelines for interpreting effect sizes may lead to some interventions being mis-classified as having relatively small effect sizes.

And finally ...

Where does this leave us, particularly with regard to John Hattie and Visible Learning? Well, for me, I think it would be difficult to justify using Hattie's league table of effective strategies within a school to determine either changes in teaching strategies or the introduction of school-wide interventions. What I think you can do is use Visible Learning to demonstrate the challenges and limitations associated with research-based teaching. In other words, the benefits of critically using Hattie's work within a school lie in building professional capital rather than in providing a tool for prioritising interventions. If anything, the difficulties arising from Hattie's work suggest an even greater need for teachers to become effective evidence-based practitioners, who are able to combine the different sources of evidence – research, school data, stakeholder views, practitioner expertise – to make decisions which will hopefully lead to improved pupil outcomes and staff well-being.

Note
I have not deconstructed Hattie's use of effect sizes – this has been more than ably done by Ollie Orange.





Saturday 16 April 2016

The role of research evidence in educational improvement

If you are a teacher, school research lead or headteacher interested in the role of research evidence in educational improvement, then this post is for you. The recent White Paper places great store on the role of evidence in improving teaching and learning, and this post, drawing upon the work of Hargreaves and Stone-Johnson (2009), will: first, identify some of the complexities associated with the relationship between research/evidence-based practice and educational improvement; second, consider the implications for evidence-based change of different conceptions of the practice of teaching; and third, explore the implications for you in your role – teacher, school research lead or headteacher – as you seek to bring about evidence-based educational improvement.

Complexities in evidence-based practice in educational improvement

Hargreaves and Stone-Johnson argue that the debate about the use of research evidence* in educational improvement is particularly complex for two reasons. First, the extent to which stronger and better evidence might advance the quality of teaching and its impact depends upon the nature of the evidence itself – on how it is defined, used, interpreted, and made legitimate. Second, the relevance and impact of evidence-based practice in teaching also depends on what conceptions of teaching and learning are being employed, how evidence applies to each of them, and how these conceptions shape the conditions under which evidence itself will be used or misused. (p90)

Brown (2015) goes on to expand upon these areas of controversy and debate:
  • the epistemological differences between academic researchers and policy-makers in terms of what counts as evidence, the quality of evidence and what evidence can or can't tell us;
  • whether the evidence-informed movement serves to work against practitioners' professional judgement;
  • issues in relation to how formal academic knowledge and professional or tacit knowledge might be effectively combined;
  • differentials in power that can affect or limit interactions between teachers or policy-makers and researchers;
  • controversies in relation to some of the methods commonly associated with enhancing evidence use;
  • how the capacity to engage with academic research might be enhanced;
  • issues such as the inaccessibility of research to teachers and policy-makers, both in terms of where it is published and the language that is typically used in such publications. (adapted from p1)
In addition, Hargreaves and Fullan (2012) provide a number of reasons why the case for the use of research-based evidence in decision-making can be overstated:
  • evidence-based decisions can be tainted with self-interest;
  • cast-iron evidence can get rusty later on;
  • evidence-based principles are used very selectively;
  • evidence isn't always self-evident;
  • evidence on what to change isn't the same as evidence on how to change;
  • positive initiatives based on evidence in one area can inflict collateral damage;
  • people can cook the data; 
  • evidence-based teaching is only somewhat like evidence-based medicine;
  • evidence comes from experience as well as research. (adapted from p47)

The Practices of Teaching 

Hargreaves and Stone-Johnson state: Teaching – like medicine, law, or dentistry – is a professional practice. It consists of systematically organised and thoughtfully as well as ethically grounded activities among a community of professionals that is dedicated to the service of others.

The practice of teaching is complex. It is not mechanical or predictable, nor does it follow simple rules. Indeed, teaching is an assemblage of many practices, each affecting, drawing on, or intersecting with the others. Each of these practices relates to evidence in its own distinctive way. (p91)

Table 1 seeks to summarise, as identified by Hargreaves and Stone-Johnson, the relationship between the different practices of teaching and the role of research evidence in evidence-informed change.


| Practice of teaching | Description | Role of evidence in evidence-informed change |
| --- | --- | --- |
| Technical | Teaching involves the mastery and employment of a technical skill | Evidence could show how differing teacher behaviours affect student outcomes, though this perspective underplays the complexity and challenge of teachers' work. The approach needs to reflect a deep understanding of evidence-based medicine and practice, and the role of practitioner expertise and patient values. |
| Intellectual | Teaching involves increasingly complex work that is highly cognitive and intellectual | Evidence provides a source for improving student learning through enhanced teacher learning about the effects of their teaching, the strengths and needs of their students, and alternative strategies that have an externally validated record of success. |
| Experiential | Teachers' understandings of their problems are deeper than those offered by theorists; teachers can provide common-sense insight | Evidence provides a legitimate but imperfect basis for professional judgement and knowledge. Practical experience is as important as research-driven knowledge. The validity of teacher knowledge depends upon the conditions in which it is produced as well as the processes by which it is validated. Teachers need to become adaptive experts who actively check existing practices and have a disposition towards career-long professional learning. |
| Emotional | Teaching is an embedded practice that produces emotional alteration in the stream of experience, giving emotional culmination to thoughts, feelings and actions (amended from Denzin) | Evidence-based changes need to include emotional goals and processes of learning (empathy, resilience, self-esteem) as well as emotional conditions for learning (safety and security). Evidence-based education must go through a process which strengthens the relationships of the groups and communities that produce it. |
| Moral and ethical | Teaching is never amoral: it always involves ethical and moral practices, for good or bad. Teachers promote and produce virtues such as justice, fairness, respect and responsibility | Requires judgement about how evidence is produced, used and interpreted. Colleagues hold one another to account within professional learning communities for the integrity of their practice and the evidence supporting their work. |
| Political | Teaching always in some measure involves a relationship of power | Given the selective use of evidence by government, it is essential that teachers have the professional capacity to review, critique, make informed decisions, and adapt the evidence accordingly. |
| Situated | Teaching varies in what is taught, who is taught, and how learning is assessed | For evidence-based improvement to be effective, contextual contingencies need to be embraced, with realistic timelines for change. |
| Cultural | As teaching practices become ingrained and accepted, they form part of the professional culture of teaching, i.e. the attitudes, beliefs, values and patterns of relationship between teachers | Evidence-based educational initiatives require significant investment in the culture-building process. They can also push collaborative teacher cultures to focus more persistently on student learning needs, especially when this might create professional discomfort for teachers themselves. A systematic connection between culture and evidence-based inquiry, in caring and trusting relationships of ethical integrity, is at the heart of one of the most powerful principles of implementing evidence-based improvement: professional learning communities. |

(Summarised and amended from Hargreaves and Stone-Johnson, pp. 91-103)



What are the implications for your role as a leader of research/evidence-informed change?

First, given the various practices of teaching, it is important to reflect on which practice most closely mirrors your own view of teaching, and on how that view will impact upon your approach to developing research/evidence-informed practice within your school. In particular, it is essential that you challenge your own 'practice of teaching' and its relationship with evidence, and acknowledge the inherent weaknesses of your approach.

Second, given that within your school there will be a diverse range of views on the practice of teaching, it may be necessary to identify which model or models colleagues are using to describe their practice, and then adapt your use of 'evidence' to one which is consistent with that paradigm. In doing so, this may provide you with the opportunity to engage in ongoing dialogue rather than a 'dialogue of the deaf'. If you do this, you will keep the conversation going and hopefully each of you will learn from one another.

Third, leading evidence-based or evidence-informed change is clearly a complex and challenging space and will require a significant personal investment of time, no little skill, and large dollops of patience. As such, given the challenges and complexities of evidence-based/informed change, it would be wrong to expect evidence-based/informed practices to provide 'wonder cures' for a school's 'ailments'. (If they did, I'd like to see the evidence.)

And finally*

This post has focused on the use of research evidence in evidence-based change, so remember that evidence-based practitioners draw upon four sources of evidence – research, school data, stakeholder views and practitioner expertise. Research evidence is not the only evidential fruit.

References

Brown, C. (2015). Evidence-Informed Policy and Practice in Education: A Sociological Grounding. London: Bloomsbury.

Hargreaves, A., & Stone-Johnson, C. (2009). Evidence-informed change and the practice of teaching. In The Role of Research in Educational Improvement (pp. 89-109).

Hargreaves, A., & Fullan, M. (2012). Professional Capital: Transforming Teaching in Every School. Abingdon: Routledge.