Friday, 19 January 2018

The School Research Lead and Journal Clubs - Do we need a logic model?

There is currently much interest in the use of journal clubs within schools.  For example, both the Blackpool (St Mary’s Catholic Academy) and Durrington Research Schools are currently promoting the use of journal clubs in their schools, and the Chartered College of Teaching operates a ‘virtual journal club’ through its monthly book club.   However, it should be noted that this is nothing new, as Beth Giddins has been promoting the use of journal clubs since 2015 (https://www.edujournalclub.com/journal-clubs/).    Nevertheless, there is very little research available on the effective use of journal clubs within schools, with Sims, Moss and Marshall (2017) being a notable exception.   With that in mind, this post will be the first in a series of blogposts looking at the use of journal clubs.  In doing so, I will be drawing upon a number of systematic reviews from medicine and health settings which examine how effective journal clubs are in supporting both continuing professional development and evidence-based decision-making (Harris et al., 2011; Deenadayalan, Grimmer-Somers, Prior, & Kumar, 2008).  This first post in the series briefly examines a possible logic model/framework for use with teacher journal clubs.

Logic models

Put simply, a logic model graphically illustrates the components of an intervention in terms of inputs, outputs and outcomes.  Figure 1 illustrates how the various elements of a very simple logic model come together.   The inputs represent the resources that are put into the programme or intervention: money, time and skills.    The outputs represent what is done - the activities associated with the programme and who the programme reaches.   Finally, the outcomes are the changes and benefits (and possibly costs) which accrue in the short, medium and long term: for example, changes in teacher knowledge and skills, application in the classroom, and improvements in pupil learning.

For a detailed explanation of logic models, have a look at Better Evaluation.
Teacher Journal Clubs - A logic model

Adapting the work of Harris et al. (2011), it is possible to come up with a detailed logic model for how a teacher journal club might work.
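As an illustration only - the specific inputs, activities and outcomes below are my own assumptions rather than anything taken from Harris et al. - the inputs → outputs → outcomes chain of a journal-club logic model can be sketched as a simple data structure:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """A minimal inputs -> outputs -> outcomes chain for an intervention."""
    inputs: list = field(default_factory=list)    # resources: money, time, skills
    outputs: list = field(default_factory=list)   # activities and who they reach
    outcomes: dict = field(default_factory=dict)  # changes keyed by time horizon

# Hypothetical content for a teacher journal club
journal_club = LogicModel(
    inputs=["protected meeting time", "access to research papers", "a facilitator"],
    outputs=["monthly meetings", "critical appraisal of one paper per meeting"],
    outcomes={
        "short-term": "improved teacher research literacy",
        "medium-term": "research-informed changes to classroom practice",
        "long-term": "improvements in pupil learning",
    },
)

print(journal_club.outcomes["short-term"])  # → improved teacher research literacy
```

Laying the model out like this makes the gaps visible: an outcome with no supporting input or activity is exactly the kind of mismatch the model is meant to expose.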

The benefits of using a logic model

The benefits of developing a logic model are set out in more detail at Community Toolbox, but for the purposes of our discussion the main benefits are as follows.
  • Logic models integrate planning, implementation, and evaluation. In other words, developing a logic model will give you a greater understanding of what needs to be done to make the innovation work, and at the same time gives you a framework for evaluation
  • Logic models help you make good matches between activities and effects.  By developing a logic model for a journal club, it can help you spot those intended outcomes with no supporting activities and resources, and then make suitable adjustments.
  • Logic models can help in the collaborative planning process.  The development of a logic model is an iterative process, and working together on it can help build a shared understanding of what needs to be done to make an intervention work.  It is also helpful when you are looking to disseminate an intervention within or between schools.
  • Logic models can help keep a focus on accountability and outcomes.  In schools where resources are increasingly scarce, a logic model can keep a focus on the outcomes of an intervention and whether the planned-for outcomes are actually happening.  Hopefully, this will allow further resources to be allocated when the journal club proves a success.

However, as noted at Community Toolbox, logic models can have a number of weaknesses.
  • A logic model needs to be 'logical'.  If it is not, this will no doubt cause problems for colleagues seeking to implement the innovation
  • A logic model cannot capture all the variables and elements at work when trying to make an intervention work - so the logic model may move from being 'simple' to being 'simplistic'
  • A logic model can be both difficult and time-consuming to create.  So there needs to be a clear trade-off between the time and effort put into creating the logic model and the subsequent benefits
And finally

Next week I'll look at some of the evidence about what needs to be done to make sure your journal club is a success.



Saturday, 6 January 2018

Measuring the impact of Lesson Study in Schools

The end of 2017 saw a considerable amount of discussion about the impact of lesson study, prompted in large part by the EEF evaluation report on lesson study, which suggested it had no impact on pupil learning outcomes. This raises all sorts of questions about how to go about measuring the impact of lesson study. So, to help with this task, Sarah Seleznyov of the UCL IOE has written a guest post on how schools can go about measuring the impact of lesson study.

Guest post by Sarah Seleznyov - Measuring the impact of lesson study in schools

'If we are going to measure the impact of lesson study, we need first to be sure of what it is and what its expected outcomes are. Based on this understanding, we then need to decide what it is important to evaluate and measure.

This is trickier to do than it might seem, since there is very little literature from Japanese authors on lesson study accessible to English-language speakers. Based on an extensive analysis of the literature, and ongoing dialogue with Tokyo Gakugei University professors, listed below are what I believe to be the critical components of the lesson study process:

1. Identify focus


Teachers compare long-term goals for student learning and development to students’ current learning characteristics in order to identify a school-wide research theme

2. Planning

Teachers work in collaborative groups to carry out kyozai kenkyu (study of material relevant to the research theme). This study leads to the production of a collaboratively written plan for a research lesson. This detailed plan attempts to anticipate pupil responses, misconceptions and successes for the lesson.

3. Research lesson

The research lesson is taught by one teacher, who is a member of the collaborative planning group. Other members of the group act as silent observers, collecting evidence of pupil learning.

4. Post-lesson discussion

The collaborative group meet to discuss the evidence gathered. Their learning in relation to the research theme is identified and recorded. It is intended that this learning informs subsequent cycles of research.

5. Repeated cycles of research

Subsequent research lessons are planned and taught that draw on the findings from the post-lesson discussions.

6. Mobilising knowledge
Opportunities should be created for teachers working in one lesson study group to access and use the knowledge from other groups, either through observing other groups’ research lessons or through the publication of group findings.

7. Outside expertise

Where possible, there should be input from a koshi or ‘outside expert’ involved in the planning process and/or the research lesson. (Seleznyov, S. (forthcoming))

And what are the expected outcomes of teacher participation in lesson study? Lesson study aims to enable teachers to make positive changes to their day-to-day classroom practices so that improvements to pupil achievement are sustained beyond the lesson study process. It is not intended as a quick-fix problem solving approach to teaching and learning, nor a short term professional learning project.

In England, there is overwhelming pressure on schools and school leaders to provide evidence of the impact of any intervention that is intended to improve outcomes for pupils. This evidence of impact is expected to be within what is in research terms a very short time frame: a year, or two years at the most, and must be evidenced in terms of pupil outcomes. Lesson study experts, however, describe lesson study as a focus on the development of expertise over decades, not months.

In line with this focus on the long term development of teacher expertise, we advise teachers not to expect to see an impact on pupil learning after one, two or even three research lesson cycles. This would align with other literature on the development of teacher expertise, eg Hattie, who advises caution in judging ‘expert teachers’ using simplistic assessment measures such as tests, which can only measure improvements in shallow learning.

However, we can anticipate that qualitative changes to some of the ‘soft’ aspects of pupil learning (for example, engagement, resilience, perseverance) will lead to quantitative changes in attainment and progress in the long term (Gutman, L.M. and Schoon, I., 2013). Teachers engaged in lesson study can look for key aspects of the pupils’ learning that are likely to lead to the biggest changes in their progress over time and can see the research lessons as a vehicle to gain insight into these aspects.

When evaluating lesson study, school leaders should consider gathering evidence against the full logical chain that might enable changes to pupil outcomes. Our logical chain (Godfrey et al., forthcoming) is based on the work of Guskey, who describes five different levels at which the effectiveness of professional development can be measured:

1. Teachers’ reactions

Teachers’ attitudes to and enjoyment of professional learning might improve. Did teachers like lesson study? How was it different to / better than the other professional development they experience?

2. Teachers’ professional learning

Teachers might report improved subject knowledge, pedagogical content knowledge and self-confidence. What did teachers learn? How did lesson study enable them to learn this?

3. The organisation’s professional development model

The structure, time, resourcing of the school’s professional learning programme might alter in order to accommodate lesson study. Cultural attitudes towards professional learning might shift in relation to: the role of peer-to-peer learning, teacher ownership of learning, lesson observation as learning not performance. Did we as leaders do enough to enable the lesson study to be of as high a quality as possible? Has teachers’ participation in lesson study made them think differently about eg working with other colleagues, lesson observations, etc?

4. Teacher use of new knowledge and skills

Teachers’ newly acquired confidence, subject knowledge and pedagogical content knowledge should lead to changes in practice. Have teachers made any changes to their day-to-day classroom practice based on what they learnt through lesson study? How substantial are these changes to practice?

5. Pupil learning outcomes

Changes in teachers’ practice should lead to improved attitudes to learning and ultimately progress for pupils, in terms of evidence from written work and assessment data. Have teachers noticed any changes to pupil learning, based on the changes they have made to their practice? What is their evidence for claiming this?

School leaders considering how best to evaluate the impact of their school’s lesson study projects should consider which of the above five levels are of most relevance to their own context and seek evidence against those priorities, rather than focusing solely on level five. And think also about how the school’s professional development model supports the effective implementation of lesson study:

• To what extent has your school invested in a long term approach to professional learning and does your proposed cost-impact evaluation for lesson study take this into account?

• How can your impact evaluation support teachers to focus less on the short term quantitative analysis of pupil assessment data and more on the longer term qualitative analysis of pupil learning?

• Can a process be put in place to gather long term evidence of impact on pupil learning?

For more information on our lesson study programmes, to join the UCL Institute of Education Lesson Study Network or to purchase the UCL Lesson Study Handbook, contact: s.seleznyov@ucl.ac.uk'

References:


Godfrey, D., Seleznyov, S., Wollaston, N., and Barrera-Pedamonte, F. (forthcoming). Target oriented lesson study (TOLS): Combining lesson study with an integrated impact evaluation model.

Seleznyov, S. (forthcoming). Lesson study: an exploration of its translation beyond Japan.

Thursday, 14 December 2017

School research lead and simple sabotage

As we approach the end of the calendar year, it seems only sensible to have an end-of-year list.   So in the spirit of 'bah humbug', I thought I'd provide a checklist of those actions where you may have tried to sabotage yourself as school research lead, or where others may have tried to undermine you.  This checklist is based on Simple Sabotage by Galford, Frisch and Greene, which in turn is based on a guide to sabotage produced by the US Office of Strategic Services in 1944.


A checklist of acts of sabotage

For each action below, tick whether it applies to you, to others, or to both.

  • Insist on doing everything by ‘channels.’ Never permit short-cuts to be taken to expedite decisions
  • Make speeches. Talk as frequently as possible and at great length.  Illustrate your points by long anecdotes and accounts of personal experience. Never hesitate to add a few comments along the lines of ‘we are doing it for the pupils’
  • Where possible, refer all items to ‘committees’ for further discussion and consideration.  Attempt to make the committees as large as possible – never fewer than five
  • Bring up irrelevant issues as frequently as possible
  • Haggle over precise wordings of communications, minutes and other documents
  • Refer back to matters already decided upon at the last meeting and attempt to re-open the question or the advisability of the decision
  • Advocate ‘caution’. Be reasonable and urge your colleagues to be ‘reasonable’ and avoid haste which might lead to embarrassment later on
  • Be worried about the propriety of any decision – raise the question of whether such action as is contemplated lies within the remit of the group or whether it might conflict with the policy of some higher echelon
  • CC everyone in to all your emails and discussions

So what are you to do if there are ticks in either column?  Well, if it's you sabotaging yourself, the answer is simple: STOP doing it - and do something else instead which increases your chance of success, for example, have working groups of four or fewer.  If it is others trying to sabotage you - call the behaviour out.  If a decision has been made and subsequently someone tries to re-open the discussion, just say: 'Thank you for your comments, a decision has already been made and we are moving on' - and then move on.

Friday, 8 December 2017

The School Research Lead : How to stop doing what doesn't work

Tuesday 5 December saw a strange alignment between the real world and Twitter.  That night, a Coalition for Evidence-Based Education event discussed the notion of strategic abandonment (thank you @DrCarolineCreaby), whilst later in the evening #DebatED discussed 'whether an interest in education research is more about identifying what doesn't work as suggesting what will'.  This was then followed up on Thursday with a #UKEdResChat focussing on 'how do we define "what works" in educational research? Should we also be focussing on what doesn't work?'

So with that in mind, it seems sensible to examine a process for disengaging from strategies and interventions which appear not to be working.   McGrath (2011) has identified a disciplined process for getting out of projects, which includes these steps:
  1. Decide in advance on periodic checkpoints for determining whether to continue or not
  2. Evaluate the project’s upside against the current estimated costs of continuing.  If it no longer appears that the project will deliver the returns anticipated at the outset, it may be time to stop
  3. Compare the project with other candidate projects that need resources.  If this one looks less attractive than they do, it may be time to stop
  4. Assess whether the project teams may be falling prey to escalation pressures ('all will be ok as long as we make the project bigger')
  5. Involve an objective, informed outsider in the decisions about whether to continue, instead of leaving it up to the project team members
  6. If the decision is made to stop, spell out the reasons clearly
  7. Think through how capabilities and assets developed during the course of the project might be recouped
  8. Identify all who will be affected by the project’s termination; draw up a plan to address disappointments or 'damage' they might suffer
  9. Use a symbolic event – a wake, a play, a memorial – to give people closure
  10. Make sure that the people involved get a new, equally interesting opportunity (p. 83)

Given what we know about educational research and interventions, it is impossible to avoid things that do not work.  As such, the choice is simple: continue with practices and interventions that do not work, or release the resources for use in some area where they might.  In doing so, however, it is important to maximise what can be learnt from failure - which may lead to success next time.

Reference

McGrath, R. (2011). Failing by Design. Harvard Business Review (April, 2011), 77-83.