Tuesday, 23 December 2014

Maturity and the case of graded lesson observations - can the FE sector handle the truth?

Last Friday saw the TES publish an article on graded lesson observations within the further education sector, in which Lorna Fitzjohn, Ofsted's national director for learning and skills, stated: “The big question that we’ve had from some leaders in the sector is: is the sector mature enough at this point in time not to have us (Ofsted) grading lesson observations?” In responding to this statement it seems sensible to ask the following questions:
  1. What does the research evidence imply about the validity and reliability of graded lesson observations?
  2. What is the best evidence-based advice on the use of graded lesson observations?
  3. What do the answers to the first two questions imply for the maturity of the leadership and management of the further education sector?
What does the research evidence imply about the validity and reliability of lesson observations?
      O’Leary (2014) undertook the largest ever study into lesson observation in the English education system, investigating the impact of lesson observations on lecturers working in the FE sector. Lecturers' views on graded lesson observations were summarised as:
Over four fifths (85.2%) disagreed that graded observations were the most effective method of assessing staff competence and performance. A similarly high level of disagreement was recorded in response to whether they were regarded as a reliable indicator of staff performance. However, the highest level of disagreement (over 88%) of all the questions in this section was the response to whether graded observations were considered the fairest way of assessing the competence and performance of staff. 
     Not only are there 'qualitative' doubts about the reliability and validity of lesson observations, there are also 'quantitative' objections to their use, particularly when untrained observers are involved. Strong et al. (2011) found that the correlation coefficient for untrained observers agreeing on a lesson observation grade was 0.24, which runs counter to sector leaders' advice to both Principals and Governors to 'trust your instincts.'
     Furthermore, Waldegrave and Simons (2014) cite Coe’s synthesis of a number of research studies, which raises serious questions about the validity and reliability of lesson observation grades. When comparing the value-added progress made by students with a lesson observation grade (validity), Coe states that in the best case there will be only 49% agreement between the two grades and in the worst case only 37%. As for the reliability of grades, Coe’s synthesis suggests that in the best case there will be 61% agreement between two observers and in the worst case only 45%.
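To make these percentages concrete, here is a minimal sketch of how raw percent agreement between two observers can be computed. The grades and the resulting figure are invented for illustration; none of the numbers come from Strong et al. or from Coe's synthesis:

```python
# Hypothetical grades (1 = outstanding ... 4 = inadequate) awarded
# independently by two observers to the same ten lessons.
# Invented data for illustration only - not drawn from the cited studies.
observer_a = [2, 1, 3, 2, 4, 2, 3, 1, 2, 3]
observer_b = [2, 2, 3, 1, 3, 2, 2, 2, 2, 4]

# Raw percent agreement: the proportion of lessons given the same grade.
agreement = sum(a == b for a, b in zip(observer_a, observer_b)) / len(observer_a)
print(f"Percent agreement: {agreement:.0%}")  # 40% on this invented data
```

Even before validity enters the picture, a figure in this region shows how little information a single grade from a single observer carries.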
      The above would suggest that graded lesson observations provide an extremely shaky foundation on which to make judgements about the quality of teaching and learning within the further education sector. It is now appropriate to turn to the current best evidence-based advice on the use of graded lesson observations.

So what is the best evidence-based advice on the use of lesson observations?
     Earlier this year, at the Durham University Leadership Conference, Rob Coe stated that the evidence suggests:
Judgements from lesson observation may be used for low-stakes interpretations (eg to advise on areas for improvement) if at least two observers independently observe a total of at least six lessons, provided those observers have been trained and quality assured by a rigorous process (2-3 days training & exam). High-stakes inference (eg Ofsted grading, competence) should not be based on lesson observation alone, no matter how it is done
      In other words, the current practice of a graded lesson observation system based on one or two observations per year - which is the case in the vast majority of colleges - is not going to provide sufficient evidence for low-stakes improvement, never mind high-stakes lecturer evaluation. That does not mean lesson observations should not take place, but as Coe suggests they need to take place alongside reviews of other evidence, such as student feedback, peer review and student achievement.
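The arithmetic behind requiring multiple observers and multiple lessons can be sketched with the Spearman-Brown prophecy formula from classical test theory, which estimates the reliability of an average of n independent observations. Treating the 0.24 untrained-observer correlation quoted earlier as a single-observation reliability is my assumption for illustration, not Coe's own calculation:

```python
# Spearman-Brown prophecy formula: estimated reliability of the average of
# n independent observations, given the reliability r of a single observation.
def pooled_reliability(r: float, n: int) -> float:
    return n * r / (1 + (n - 1) * r)

# Hypothetical starting point: single-observation reliability of 0.24
# (borrowed, as an assumption, from the untrained-observer correlation above).
single = 0.24
for n in (1, 2, 6, 12):
    print(f"{n:2d} observations -> estimated reliability {pooled_reliability(single, n):.2f}")
```

Pooling helps, but slowly: under this assumption six observations only lift the estimate to around 0.65, which is consistent with Coe restricting such judgements to low-stakes uses.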
So what does the above mean for the leadership and management of the further education sector?
     First, there would appear to be a clear research-practice gap, with sector leaders giving out advice which is inconsistent with the best available evidence. Second, if leaders and managers within the sector are to demonstrate and track the improvement of teaching and learning, then the sector needs to begin to develop a wider and more sophisticated range of processes and measures to judge quality than graded lesson observations alone. Third, the reliance on graded lesson observation suggests a prevailing leadership and management culture founded on the assumption that improving performance requires the removal of poor performers. For me, Marc Tucker (2013) sums up the way forward:

There is a role for teacher evaluation in a sound teacher quality management system, but it is a modest role. The drivers are clear: create a first rate pool from which to select teachers by making teaching a very attractive professional career choice, provide future teachers the kind and quality of education and training we provide our high status professionals, provide teachers a workplace that looks a lot more like a professional practice than the old-style Ford factory, reward our teachers for engaging in the disciplined improvement of their practice for their entire professional careers, and provide the support and trust they will then deserve every step of the way.

And if we take that as the definition of a mature system, then maybe FE has a long way to go.

Coe, R. (2014). Lesson Observation: It’s harder than you think. TeachFirst TDT Meeting, 13 January 2014.
O’Leary, M. (2014). Lesson observation in England’s Further Education colleges: why isn’t it working and what needs to change? Paper presented at the Research in Post-Compulsory Education Inaugural International Conference: 11th –13th July 2014, Harris Manchester College, Oxford.
Strong, M., Gargani, J. and Hacifazlioglu, O. (2011). Do We Know a Successful Teacher When We See One? Experiments in the Identification of Effective Teachers. Journal of Teacher Education, 62(4), 367–382.
Tucker, M. (2013). True or false: Teacher evaluations improve accountability, raise achievement. Top Performers (Education  Week blog), July 18, 2013 http://www.ncee.org/2013/07/true-or-false-teacher-evaluations-improve-accountability-raise-achievement/
Waldegrave, H., and Simons, J. (2014). Watching the Watchmen: The future of school inspections in England, Policy Exchange, London.


Sunday, 14 December 2014

Research Leads Conference and Evidence-Based Practice - How to avoid re-inventing the wheel

On Saturday I had the privilege of attending ResearchED's Research Leads one-day conference.  It was an incredible day full of intellectual challenge mixed with the opportunity to meet some wonderful colleagues.  However, given that the event was being held in the Franklin Wilkins Building at King's College London - named after two of the discoverers of the 'double helix' - it was deeply ironic that we were not standing on the 'shoulders of giants'.  Many of the issues and topics which we wrestled with have already, to a very large extent, been engaged with by practitioners in the fields of evidence-based medicine and, more generically, evidence-based practice.
     So how do we go about standing on the shoulders of giants, particularly with reference to the role of a research-lead in a school and how research-leads could go about performing their task?  Both Alex Quigley and Carl Hendrick did an admirable job in trying to map out the territory for the research-lead and some of the roles and tasks that need to be performed.  For me, a fantastic starting point for building upon this discussion is Barends, Rousseau and Briner's (2014) pamphlet on the basic principles of evidence-based management.  Barends et al* define evidence-based practice as the making of decisions through the conscientious, explicit and judicious use of the best available evidence from multiple sources, in order to increase the likelihood of a favourable outcome.

From the point of view of a school-research lead this definition has a number of implications.
  1. Being a research-lead involves more than just 'academic' research, it involves drawing upon a wider range of evidence - including both individual school data and the views of stakeholders, such as staff, pupils, parents and the wider community.
  2. This approach can be largely independent of work with higher education institutions.  It is not necessary to partner with an HEI to be an evidence-based practitioner or an evidence-based school. It might help, specifically when supporting staff to develop as evidence-based practitioners, but it's not absolutely necessary.
  3. At the heart of this process is being able to translate school issues or challenges into a well-formulated and answerable problem, and evidence-based medicine has a number of tools which can help with this process.
      So what does this mean for the research lead's role in a school? Well, a highly relevant place to start is to look at those leadership capabilities which appear to have the biggest impact on student - and, in this case, teacher - learning.  Given that we are seeking to develop processes which are 'research' led, there is no better place to start than the work of Viviane Robinson on Student-Centred Leadership, which is based on a best-evidence synthesis of the impact of leadership on student achievement, and which identifies three key leadership capabilities: applying relevant knowledge; solving complex problems; and building relational trust.  Below I have begun to tentatively examine what this might mean for research leads, so here goes:
  • Applying relevant knowledge - this is about using knowledge about effective research in order to help colleagues become better practitioners, for example, helping them to ask well-formulated and answerable questions.
  • Solving complex problems - research is a tricky and slippery topic and it's about discerning the specific research challenges faced by a school and then crafting research processes in order to address those challenges.
  • Building relational trust - which is an absolute pre-requisite for such work.  Research leads need to gain the trust of colleagues, otherwise it will be virtually impossible to develop a research agenda in schools.  Undertaking this role requires a specific set of skills which will allow both the research lead and practitioner to engage in effective evidence-based practice.
In order to help with the development of the role of the research-leads the next few posts in this series will draw upon material from both the evidence-based medicine and the broader field of evidence-based practice to look at the following:
  • How to get better at asking well-formulated questions.
  • How to write a short paper on a Critically Appraised Topic.
  • Some of the current challenges faced by evidence-based practitioners, particularly in the field of medicine.
*This definition is partly adapted from the Sicily statement of evidence-based practice: Dawes, M., Summerskill, W., Glasziou, P., Cartabellotta, A., Martin, J., Hopayian, K., Porzsolt, F., Burls, A. and Osborne, J. (2005). Sicily statement on evidence-based practice. BMC Medical Education, 5(1).
Barends, E., Rousseau, D. M. and Briner, R. B. (2014). Evidence-Based Management: The Basic Principles. Centre for Evidence-Based Management, Amsterdam.
Robinson, V. (2011). Student-Centered Leadership. Jossey-Bass, San Francisco.

Tuesday, 9 December 2014

Dangerous Half Truths and Total Nonsense - Evidence Based Practice and two avoidable mistakes

A few weeks ago I wrote about how difficult it is to learn from others using casual benchmarking, and the implications this has for leadership teams who follow the Skills Commissioner's advice to visit other colleges.  In this post I continue to draw upon the work of Jeffrey Pfeffer and Robert I. Sutton, their 2006 book Hard Facts, Dangerous Half-Truths and Total Nonsense: Profiting from Evidence-Based Management, and some of the practices which are confused with evidence-based management.  In addition to casual benchmarking, Pfeffer and Sutton identify two other practices - doing what seems to have worked in the past and following deeply held yet unexamined ideologies - which can lead both to poor decisions and to detrimental outcomes for college stakeholders, i.e. staff, students, parents, employers and the community.  These two practices are now explored in more detail.

Doing what seems to have worked in the past
     As colleagues move from one role to another, be it within a school or college, the temptation is to use that experience and apply it to a new setting.  Pfeffer and Sutton argue that problems arise when the new situation is different from the past, and when what we have learned in the past may actually have been wrong.  There is a huge temptation to import management practices and ideas from one setting to another without sufficient thought about context, either old or new.  Pfeffer and Sutton identify three simple questions that can assist in avoiding negative outcomes arising from inappropriately repeating past strategies and innovations.
  • Is the practice that I am trying to import into a new setting - say lesson planning, peer tutoring, e-learning, pastoral arrangements or information and guidance arrangements - directly linked with previous success?  Was the success which is now being replicated achieved despite the innovation which was adopted, rather than because of it?
  • Is the new - department, school, college or other setting - so similar to past situations that what worked in the past will work in the new setting?  For example, getting the balance right between delegation and control may be different between an outstanding school/college and one that is on a journey of improvement.
  • Why was past practice - say graded lesson observations - effective? What was the mechanism or theory which explains the impact of your previous actions?  If you don't know the theory or mechanism that underpinned past success, then it will be difficult to know what will work this time.  (Adapted from Pfeffer and Sutton, p. 9)
Following deeply held yet unexamined ideologies 
     The third basis for flawed decision-making is deeply held beliefs or biases, which may cause school and college leaders to adopt particular practices even though there may be little or no evidence to support the introduction of the proposed practice or innovation. Pfeffer and Sutton suggest a final three questions to address this issue.
  • Is my preference for a particular leadership style or practice because it fits in with my beliefs about people?  For example, do I support the introduction of PRP for teachers because of my belief in the motivational benefits of higher salaries and payment by results?
  • Do I require the same level of evidence or supporting data if the issue at hand is something about which I am already convinced of the answer?
  • Am I allowing my preferences to reduce my willingness to obtain and examine data and evidence which may be relevant to the decision?  Do I tend to read the educational bloggers/tweeters who agree with me, or do I search out views that contrast with my own?  Am I willing to look at research which challenges my preconceptions, for example Burgess et al. (2013) and the small but positive effect of league tables? (Adapted from Pfeffer and Sutton, p. 12)
To conclude:
     If an evidence-based approach is taken seriously, it could potentially change the way in which every school/college leader thinks about educational leadership and management, and subsequently change the way they behave as school/college leaders.  As Pfeffer and Sutton state:
     First and foremost, (evidence-based management) is a way of seeing the world and thinking about the craft of management.  Evidence-based management proceeds from the premise that using better, deeper logic and employing facts to the extent possible permits leaders to do their jobs better. Evidence-based management is based on the belief that facing the hard facts about what works and what doesn't, understanding the dangerous half-truths that constitute so much conventional wisdom about management, and rejecting the total nonsense that too often passes for sound advice will help organizations perform better. (p. 13)

Burgess, S., Wilson, D. and Worth, J. (2013). A natural experiment in school accountability: the impact of school performance information on pupil progress. Journal of Public Economics, 106, 57-67.
Pfeffer, J. and Sutton, R. (2006). Hard Facts, Dangerous Half-Truths and Total Nonsense: Profiting from Evidence-Based Management. Harvard Business Review Press.

Monday, 1 December 2014

What Works? Evidence for decision-makers - it's not quite that simple.

This week saw the publication of the What Works Network's What Works? Evidence for Decision-Makers report, which in the context of education identified a number of practices, for example peer tutoring and small-group tutoring, which appear to work.  On the other hand, the report also identified a number of practices which appear not to work, for example giving pupils financial incentives to pass GCSEs or pupils repeating a year.  However, whenever such reports come out identifying what supposedly works, there can be a tendency to confuse 'evidence' with evidence-based practice.  In particular, there is a danger that published research evidence is used to reduce the exercise of professional judgment by practitioners.  Indeed, one of the great myths associated with evidence-based practice is that it is the same as research-determined or research-driven practice.  As such, it is potentially useful for educationalists interested in evidence-based education to gain a far greater understanding of what is meant by evidence-based practice.
      Barends, Rousseau and Briner's (2014) recent pamphlet on the basic principles of evidence-based management defines evidence-based practice as the making of decisions through the conscientious, explicit and judicious use of the best available evidence from multiple sources by:
  • Asking: translating a practical issue or problem into an answerable question
  • Acquiring: systematically searching for and retrieving the evidence
  • Appraising: critically judging the trustworthiness and relevance of the evidence
  • Aggregating: weighing and pulling together the evidence
  • Applying: incorporating the evidence into the decision-making process
  • Assessing: evaluating the outcome of the decision taken
to increase the likelihood of a favourable outcome (p2)*.
In undertaking this task, information and evidence is sought from four sources:
  1. Scientific evidence - findings from published scientific research.
  2. Organisational evidence - data, facts and figures gathered from the organisation.
  3. Experiential evidence - the professional experience and judgment of practitioners.
  4. Stakeholder evidence - the values and concerns of people who may be affected by the decision.
In other words, evidence based practice involves multiple sources of evidence and the exercise of sound judgement.
     Furthermore, drawing upon Dewey's practical epistemology, Biesta argues that the role of research is to provide us with insight into what worked in the past, rather than what intervention will work in the future.  As such, all that evidence can do is provide us with a framework for more intelligent problem-solving.  In other words, evidence cannot give you the answer on how to proceed in any particular situation; rather, it can enhance the processes associated with deliberative problem-solving and decision-making.
     So what are the key messages which emerge from this discussion?  For me at least there would appear to be four key points:

  • Research evidence is not the only fruit - in other words, when engaging in evidence-based practice, research evidence is just one of multiple sources of evidence.
  • Even where there is good research evidence, that does not replace the role of judgment in making decisions about how to proceed.
  • All that research evidence can do is tell you what worked in the past, it can't tell you what will work in your setting or in the same setting at some time in the future.
  • Research evidence provides a starting point for intelligent problem-solving.
Barends, E., Rousseau, D. M. and Briner, R. B. (2014). Evidence-Based Management: The Basic Principles. Centre for Evidence-Based Management, Amsterdam.
Biesta, G. (2007). Why 'What Works' Won't Work: Evidence-based practice and the democratic deficit in educational research. Educational Theory, 57(1), 1-22.

Tuesday, 25 November 2014

How past experience can get in the way of good teaching and learning.

This post has been inspired by Oliver Burkeman's recent column, 'Don't get caught in the monkey trap'.  In this column, Burkeman refers to the Einstellung effect, and how previous experience can lead us to being 'blind' to new and better ways of doing things.  Bilalic, McLeod and Gobet demonstrate how experts, once they have found a way to proceed, may find that this prevents a better option from being identified.  This process may lead to a wide range of cognitive biases - both in experts and the less expert - such as confirmation bias, whereby there is a tendency to ignore evidence which does not fit with current preconceptions.

So what are the implications of the Einstellung effect for teachers and those who lead educational institutions?  Well, for me there would appear to be several.
  • Experienced teaching professionals should constantly challenge themselves by acknowledging that previous experience does not always provide the best answer to any given situation.
  • Leaders within schools and colleges, who are likely to be some of the most experienced colleagues within a school or college, should lead by example and demonstrate the necessity for CPD which specifically challenges previously held assumptions and experiences.
  • Recruitment processes for new positions may overly value experienced candidates over less experienced candidates.
  • The perceptions and viewpoints of less experienced members of staff are as valuable as more senior staffroom 'sages'.
  • Tasks or project groups should include a wide range of participants, with differing types of experiences.
  • Colleagues should constantly seek to be placed in new, uncomfortable though 'safe' situations to allow for the development of a greater repertoire of teaching, learning and management strategies.
And the implications for educational bloggers and tweeters?  Many of the posts/tweets within the blogosphere/twittersphere argue for experienced teaching professionals to be trusted to exercise their judgment.  Unfortunately, the Einstellung effect would suggest that 'trusting' in experience may not necessarily bring about the best outcomes for pupils and learners.  However, and let's make this absolutely clear, I'm not arguing for reduced levels of trust, but rather that an essential element of gaining, using and maintaining such trust is to openly acknowledge the weaknesses of our expertise, and to demonstrate that we are not only constantly challenging others but, just as importantly, constantly challenging ourselves.

Tuesday, 18 November 2014

Further Cuts in the FE budget and Creating a Learning Society - Does it make any economic sense?

This week has seen concern expressed by the Association of Colleges about the FE sector having to find further budget cuts, with reports that another £48 billion of cuts in public expenditure is being planned.  In particular, there are fears that the FE sector will once again endure more than its fair share of the cuts, given the past ring-fencing of the schools budget.  However, in this post I will put forward the argument that a demand for 'fairness' is not the way to win this argument; rather, 'hard-nosed' economics is what is required.  In making this argument I will draw upon the implications for the further education sector of Joseph Stiglitz and Bruce Greenwald's new book Creating a Learning Society: A New Approach to Growth.
         In their opening chapter Stiglitz and Greenwald state:
    .... that most of the increases in the standard of living are, as Solow suggests, a result of increases in productivity - learning how to do things better.  And if it is true that productivity is the result of learning and that productivity increases (learning) are endogenous, then a focal point of policy ought to be increasing learning within the economy: that is, increasing the ability and incentives to learn, and learning how to learn, and then closing the knowledge gaps that separate the most productive firms in the economy from the rest.  Therefore creating a learning society should be one of the major objectives of economic policy.  If a learning society is created, a more productive society will emerge and standards of living will increase. (p. 5)

         Of particular relevance to further education, Stiglitz and Greenwald argue that learning from one firm or sub-sector spills over to other firms and sub-sectors within the industrial sector, through labour mobility and changes in the use of technology.  However, knowledge is a public good, with others benefitting from such knowledge at zero marginal cost.  Furthermore, it is challenging if not impossible to prevent other firms or individuals gaining from the learning and knowledge which has taken place within a firm.  Stiglitz and Greenwald subsequently demonstrate that markets are not efficient in either the production or the dissemination of learning and knowledge.  As a result, if there are spillovers from one firm, sub-sector or industry to another, then there will be underproduction of knowledge and learning.  This occurs because when firms are making decisions about investment in learning capacity and capabilities, they do not take into account the benefits accrued by other firms or individuals.

         Furthermore, Stiglitz and Greenwald recognise the critical importance of cognitive frames, in that individuals and firms have to adopt a cognitive frame - a mindset which is conducive to learning.  That entails the belief that change is possible and important, and can be shaped and advanced by deliberative activities.  Stiglitz and Greenwald argue such cognitive frames have implications for a number of issues, including: whether we learn; being 'stuck' with particular cognitive frames; and what we learn.

         So what does this mean for the FE sector? My initial thoughts would suggest the following.  First, policymakers need to recognise the essential role that further education plays in the long-term health of the economy.  Recent reductions in funding for adult education are not consistent with creating the conditions by which a developed economy grows.  Second, an education and training market led by employers is likely to lead to an under-investment in knowledge and learning.  Third, FE colleges have a critical role to play in the local economy in maximising the externalities created by investment in education.  In particular, colleges are incredibly well placed to leverage the spillovers that are generated in local economies.  These spillovers can be enhanced by ensuring the use of knowledgeable and skilled visiting lecturers who are technical specialists in their field, or it could be as simple as individual learners sharing their experiences with others.  In doing so, colleges can act as an essential hub of economic growth by making connections between firms and ensuring individuals have access to appropriate education and training.  Finally, individual lecturers have a responsibility to focus on developing learner mindsets which increase learners' capacity as life-long learners.

         I hope that I have begun to demonstrate that the compelling argument against future cuts in expenditure is economic: colleges are hubs for knowledge and learning, bringing about future economic growth.  It is absolutely right to argue for fairness of treatment in relation to schools and universities, though in the current political climate we need to be making a compelling and demonstrable economic case for additional investment in further education and training.  I hope this post makes some small contribution to making that case.

Tuesday, 11 November 2014

'What Makes Great Teaching' - is it time for a critical re-appraisal?

     Over the last two weeks we have seen both the publication of the Sutton Trust's report on 'What Makes Great Teaching' and the Sutton Trust/Gates Foundation international teaching summit held in Washington DC.  Indeed, there have been a number of blog posts from attendees reporting on the summit, for example John Tomsett and Tom Sherrington, with Mr O'Callaghan providing an invaluable summary of relevant news and blogposts.  Given this increased focus and attention on evidence-based education, it seems reasonable to pause and reflect, as the notion of evidence-based education is not an uncontested concept.  As one of the key aspects of evidence-based practice is to challenge one's own cognitive biases and look at competing arguments and perspectives, it seemed sensible to me, a self-confessed advocate of evidence-based education, to consider some alternative viewpoints.  In doing so, I will not be arguing for or against any particular approach to teaching or pedagogy.  Instead, I will be trying to gain a greater understanding of some of the underpinning assumptions of 'What Makes Great Teaching', so that it can hopefully become an even more useful intellectual stimulus for educational research and practice.  Drawing upon the work of Biesta (2007), the rest of this post will cover the following ground:
  • general objections to evidence-based education;
  • the non-causal nature of education;
  • educational research and intelligent problem-solving;
  • distinguishing between the cultural and technical purposes of education;
  • the implications of the above for the dissemination, implementation and follow-up to 'What Makes Great Teaching'.
     Biesta states that opponents of evidence-based education have raised a number of doubts about the appropriateness of an evidence-based approach to educational practice, including: one, evidence-based education being part of the new public management agenda; two, evidence-based medicine being an inappropriate template for the development of evidence-based education; and three, issues around the nature of evidence within the social sciences.
      Biesta goes on to argue that evidence-based education implies a particular model of professional action, i.e. that an effective intervention can bring about the required outcome. Biesta argues that education cannot be understood in terms of cause and effect due to the non-causal nature of educational practice, with the 'why, what and how' of education being intimately bound together.  As such, an essential component of being an educational practitioner is the exercise of judgment in what is 'educationally desirable' (p. 21).
     Secondly, drawing upon Dewey's practical epistemology, Biesta argues that the role of research is to provide us with insight into what worked in the past, rather than what intervention will work in the future.  As such, all that evidence can do is provide us with a framework for more intelligent problem-solving.  In other words, evidence cannot give you the answer on how to proceed in any particular situation; rather, it can enhance the processes associated with deliberative problem-solving and decision-making.
         Thirdly, Biesta argues that evidence-based education has too great a focus on the technical processes of education - what works or does not work - rather than performing a more nuanced and profound cultural function.  Biesta argues that an open and democratic society should be informed by debate and discussion on the 'aims and ends' of our educational endeavours, whereas evidence-based education is currently too focused on the technical means.  
         So what are the implications of Biesta's stance for our reading of 'What Makes Great Teaching'? One could certainly read the report as sitting very clearly within a technocratic model of education, with the three simple questions which inform the report focusing on the what and how of education, with no consideration of the 'why'. As such, it could be argued that maximising the benefit of 'What Makes Great Teaching' requires a broader and far more wide-ranging discussion of the very purposes of education. Leaders and staff within schools and colleges, when discussing the implications of 'What Makes Great Teaching', should do so in a way which facilitates democratic discussion of the purposes and ends of both education and the school or college, reflecting upon how these technical recommendations can be implemented within the school or college in a way which is empowering for both staff and students.
         A key assumption of 'What Makes Great Teaching' is that educational reality involves clear relationships between cause and effect, with particular approaches to teaching being more effective in bringing about the desired student outcomes. I am not going to argue either for or against objectivist or subjectivist stances on epistemology and ontology; others are far better able to do so. That said, if we are to be evidence-based practitioners it is essential that we fully explore the underpinning assumptions of research, in this case assumptions of cause and effect, especially when the research corresponds with our existing perspectives.
        As for the practical implications of 'What Makes Great Teaching' for educational practice, even though the report identifies the components of great teaching, the frameworks which can 'capture it' and what could be done to develop improved professional learning, this does not, and should not, provide a set of recipes or checklists that will guarantee better teacher learning and student/pupil outcomes. This is certainly the case given the fast-changing world and the implications for pedagogy and learning of ubiquitous mobile technology (see Benedict Evans' recent presentation). 'What Makes Great Teaching' tells us what may have worked in the past; it may tell us what could work in the future, though not definitively. As such, 'What Makes Great Teaching' provides a valuable resource for practitioners - be they headteachers or teachers - to engage in reflection which will increase their capacity and capability to find 'solutions' which are fit for their particular settings.
        Finally, if reading 'What Makes Great Teaching' provides a prompt to read other reports and articles which take a different perspective on evidence-based education, so that as a result one has a greater breadth of understanding of the challenges of evidence-based education and its cultural implications, then the report may have provided both a technical and a cultural stimulus to ensuring pupils and students experience even greater teaching.

    Tuesday, 4 November 2014

    Sutton Trust Report and What Makes Great Teaching - Implications for the Further Education Sector

    Last week saw the publication of the Sutton Trust's report What Makes Great Teaching by Robert Coe, Cesare Aloisi, Steve Higgins and Lee Elliot Major, which created quite a furore. The report warned that many day-to-day classroom practices used by teachers can be both detrimental to learning and unsubstantiated by research, for example, using students' preferred learning styles as a means to determine how to present information. However, the main body of the report focuses on three simple questions: What makes 'great teaching'? What kinds of frameworks or tools could help us to capture it? How could this promote better learning? In this post, I will focus on 'what makes great teaching' and consider the implications for leadership in the further education sector.
         Coe et al define great teaching as:
     ... that which leads to improved student progress.  We define effective teaching as that which leads to improved student achievement using outcomes that matter to their future success. Defining effective teaching is not easy. The research keeps coming back to this critical point: student progress is the yardstick by which teacher quality should be assessed. Ultimately, for a judgement about whether teaching is effective, to be seen as trustworthy, it must be checked against the progress being made by students. (p2)
    Coe et al subsequently go on to identify the six components of great teaching as:
    • (Pedagogical) content knowledge (Strong evidence of impact on student outcomes)
    • Quality of instruction (Strong evidence of impact on student outcomes)
    • Classroom climate (Moderate evidence of impact on student outcomes)
    • Classroom management (Moderate evidence of impact on student outcomes)
    • Teacher beliefs (Some evidence of impact on student outcomes)
    • Professional behaviours (Some evidence of impact on student outcomes)
         So what does this mean for the leaders of further education colleges? Initial reflection on the impact of content knowledge and quality of instruction would suggest the following:
    • Leaders need to ensure that the 'Coe' report is disseminated and discussed within their own colleges and staffrooms. In doing so, leaders should create opportunities for individual and collective reflection in a way which encourages colleagues to look at the evidence on effective pedagogy, and which may challenge their existing preconceptions and practices.
    • As Samuel Arbesman argues, a large amount of what we know will be obsolete within a few years. This will require colleges to ensure there are substantive and well-resourced programmes of CPD which provide opportunities for staff to update their subject knowledge. This will require more than the odd bit of work-shadowing done at odd times of the year, but rather bespoke and structured programmes of subject knowledge updating which have a focus on improving student outcomes.
    • There is a need to ensure the further education sector is an attractive employment proposition for individuals with high levels of content knowledge, attracting individuals to choose further education rather than, as is so often the case, just 'falling' into FE.
    • Colleges need to invest in developing the 'craft' of teaching and have a CPD programme which focuses on teaching and learning, rather than the latest requirement for the purposes of compliance or a change in assessment regime. That is not to say the latter are unimportant; they are necessary, but not sufficient, for effective teaching and learning.
    • As Coe et al rightly identify, lesson observation schemes need to have a formative focus, not 'one-off' graded lessons used for the purposes of performance review and appraisal. Lesson observations need to be crafted as processes which support both individual and collective professional development. Imaginative approaches need to be adopted, such as unseen observations, lesson study and videoed observations, as advocated in Matt O'Leary's recent book Classroom Observation: A Guide to Effective Observation of Teaching and Learning.
    • Coe et al provide compelling evidence that lesson observation schemes are best suited to low-stakes developmental purposes, and even in these circumstances require observers trained in the use of a valid protocol. As most of the high-quality lesson observation protocols have been developed in the US, work needs to be undertaken to develop a high-quality protocol which is suitable for assessing the effectiveness of vocational pedagogy.
    Finally, as Gert Biesta and Nicholas Burbules argue in their 2003 book Pragmatism and Educational Research, effective education and pedagogy is about more than being technically skilful as a teacher; it requires a deep and profound exploration of the why, what and how of education - and any implementation of this report in colleges needs to be seen within the context of the fundamental purposes of further education. As you can probably tell, I have only just begun to scratch the surface of the implications of Coe et al's report for the further education sector. Nevertheless, I do believe it is a report worth reading, especially given the current emphasis on teaching and learning within the further education sector. I hope this post helps start that discussion.

    Tuesday, 28 October 2014

    Teaching, learning and assessment and the grade for overall effectiveness - a case of mistaken identity

    This post highlights the flaws of using the aspect grade for teaching, learning and assessment as a limiting factor for the grade awarded for overall effectiveness, and with it the implications for the reliability and validity of inspection outcomes for general further education colleges.
          The OfSTED Handbook for the Inspection of Further Education and Skills (http://www.ofsted.gov.uk/resources/handbook-for-inspection-of-further-education-and-skills-september-2012) makes it clear that the aspect grade for the quality of teaching, learning and assessment is a limiting factor for the overall grade for effectiveness: for example, to be awarded an outstanding grade for overall effectiveness, the grade for the quality of teaching, learning and assessment must also be outstanding.
         Using a similar approach to that undertaken by Waldegrave and Simons (2014) to analyse the relationship between grades awarded in school inspections, the following table summarises the relationship between the different inspection grades awarded during 125 general further education college inspections which took place between January 2013 and June 2014.
    [Table: for each aspect grade, the agreement between the overall grade for effectiveness and the grades for outcomes for learners; the quality of teaching, learning and assessment; and the effectiveness of leadership and management.]

    *Only 4 colleges were awarded a grade 4, which has an impact on the associated correlations.

         It can be clearly seen from the data that the teaching, learning and assessment aspect grade corresponds most strongly with the overall grade for effectiveness, which is not surprising given the guidance in the handbook.   Out of 125 GFE college inspections undertaken in the specified 18 month period, there was only 1 occasion when there was a difference between the two grades, and on this occasion the overall grade for effectiveness was lower than the grade for teaching, learning and assessment. 
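To make the notion of 'agreement between grades' concrete, the comparison underlying the table can be reproduced in a few lines of code. The sketch below uses entirely hypothetical grade pairs, not the actual inspection data analysed above, to show how a cross-tabulation and an exact-agreement rate are computed.

```python
from collections import Counter

def agreement_table(overall, aspect):
    """Cross-tabulate overall-effectiveness grades against an aspect
    grade and report the proportion of inspections where they match.
    Grades run from 1 (outstanding) to 4 (inadequate)."""
    assert len(overall) == len(aspect)
    xtab = Counter(zip(overall, aspect))  # (overall, aspect) -> count
    matches = sum(n for (o, a), n in xtab.items() if o == a)
    return xtab, matches / len(overall)

# Hypothetical grade pairs for illustration only
overall = [2, 2, 3, 1, 2, 3, 3, 2, 4, 2]
tla     = [2, 2, 3, 1, 2, 3, 2, 2, 4, 2]
xtab, rate = agreement_table(overall, tla)
print(f"exact agreement: {rate:.0%}")  # prints: exact agreement: 90%
```

On the real data reported above the equivalent calculation would give 124 matches out of 125, since only one inspection saw the two grades differ.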
         However, the direct relationship between the grade for overall effectiveness and the quality of teaching, learning and assessment is not without its problems. In the further education sector, unlike schools, individual lesson grades are still used by OfSTED inspectors to summarise judgements about the quality of teaching, learning and assessment within a lesson. Both Matt O'Leary and Rob Coe identify serious challenges with the use of observations in the grading of teaching and learning. Waldegrave and Simons (2014) cite Coe's synthesis of a number of research studies, which raises serious questions about the validity and reliability of lesson observation grades. When comparing the value-added progress made by students with a lesson observation grade (validity), Coe states that in the best case there will be only 49% agreement between the two, and in the worst case only 37% agreement. As for the reliability of grades, Coe's synthesis suggests that in the best case there will be 61% agreement, and in the worst case only 45% agreement, between two observers.
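It is easy to underestimate how little agreement follows from a noisy observation process. As a purely illustrative sketch - the latent-quality model, cut points and noise levels are my own assumptions, not Coe's figures - the simulation below gives two hypothetical observers a noisy view of the same underlying lesson quality and counts how often they award the same 1-4 grade:

```python
import random

def simulate_agreement(noise_sd, n=50_000, seed=42):
    """Estimate how often two independent observers award the same
    1-4 grade when each sees the same 'true' lesson quality plus
    their own observation error (hypothetical model, illustrative only)."""
    rng = random.Random(seed)
    cuts = [-0.67, 0.0, 0.67]  # arbitrary cut points on the latent scale

    def grade(latent):
        return sum(latent > c for c in cuts) + 1  # grade 1..4

    agree = 0
    for _ in range(n):
        true_quality = rng.gauss(0, 1)
        a = grade(true_quality + rng.gauss(0, noise_sd))
        b = grade(true_quality + rng.gauss(0, noise_sd))
        agree += (a == b)
    return agree / n

for sd in (0.2, 1.0, 1.5):
    print(f"observer noise sd={sd}: agreement ~ {simulate_agreement(sd):.0%}")
```

Even modest observation error pushes agreement well below 100%, which is consistent with the direction of the figures Coe reports, though the simulated numbers themselves depend entirely on the assumed model.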
         As such, it would seem that using the teaching, learning and assessment grade as the driver for the grade for overall effectiveness is not consistent with the current best available evidence, and indicates that systems of accountability within the further education sector have yet to be fully informed by the evidence-based practice movement. Accordingly, in order to bring about more effective and more meaningful accountability processes, the following are worthy of consideration:
    • the direct relationship between the teaching, learning and assessment aspect grade and the overall grade for effectiveness should be replaced by a holistic judgment.
    • the design of 'inspection regimes' to be subject to open and transparent college effectiveness/college improvement 'state of the art' reviews to ensure processes are likely to generate valid and reliable judgments.
    • as part of the process of self-assessment colleges reduce, if not eradicate,  their over-reliance on lesson observation grade profiles in making judgments about teaching, learning and assessment.

    Coe, R. (2014). Lesson Observation: It's harder than you think, TeachFirst TDT Meeting, 13th January 2014
    O’Leary M (Forthcoming in 2015) ‘Measurement as an obstacle to improvement: moving beyond the limitations of graded lesson observations’, chapter in Gregson, M. & Hillier, Y. (eds) Reflective Teaching in Further, Adult and Vocational Education, London: Bloomsbury.
    O’Leary, M. (2014). Lesson observation in England’s Further Education colleges: why isn’t it working and what needs to change? Paper presented at the Research in Post-Compulsory Education Inaugural International Conference: 11th –13th July 2014, Harris Manchester College, Oxford.
    Waldegrave, H., and Simons, J. (2014). Watching the Watchmen: The future of school inspections in England, Policy Exchange, London

    Tuesday, 21 October 2014

    The Skills Commissioner and learning from others - it's not as easy as you think

    Last Friday saw the publication of the FE Skills Commissioner's latest letter to the sector.  In this letter Dr David Collins addresses the issue of quality improvement and the variations in quality of leadership and management seen across the sector and provides some quite straightforward advice.   

    If a college is having a problem my advice is simple - find someone who is performing well in that area and learn from them. 

    However, things are not quite that simple, for as Jeffrey Pfeffer and Robert I. Sutton argue in their 2006 book Hard Facts, Dangerous Half-Truths and Total Nonsense: Profiting from Evidence-Based Management, there are three 'dangerous' decision practices which can often cause both organisations and individuals harm, one of which is casual benchmarking. Pfeffer and Sutton argue there is nothing inherently wrong with trying to learn from others. However, for this type of benchmarking to be of use, the underlying logic of what worked and why it worked needs to be unpacked and understood. Pfeffer and Sutton identify three questions which must be answered if learning from others is to be beneficial:
    • Is the success of a particular college down to the approach which you may seek to copy, or is it merely a coincidence? Has a particular leadership style made no difference to student outcomes, even though student outcomes appear to have improved? Do other factors, independent of leadership style, explain the improvement in student outcomes?
    • Why is a particular approach to, for example, lesson observation linked to performance improvement? How has this approach led to sustained improvements in the level of teacher performance and subsequently improved outcomes for students?
    • Are there negative unintended consequences of high levels of compliance in well-performing colleges? How are these consequences being managed, and are these consequences and mitigating strategies evident in any benchmarking activity? (Adapted from Pfeffer and Sutton, p8)
    Furthermore, not only is the FE Commissioner's advice not quite as simple as first thought, it may also be wrong. Rosenzweig (2006) identifies a range of errors of logic, or flawed thinking, which distort our understanding of company (or college) performance and which are implicit within the Skills Commissioner's letter and his 10 Cs. Rosenzweig identifies the most common delusion as the Halo Effect, which occurs when an observer's overall impression of a person, company, brand or product - in this case a college - influences the observer's views and thoughts about that entity's character or properties. When a college's retention, achievement and success rates and its operating surplus improve, people (inspectors or other significant stakeholders) may conclude that these arise from brilliant leadership and a coherent strategy, or from a strong college culture with high levels of compliance. If and when performance deteriorates - success rates or positions in league tables fall - observers conclude it is the result of weak leadership and management (the Principal), and that the college was complacent or coasting. In reality, there may have been little or no substantive change: the college's headline performance creates a halo that shapes the way judgements are made about outcomes for learners, teaching, learning and assessment, and leadership and management.

    To conclude, I have argued that learning from others is not quite as simple as going to visit another college. I have also argued that learning from others requires awareness of the flawed thinking, faulty logic and cognitive biases which can mean that the wrong things are learnt. In future posts, I will argue the case for evidence-based educational leadership which is relevant to the further education sector.

    Pfeffer, J and Sutton, R., (2006) Hard Facts Dangerous Half-Truths and Total Nonsense : Profiting from evidence-based management. Harvard Business Review Press.

    Rosenzweig, P., (2006). The Halo Effect … and the eight other business delusions that deceive managers, Free Press, London

    Wednesday, 15 October 2014

    It seemed a good idea at the time BUT we really should have known better!

    This academic year will have seen the introduction of a wide range of innovations in schools and colleges, many of which will be showing the first fruits of success. On the other hand, there will be innovations which are not working and have little prospect of success, having been introduced because the originator(s) had fallen in love with the idea. These unfortunate failures highlight one of the major challenges of evidence-based leadership and management: to develop processes which reduce the errors generated by the cognitive and information-processing limits that make decision-makers prone to biases, and which subsequently lead to negative outcomes.

    In this post I will be drawing upon the work of Kahneman, Lovallo and Sibony (2011) on how to find dangerous biases before they lead to poor decision-making. Developing the skills to appraise and critically judge the trustworthiness and relevance of multiple sources of evidence is a critical element of evidence-based practice. Kahneman et al identify a number of specific biases, questions and actions which can be used to improve the rigour of decision-making. These have been summarised and adapted in the following table.

    Avoiding Biases and Making Better Decisions - A Checklist - Summarised and adapted from Kahneman, Lovallo and Sibony (2011)

    Preliminary questions (what to check or confirm for):
    • Self-interested biases: Is there any reason to suspect that the team or individuals making the recommendation are making errors motivated by self-interest? Review the proposal with care.
    • Affect heuristic: Has the team fallen in love with its proposals? Apply the checklist.
    • Groupthink: Were there dissenting opinions, and were these opinions fully explored? Discreetly obtain dissenting views.

    Challenge questions:
    • Saliency bias: Could the diagnosis be overly influenced by an analogy to a memorable success? Consider other analogies and how similar they are to the current situation.
    • Confirmation bias: Are credible alternatives included with the recommendation? Request that additional options be provided.
    • Availability bias: If this decision were to be made again in a year's time, what information would you want, and can you get more of it now? Develop checklists of the information available for different types of decision.
    • Anchoring bias: Do you know where the numbers came from? Are there unsubstantiated numbers, or have they been extrapolated from historical data? Check the figures against other models and consider alternative benchmarks for the analysis.
    • Halo effect: Is the team assuming that a person, organisation or innovation which is successful in one area will be just as successful in another? Eliminate false inferences and seek alternative examples.

    Questions about the proposal itself:
    • Overconfidence, planning fallacy, optimistic biases, competition neglect: Is the base case overly optimistic? Check that outside views have been taken into account.
    • Disaster neglect: Is the worst case bad enough? Conduct a pre-mortem to work out what could go wrong.
    • Loss aversion: Is the recommending team overly cautious? Realign incentives to share responsibility for the risk, or remove the risk.

    How could this check-list be used to improve decision-making within educational settings?

    • Ensuring the check-list is applied before the action is taken which commits the school or college to the action being proposed.
    • Ensuring the decision checklist is applied by a member or members of staff who are sufficiently senior within the school or college, whilst not being part of the group making the recommendation. Separation of recommenders from decision-makers is desirable, and has implications for governance and leadership.
    • Ensuring the check-list is used in whole and not in parts and is not 'cherry-picked' to legitimate a decision.
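The 'in whole and not in parts' point can even be enforced mechanically. The sketch below is a hypothetical illustration (the names, and the truncated list of items, are my own, adapted from the table above): it refuses to sign off a decision unless every checklist item has been answered, so the checklist cannot be cherry-picked to legitimate a decision.

```python
# Hypothetical helper: enforce that a decision checklist is applied
# in whole, never cherry-picked (names and structure are assumptions).
CHECKLIST = [
    "Self-interested biases: any errors motivated by self-interest?",
    "Affect heuristic: has the team fallen in love with its proposal?",
    "Groupthink: were dissenting opinions fully explored?",
    # ...remaining Kahneman, Lovallo and Sibony items elided for brevity
]

def review_decision(answers):
    """answers maps each checklist item to the reviewer's notes.
    Raises ValueError if any item is skipped or left blank, so the
    checklist cannot be applied 'in parts'."""
    missing = [item for item in CHECKLIST if not answers.get(item)]
    if missing:
        raise ValueError(f"checklist incomplete: {len(missing)} item(s) unanswered")
    return True

# Usage: a complete set of answers passes; a partial one is rejected.
notes = {item: "reviewed - no concerns" for item in CHECKLIST}
assert review_decision(notes)
```

The design choice worth noting is that the reviewer records a note against every item, rather than ticking a subset, which creates an audit trail for the governance separation discussed above.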
    If colleagues are able to adopt the above process then it is likely to increase the chances of success in the use of evidence-based practice. As we approach the end of the first half-term of the academic year, I wonder how many schools, colleges, teachers, lecturers and, most importantly, students could have avoided the negative impacts of poorly thought-out decisions if the above checklist had been applied early on as part of the decision-making process.


    Kahneman, D., Lovallo, D and Sibony, O.  (2011) Before you make that big decision ... Harvard Business Review, June 2011