Performance-related pay
Inconclusive evidence provides cause for concern
A tool kit is promised
I make no apologies for returning to this subject so often, not least because schools are expected ‘to develop systematic and transparent arrangements for both appraisal and pay.’
Performance pay, or performance-related pay, is an attempt to link a teacher’s wages or bonus payments directly to their performance in the classroom.
A distinction can be drawn between progression-related awards, where performance leads to a higher salary, and payment by results, where teachers get a bonus for higher test scores. In the USA it is sometimes referred to as ‘merit pay’ and, thanks to federal government incentives through the Teacher Incentive Fund (TIF), has been increasingly used by state governments.

We have highlighted the fact that there is no consensus among academics as to how you accurately, reliably and fairly measure value added. Value-added methods refer to efforts to estimate the relative contributions of specific teachers, schools or programmes to pupil test performance. But, and this is important, there is no one dominant value-added measure. Indeed, no single value-added approach (or any test-based indicator, for that matter) addresses all the challenges of identifying effective or ineffective schools or teachers. Each model has shortcomings, there is no consensus on the best approaches, and little work has been done on synthesising the best aspects of each. If there is a consensus among those closely familiar with these value-added measurement schemes, it is that they do not provide, and are unlikely to provide any time soon, a valid basis for decisions about the quality of teaching, such as those involved in performance-related pay.

Secondly, the 30 or so states that have introduced some form of evaluation system linked to merit pay have produced a variety of systems, and there is no settled consensus on which is the fairest, the most reliable or the most cost-effective. The most challenging problem remains how to measure student growth, or learning, for the vast majority of teachers who do not teach in tested subjects or grades.
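To make the idea concrete, here is a minimal sketch of what a simple value-added estimate looks like in practice: a regression of pupils’ current scores on their prior scores plus a dummy variable for each teacher. All the data below are invented for illustration (the teacher names, effect sizes and noise levels are assumptions, not taken from any real scheme), and this is the crudest possible model, not any particular state’s VAM.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cohort: 3 teachers, 30 pupils each (all numbers invented).
n_per = 30
teachers = np.repeat([0, 1, 2], n_per)
prior = rng.normal(50, 10, size=teachers.size)       # prior-year test score
true_effect = np.array([0.0, 2.0, -1.0])             # assumed "true" teacher effects
current = (5 + 0.9 * prior
           + true_effect[teachers]
           + rng.normal(0, 8, size=teachers.size))   # large pupil-level noise

# Design matrix: intercept, prior score, dummies for teachers 1 and 2
# (teacher 0 is the reference category).
X = np.column_stack([
    np.ones(teachers.size),
    prior,
    (teachers == 1).astype(float),
    (teachers == 2).astype(float),
])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)

# beta[2] and beta[3] are the estimated teacher effects relative to teacher 0.
print(beta[2:])
```

Even in this toy setting, with only 30 pupils per teacher and realistic noise, the estimated teacher effects wobble noticeably from the assumed true values, which is exactly the fragility the article describes.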
The best that can really be said about it is that the evidence is ‘not conclusive’. Our own Education Endowment Foundation (EEF) has looked at the evidence (that’s its role) and it concludes:
‘Payment by results has been tried on a number of occasions, however the evidence of impact on student learning does not support the approach. The UK evidence offers a cautious endorsement of approaches which seek to reward teachers in order to benefit disadvantaged students by recognising teachers’ professional skills and expertise. However, approaches which simply assume that incentives will make teachers work harder do not appear to be well supported.’
It continues: ‘As the evaluation of a number of merit pay schemes in the USA have been unable to find a clear link with student learning outcomes, it would not seem like a good investment without further study. Whilst teacher quality is an important aspect of education, it may be more effective to recruit and retain effective teachers, rather than look for improvement based on financial reward.’
If you want an excellent summary of Value Added measures, the different types of evaluation, performance and reward systems, the issues and the evidence, take a close look at this Australian Report:
Lawrence Ingvarson, Elizabeth Kleinhenz and Jenny Wilkinson, Research on Performance Pay for Teachers, Teaching and Leadership Research Program, Australian Council for Educational Research (2007)
The STRB has recommended that the DfE develops guidance or a toolkit to help schools develop systematic and transparent local approaches to pay progression. Subject to consultees’ views, the Government proposes to accept this recommendation. (Thank goodness for that!)
A report for the US Department of Education, ‘Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains’ (July 2011), found that there is ‘evidence that value-added estimates for teacher-level analyses are subject to a considerable degree of random error when based on the amount of data that are typically used in practice for estimation.’ It also said, and this is crucial, that evidence suggests ‘that more than 90 percent of the variation in student gain scores is due to the variation in student-level factors that are not under control of the teacher’.
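The force of that 90-per-cent figure is easy to see with a small simulation. The sketch below assumes, purely for illustration, a variance split in the spirit of the report’s finding: gain-score variance of 9 units at the student level against 1 unit at the teacher level, with classes of 25. The class sizes and variances are my assumptions, not figures from the report.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed (illustrative) variance split: ~90% student-level, ~10% teacher-level.
n_teachers, class_size = 200, 25
teacher_effect = rng.normal(0, np.sqrt(1.0), n_teachers)               # variance 1
student_noise = rng.normal(0, np.sqrt(9.0), (n_teachers, class_size))  # variance 9

# Each pupil's gain = teacher effect + student-level noise.
gains = teacher_effect[:, None] + student_noise

# A crude teacher "score": the class-average gain.
estimated = gains.mean(axis=1)

# How well does the estimated score track the true teacher effect?
r = np.corrcoef(estimated, teacher_effect)[0, 1]
print(round(r, 2))
```

Under these assumptions the class-average score correlates imperfectly with the true effect, so a nontrivial share of teachers would be ranked in the wrong part of the distribution in any given year; shrinking the class, or the teacher-level share of variance, makes the problem worse.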
What about the 360-degree method of assessment? A 360° appraisal uses feedback from a variety of people; in a school this would take the form of self-reflection plus peer, line-manager, student and even parental feedback.