EVIDENCE-LED POLICY AND PRACTICE?


Good idea, but what about applying it?


At the level of rhetoric at least, the evidence-based policy movement is about challenging old strategies for policy formation, based as they were either on ideology or on the preferences of elites. It purports to stand outside the political process and the subjective whims of officials and politicians, giving policy advice grounded in rational evidence rather than ideology or sectional and partisan interests. This aspiration was well captured by Solesbury (2001) of the ESRC UK Centre for Evidence Based Policy and Practice, who wrote: ‘There is something new in the air which gives both a fresh urgency and a new twist to the issues around evidence-based policy. To my mind the key factor is the shift in the nature of politics; the retreat from ideology, the dissolution of class-based party politics, the empowerment of consumers.’

The movement for evidence-based practice flows from this, promoting greater use of research evidence in the work of the professions; it probably started in medicine in the early 1990s. It has grown in influence there and spread across a number of other fields, including education. There is a growing acceptance that evidence should inform not only policy but also practice, so that what teachers do in the classroom and how schools are run is rooted in sound evidence of what works. That, certainly, is supposed to be the guiding principle. The challenge, of course, is to change the mindsets of politicians in power, who control public funding and who have their own beliefs, preferences and, sometimes, prejudices to assuage. The mantra ‘evidence-led policy’ is now firmly embedded in the language of politics, but it would be foolish to think that all policy, even now, is based on sound evidence.

Remember David Blunkett when he was Education Secretary. Not a day went by without some new announcement from the then DfES about one initiative or another, normally with a price tag attached. Rather too many of these initiatives were quietly dropped or ran into the sand because they were not delivering results, sinking into oblivion in stark contrast to the fanfare that announced them in the first place. Were they evidence-based? Some were, but rather too many were not. Sometimes the idea behind the policy was flawed. At other times the idea looked pretty good on paper but the ‘deliverology’, as Professor Barber put it, was found wanting. As some policies failed to deliver the hoped-for outputs, there was a creeping realisation in Government circles that some of the policies were ill-conceived and based on slender, flawed or no evidence at all.

Think of the wasted opportunity and money.

The Government, to its credit, wanted to do good things: to make a difference, to raise performance and accountability, to improve outputs in schools, putting education at the top of the political agenda. Big investment followed. Some of the policies did work; think of the launch of the Literacy and Numeracy strategies and their early success. But in seeking to create the impression that it was doing much to address the big education challenges, its blizzard of initiatives placed a huge bureaucratic burden on schools and authorities, wasting hundreds of millions of pounds of taxpayers’ money on policies that simply did not work, leaving the problems the money was supposed to tackle as just that: problems.

One of the Government’s flagship ‘big ticket’ initiatives was Sure Start. The ultimate goal of Sure Start local programmes (SSLPs) is to enhance the life chances of children under five years of age growing up in disadvantaged neighbourhoods. In March 2006, 800 centres were open. By 30 September 2009, 3,109 centres were open and providing services, including 1,706 providing the full range of services. The Department’s funding for Sure Start increased from £473 million in 2005-06 to £885 million in 2008-09.

It was inspired by the Early Head Start programme in the United States. So far, so good: this was a scheme already up and running, albeit abroad. Indeed it looked to be a good idea with the best of motives, providing joined-up support targeted at young disadvantaged mothers and their children, bringing welfare, developmental and educational advantages.

But as Anthony Browne, an adviser to the Mayor of London, pointed out in the Sunday Times a couple of weeks ago, Head Start in the United States cost $21,000 per child but brought benefits of only $4,700 per child. It is not that the money had no positive effect, but spending it on other schemes would probably have had more impact. So there was a warning for us in the evidence from the US.
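A rough back-of-the-envelope reading of Browne’s figures (assuming costs and benefits are measured on the same per-child basis) makes the point starkly:

    benefit-cost ratio ≈ $4,700 ÷ $21,000 ≈ 0.22

In other words, on those numbers each dollar spent returned roughly 22 cents of measured benefit, a shortfall of about $16,300 per child.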

So it should not have been such a big surprise that early Sure Start evaluations showed few positive results. Indeed, in launching the initiative, policy makers and those tasked with evaluating the scheme covered themselves by warning that it would take some time to show results. With a high level of variation in the way the initiative was delivered locally, it was always going to be very difficult to examine and compare centres’ cost-effectiveness. The National Audit Office (NAO), reporting in 2006, found that Sure Start Children’s Centres were valued by most of the families who used them, but also noted, rather significantly, that much more still needed to be done to reach and support those in the most excluded groups (i.e. the priority targets).

In layman’s terms, the reality on the ground was that relatively well-off mothers were making the best use of the centres, while poorer mothers, the key target group, were not feeling the full benefits.

Crucially, the NAO also found that the costs of centres, and of activities in centres, varied widely, and that the local authorities and centres it visited ‘needed to understand their costs better and assess whether they were using their funds cost-effectively’. The NAO has always struggled to make a link between the costs and the service quality of the various centres. So by 2007 the scheme had been running since 1998, at a cost to taxpayers of more than £21 billion, and yet the NAO was still, rather incredibly, asking authorities to work out whether their local delivery was cost-effective. And officials are supposed to cherish evidence-based policy?

What was clear was that Sure Start, despite shedloads of investment, had failed to improve the development levels of children entering primary school. A six-year study of 35,000 children by academics at Durham University found that children’s development and skills on entering primary school in 2006 were no different from those in 2000.

The NAO, in evidence submitted to the Select Committee recently (December 2009), found that despite extra funding intended to help Sure Start centres reach out to the neediest parents and children (granted after the 2006 report), in practice a ‘low level’ of such work was taking place, particularly in the most disadvantaged 30% of communities, with staff spending just 38 hours a week on outreach work. Remember, these are the people the billions of pounds of investment are supposed to be targeting as a priority. And guess what: the NAO again found, as it had in 2006, that many of the centres it surveyed could not provide basic data on their expenditure and work, making it hard for researchers to evaluate the scheme’s value for money. In short, not much has changed in the last four years.

Clearly, although there have been improvements and efficiencies, in particular in the way that some centres evaluate their own performance, there were, and still are, failings in this massive investment; failings that should have been picked up much, much earlier and acted upon. And the NAO still struggles to determine whether many centres are cost-effective. How many more millions will be spent before they get it right?

So the Government, while reciting the mantra ‘evidence-led policy’, has worrying blind spots.

The Early Years Foundation Stage curriculum also springs to mind, where the Government has simply ignored evidence from some of its own advisers as well as from the international community. It has selected evidence that backs its particular approach and chosen to ignore much evidence that challenges its policy.

On the home education front, it has little data to go on and failed to consult stakeholders properly before announcing a radical change in home education policy, now in a Commons Bill.

It also, to its shame, ignored evidence from a range of expert studies in the independent Nuffield Primary Review. Between October 2007 and February 2009 the Review published 31 interim reports examining matters as diverse as childhood, parenting, learning, teaching, testing, educational standards, the curriculum, school organisation, teacher training and the impact of national policy. The Government issued a blanket dismissal of the Nuffield evidence on the grounds that it was ‘out of date’, which astonished the academic community, as this was so patently untrue. The Nuffield Review also had a much broader remit than the Government’s own Primary Review, important though that was. Both reviews included important evidence, but the Government gave the impression that it could not afford to give credibility to the Nuffield Review because it had not been commissioned by the Government (dangerous thing, independence) and because many of its papers, though by no means all, challenged the evidence behind specific government policy. One of its key observations (presumably out of date?) was: ‘The Review finds England’s primary schools under intense pressure, but in good heart and in general doing a good job.’

The Government will have to get more from less in the coming years, as funding for public services is reduced. Programmes should be informed by evidence, piloted wherever possible on a small scale, evaluated, and then rolled out if the evaluation is positive. They should then be monitored closely, with interim assessments to allow for fine-tuning. It is also clear that scaling up successful pilots does not necessarily guarantee success, so all projects have to be very carefully and continuously monitored. And politicians should, from time to time, have the moral courage to accept failures and the blame that goes with them. It is possible to make the wrong call when evidence is presented to you, or indeed the evidence itself might be flawed; progress, after all, depends on taking risks. Anthony Browne found that even those projects that are evidence-based often are not implemented effectively, and when they are, it is usually locally rather than nationally.

Browne comes up with an intriguing idea. We should set up a national institute of policy evaluation, answerable to Parliament, which would analyse and evaluate the costs and benefits of each policy and guide government on where to spend its money to achieve the desired outcomes. He looked to the Washington State Institute for Public Policy as an example of how this might work. The institute is non-partisan and rigorously and independently establishes the effectiveness of policies, setting out the pros and cons of each in terms of the upfront cost and the savings made from reducing unemployment, crime and other social ills. Alternatively, we could reform and restructure the National Audit Office so that it is less reactive, less focused on presenting historical accounts of failure, and instead fulfils a more proactive role, similar to the Washington State body, pre-empting failure. Food for thought.

http://www.wsipp.wa.gov/
