Education is about developing an investigative mind – Professor Scraton

Lessons from the Hillsborough Tragedy

Professor Phil Scraton’s most important message at the recent SSAT National Conference was: “We have to see education as investigative. It is not just a curriculum that we receive and impart. It’s more about how we engage with the issues of our time.”

The very act of education is questioning, he said. Education is not just the top-down imparting of knowledge. The tightening of the curriculum means that, in effect, students have to be de-schooled at university, and knowledge has to be deinstitutionalised. Knowledge is currently passed down, not upwards, but we need to create an alternative view from below. Students need to be taught to think for themselves and to construct their own version of events, thinking outside the box. Too frequently they come to him asking what they need to know and what they need to write. So we all have a part to play in developing an inquiring mind.

His experience of uncovering the false narrative surrounding the Hillsborough Stadium tragedy, systematically spun by the police, lawyers and authorities and supported by the media, shows how important an investigative mind is, and how important it is that students are taught what good research looks like and how to navigate evidence. It is not just the authorities who should write history; pay attention to dissenting accounts and alternative views of events. The police dishonestly sought to blame allegedly drunk Liverpool fans for the stadium crush when it was, in fact, their own operational mistakes, as well as those of the people running the stadium, along with the poor response of the emergency services, that contributed to the disaster. In 2016, new inquests into the disaster found that the fans were unlawfully rather than accidentally killed, which had been the initial verdict. The FA Cup semi-final between Liverpool and Nottingham Forest, held at Sheffield Wednesday’s stadium, was stopped after six minutes following a crush on the terraces. At the original inquests in 1991, the deaths were ruled accidental, but those verdicts were quashed following the 2012 Hillsborough Independent Panel (HIP) report and new hearings were then ordered.

In Scraton’s book ‘Hillsborough: The Truth’ he revealed that South Yorkshire Police, together with their solicitors, had systematically reviewed and doctored individual police officers’ statements in order to give a false account of the disaster, exonerate the police and cover up their failures. Statements were identical, even down to the same spelling mistakes. There was an almost total corruption of the evidence, on a biblical scale, carried out over many years.

A jury ultimately found that all 96 victims had been unlawfully killed, through the 25 findings delivered against the authorities, particularly the police, leading to the exoneration of the fans. Alcohol, it transpired, played no part in the tragedy. Scraton reflected on C. Wright Mills’s observation that neither the life of an individual nor the history of a society can be understood without understanding both: our personal situation is linked to the forces of history and the society we live in. There are alternative accounts of big events, not just the authorities’ accounts, and we should pay more attention to researching these in order to get closer to the truth. Knowledge and truth are not simply dispensed top down.

Scraton’s triumph is that, through single-minded resilience and despite numerous setbacks over many years, he not only helped uncover the truth and right a fundamental injustice, but gave a voice to the victims and their relatives, empowering them and rewriting the truth from the bottom up. A great sadness is that some relatives of the victims did not live long enough to see the results of his efforts.

He also made a compelling argument that a vital outcome of education is to develop an inquiring mind in pursuit of the truth. His presentation at the SSAT Annual Conference received a richly deserved standing ovation.


Critical Thinking – Can it be taught?

Not without subject knowledge 

An OECD report (see link below) says that one good example of a compound skill that relies heavily on both cognitive and personality components is critical thinking. It represents an ability to reflect on information, interpret it in a new context and find solutions to novel problems based on existing knowledge. It encompasses cognitive capacities to use the rules of logic and cost-benefit analysis, think strategically, and apply rules to new situations to solve problems. However, critical thinking also incorporates aspects of what it labels the Big Five dimension of openness to experience, such as independence (autonomy) and unconventionality, which represent the driving factors behind the use of cognitive skills for purposes of critical inquiry.

It continues ‘There is a consensus that critical thinking is an increasingly important skill that should be cultivated in formal education. The ability to act independently and reflect critically upon a given reality is especially important in the fast-changing environment we live in. The role of educational systems is thus increasingly seen as one helping children become lifelong learners, individuals who are autonomous and adaptable, able to critically reflect and understand the evolving reality. A critical stance is also seen as an increasingly relevant skill in a world with more and more misinformation, the unexamined acceptance of which can lead to dangerous consequences for both society and individuals.’

Critical thinking is reckoned by some to include the component skills of analysing arguments, making inferences using inductive or deductive reasoning, judging or evaluating, and making decisions or solving problems.

So critical thinking is good. But to think critically you need a sound knowledge base. The more we know, the more we can think, and think critically. And the more we know, the more we can reflect on what we know, and therefore make the connections and linkages that are a prerequisite for critical thinking.

Uncritical thinking, on the other hand, looks a bit like rote learning and the simple regurgitation of facts. This is the start of an ongoing debate about whether critical thinking can be taught as a standalone subject. Is critical thinking a generic skill to be taught? The short answer is that you need sufficient knowledge of a particular domain before you can think critically about it, so it is important to build your knowledge across the curriculum and in specific domains first. You should be encouraged to think critically in every subject you are studying. Your critical thinking is only as good as your mastery of the subject.

The American education historian Diane Ravitch argued that “we have ignored what matters most. We have neglected to teach them (i.e. students) that one cannot think critically without quite a lot of knowledge to think about. Thinking critically involves comparing and contrasting and synthesizing what one has learned. And a great deal of knowledge is necessary before one can begin to reflect on its meaning and look for alternative explanations.”

According to Daniel Willingham, decades of cognitive research suggest that critical thinking can’t really be taught. The problem is that people who have sought to teach critical thinking have assumed that it is a skill, like riding a bike: once you learn it, you can apply it to any situation. Unfortunately, thinking is not that sort of skill, he says. The processes of thinking are intertwined with the content of thought (in other words, domain knowledge). This helps to explain why students are able to think critically in one subject area but not in another. The more domain knowledge they have, the more critically they will be able to think about that particular topic or idea. Critical thinking is domain-dependent and contextual. It is not as transferable as we have been led to believe, he claims.

As Willingham says, ‘Critical thinking is not a set of skills that can be deployed at any time, in any context. It is a type of thought that even 3-year-olds can engage in – and even trained scientists can fail in.’

But research from Pearson says that while background knowledge is absolutely necessary, it is not a sufficient condition for enabling critical thought within a given subject. Its literature review found that ‘Critical thinking involves both cognitive skills and dispositions. These dispositions, which can be seen as attitudes or habits of mind, include open and fair-mindedness, inquisitiveness, flexibility, a propensity to seek reason, a desire to be well informed, and a respect for and willingness to entertain diverse viewpoints.’ So there are both general and domain-specific aspects of critical thinking, and critical thinking is more than recalling learned information. On this basis, Pearson says that critical thinking assessments should ‘use ill-structured problems that require students to go beyond recalling or restating learned information and also require students to manipulate the information in new or novel contexts’. It concludes that, in theory, all people can be taught to think critically, and instructors are urged to provide explicit instruction in critical thinking and to teach how to transfer it to new contexts. This seems to reinforce the OECD view that critical thinking requires ‘both cognitive and personality components’, and yes, teachers can help (see above).

Critical Thinking: A Literature Review – Research Report, Pearson, 2011

Link to Report

OECD Report


The Government is heading rapidly down a cul-de-sac in its policy to increase selection in the maintained sector. Either it will have to execute a U-turn (not unheard of – think Nicky Morgan) or it will come to a grinding halt, using its scarce resources and haemorrhaging political capital to prop up a policy that cannot possibly deliver the outcomes it wants: a significant number of new, good school places for ‘ordinary working families’ and increased social mobility.

The Grammar school model is currently and demonstrably failing to help the most disadvantaged pupils and is no engine of social mobility. Justine Greening has accepted as much, and now talks about the need for a ‘new model’ for Grammar schools, conceding the past failures of Grammars to cater for the less affluent.

Selective schools continue to be dominated by the most affluent. Over half of pupils in selective schools are in families with income above the national median, and fewer than one in ten are eligible for the Pupil Premium. Ironically, one enduring education success of this and the previous government has been the Pupil Premium, which specifically targets the most disadvantaged cohort with extra per capita funding. Grammars really haven’t played any significant part in this success story.

The government has now shifted its attention to what it calls ordinary working families. Although there is no official definition of an ordinary working family, the government describes students fitting the category as those who are not entitled to the Pupil Premium but who come from families earning “modest” or below-median incomes. The Education Policy Institute tells us that the Department for Education’s definition of the OWF group ‘occupies the centre of the income distribution of children in maintained schools’. Crucially, though, the child of an OWF currently ‘experiences attainment and progress outcomes that are above average’.

Seeking to change that model by incentivising, or compelling, Grammars to take more pupils from these ordinary working families presents a huge new practical challenge. How do you hold schools to account? Do you introduce a quota system? Do you dump the eleven-plus in favour of another test? Indeed, can you design a new tutor-proof test (unlikely)? Or do you lower the pass mark for young people whose families fall below the median income threshold? The Government risks being caught between a rock and a hard place here, alienating both the education establishment and grammar schools.

The three bodies that know most about social mobility and its drivers are the Social Mobility Commission, the Sutton Trust and Teach First. None of these organisations, though, believes that social mobility (remember, the top priority of Justine Greening as Education Secretary) will increase one iota on the back of increased selection. The Sutton Trust believes that Grammars should demonstrate how well they can support the bottom third of pupils before increased selection is rolled out across the system. Greening struggled on the BBC Radio 4 Today programme on 13 April to name a single expert or institution that supports her policy (to be fair, it is not her policy; it is Nick Timothy’s of No 10). She couldn’t, because there aren’t any. When No 10 phoned around those whom it could normally rely on to support its education announcements, on the release of its Green Paper on selection, all ducked their heads below the parapet. They had a quick squint at the evidence, saw the prospect of a car crash, and made their excuses. All these organisations are also alarmed at the shift away from targeting the most disadvantaged cohort, and narrowing the achievement gap, towards the group that used to be called those who are just about managing (JAMs), now known as ordinary working families (OWFs).

There are many, including key figures who have been broadly supportive of the government’s education reforms, who cannot fathom why the government is pursuing such a high-risk policy, one that is not evidence-based and has so little prospect of meaningful educational, or political, returns.


Daniel Kahneman is the Eugene Higgins Professor of Psychology Emeritus at Princeton University and Emeritus Professor of Public Affairs at the Woodrow Wilson School of Public and International Affairs. He was awarded the Nobel prize in Economics in 2002 for his pioneering work with fellow Israeli-born psychologist Amos Tversky on decision-making and uncertainty. Kahneman is also the author of the best-selling “Thinking, Fast and Slow” (2011). Both Kahneman and Tversky advanced the discipline of behavioural psychology immeasurably, but the world has been slow to work out how their insights might be used to improve decision-making, particularly in public policy.
Their joint research looked at how we humans make decisions, how we make choices (we are supposed to be rational) and how we rate probabilities, along with our ability to predict outcomes. Using research and extensive sampling from behavioural psychologists and economists, they found that although we quite often make the right decisions (in other words, ones that are demonstrably in our interests), it can be for the wrong reasons, and indeed we are all susceptible, in systematic ways, to making mistakes because of the way our brains, or minds, work. Our decision-making is subject to a number of biases, ‘cues’ and preconceptions of which we are mostly unaware. These biases often arise from holding onto one’s preferences and beliefs regardless of contrary information. Social pressures, individual motivations, emotions, the way we tap our short-term memories and limits on the mind’s ability to process information can all contribute to these biases.
The motivation of these psychologists was that if we know why we make errors of judgment, then we can try to do something about it, which could have a profound effect on the way we manage our daily lives and, more broadly, on how our public services are delivered. In short, we could improve decision-making and might be able to spot where human judgment goes wrong. And if we could figure this out, we might be able to close the gap between the expert and the algorithm.
Kahneman and Tversky demonstrate the ways in which human minds err systematically when forced to make judgments about uncertain situations, and we are all, of course, daily presented with uncertain situations.
In such an uncertain world we understandably turn to ‘experts’. But, it transpires, they are also subject to big errors of judgment.

Looking at the medical profession, Professor Paul J. Hoffman, in research as far back as 1960 (‘The Paramorphic Representation of Clinical Judgment’), looked at the way medical experts, in this case radiologists, diagnosed whether patients had stomach cancer from X-rays. In some walks of human life there is not enough data to build algorithms that might replace the human judge, but medicine is not necessarily one of them. Hoffman wanted to find out how radiologists reached their judgments, and set out to create a model of what these experts were doing when they formed them. He identified the various inputs that the experts used to make their decisions: the radiologists said there were seven major signs they looked for to identify whether a stomach ulcer was cancerous, for example its size, the shape of its borders and the depth of the crater. A simple algorithm was created with the seven factors equally weighted. The researchers then asked the doctors to judge the probability of cancer on a seven-point scale from ‘definitely malignant’ to ‘definitely benign’. Unbeknownst to the doctors, the 92 X-rays of different ulcers were presented in random order, with each X-ray presented twice.

The results were, in a certain sense, terrifying.

Although the doctors thought the processes they followed to make their judgments were complex and, of course, informed by experience, this simple model captured them well. Their diagnoses were in fact all over the shop. When presented with duplicates of the same ulcer, every doctor contradicted himself and rendered more than one diagnosis. The doctors apparently could not even agree with themselves. A similar experiment with clinical psychologists and psychiatrists, asking them to predict whether it was safe to release a patient from a psychiatric hospital, found that those with the least training, who had just graduated, were just as accurate as the fully trained, experienced practitioners.
The lesson drawn from the X-ray test was that a simple algorithm had outperformed not merely the group of doctors but even the best individual doctor. So you could beat the doctor by replacing him with an equation created by people who knew nothing about medicine and had simply asked a few questions of doctors. (Remember, this was 1960!)
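To make the shape of Hoffman’s approach concrete, here is a minimal sketch of an equal-weights model of the kind described above. The cue names, scores and the simulated ‘doctor’ are hypothetical illustrations rather than the original data; the point is simply that a deterministic equal-weights equation never contradicts itself on duplicate X-rays, while a noisy human judge often does.

```python
# A minimal sketch of an equal-weights "paramorphic" model in the spirit of
# Hoffman's 1960 study. The seven cue names and all numbers are hypothetical.
import random

CUES = ["size", "border_shape", "crater_depth", "cue_4", "cue_5", "cue_6", "cue_7"]

def model_rating(xray):
    """Equally weighted sum of the seven cues, mapped to a 1-7 scale
    (1 = definitely benign, 7 = definitely malignant)."""
    score = sum(xray[c] for c in CUES) / len(CUES)   # each cue scored 0..1
    return round(1 + 6 * score)

def doctor_rating(xray, noise=1.5):
    """Simulated clinician: same cues, but the judgment varies from viewing to viewing."""
    score = sum(xray[c] for c in CUES) / len(CUES)
    return min(7, max(1, round(1 + 6 * score + random.gauss(0, noise))))

random.seed(0)
xrays = [{c: random.random() for c in CUES} for _ in range(92)]

# Present every X-ray twice, as in the study, and count self-contradictions.
model_flips = sum(model_rating(x) != model_rating(x) for x in xrays)     # always 0
doctor_flips = sum(doctor_rating(x) != doctor_rating(x) for x in xrays)  # usually > 0
print(f"model contradicts itself on {model_flips} duplicates, "
      f"simulated doctor on {doctor_flips}")
```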

There is now a great deal of research telling us how often we make misjudgments even when given good information, on the effectiveness of algorithms (man versus man-made model), and on the growing actual and potential impact of Artificial Intelligence (which is rapidly rising up the political agenda). Yet we seem to have been remarkably slow at putting this knowledge to good use, particularly in the field of education and learning. Hopefully this will change soon.

It is pretty clear that psychological issues are relevant to policy formulation and implementation, and to the design of ‘choice architecture’. You cannot assume that all individuals, acting for themselves or as economic agents, are completely rational. Most of the time, as Kahneman points out, we can trust intuition, and indeed we do. He draws the distinction between fast thinking and slow thinking; our lives are mostly run on fast thinking, which normally serves us very well. But there are situations where people would do better by slowing down, and where they need more than a little help. And expert judgment can be fatally wrong. Don’t just think of medicine here; think of the financial crash of 2007/8 and other sectors. One might also look at a few flawed experiments in education policy, as education ministers are as subject to biases (and cherry-picking evidence) as the next person.

Kahneman says: “We haven’t yet found the right model to look at decision-making under fear, how people react when the world feels dangerous and uncertain.” So the work is ongoing, but there is infinite scope for making better use of man-made models and exploiting Artificial Intelligence within a secure regulatory framework.

See also: The Undoing Project – A Friendship that Changed the World, by Michael Lewis, Allen Lane, 2017 (which describes the context of behavioural psychology research and the relationship between Kahneman and Tversky).


The work of Professors John Hattie, Eric Hanushek and Robert Coe, among others, tells us that good teachers aren’t just born: they can be made, with good training and support and an openness to new evidence of what works.
There are many myths about what effective classroom interventions look like, but more robust research is challenging and correcting these myths. We know that high-quality teaching has a dramatic and positive effect on student progress, whereas poor teaching really does close off life opportunities for some, indeed far too many.
In the UK teachers have been helped in this particularly by the work of John Hattie (Visible Learning), who has designed an ‘effects table’ that ranks the most effective interventions, and by the Education Endowment Foundation, which has reviewed the research and designed a user-friendly toolkit that guides teachers through robust, evidence-based interventions that improve outcomes. If you show teachers what works and they then apply it in their practice, the chances are that it will improve their students’ outcomes. (It might surprise you, but reducing class size is one of the least effective interventions, whereas getting good feedback from students and acting on it is one of the best, according at least to Hattie’s work.) What works, in terms of effective teaching, seems to be high-quality instruction using evidence of what works, and so-called “pedagogical content knowledge”: a blend of subject knowledge and teaching craft.
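Rankings like Hattie’s effects table and the EEF toolkit rest on standardised effect sizes. The sketch below shows, with made-up scores, roughly how such an effect size (Cohen’s d) is computed for a single intervention trial; the numbers are invented for illustration only.

```python
# A minimal sketch of the standardised effect size (Cohen's d) behind rankings
# such as Hattie's effects table and the EEF toolkit. All scores are made up.
from statistics import mean, stdev

intervention = [68, 72, 75, 70, 74, 77, 71, 73]   # hypothetical test scores
control      = [65, 69, 70, 66, 68, 71, 67, 70]

def cohens_d(treated, comparison):
    """Difference in means divided by the pooled standard deviation."""
    n1, n2 = len(treated), len(comparison)
    pooled_var = ((n1 - 1) * stdev(treated) ** 2 +
                  (n2 - 1) * stdev(comparison) ** 2) / (n1 + n2 - 2)
    return (mean(treated) - mean(comparison)) / pooled_var ** 0.5

print(f"effect size d = {cohens_d(intervention, control):.2f}")
# Hattie treats d of roughly 0.4 as the "hinge point" for an intervention worth attention.
```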
The Economist recently quoted Charles Chew, one of Singapore’s “principal master teachers”, an elite group that guides the island’s schools: “I don’t teach physics; I teach my pupils how to learn physics.”
Teachers like Mr Chew, the Economist pointed out, ask probing questions of all students. They assign short writing tasks that get children thinking and allow teachers to check for progress. Their classes are planned—with a clear sense of the goal and how to reach it—and teacher-led but interactive. They anticipate errors, such as the tendency to mix up remainders and decimals. They space out and vary ways in which children practise things, cognitive science having shown that this aids long-term retention.
These techniques work, according to the Economist (11 June). In a report published in February the OECD found a link between the use of such “cognitive activation” strategies and high test scores among its club of mostly rich countries. A recent study by David Reynolds compared maths teaching in Nanjing and Southampton, where he works. It found that in China “whole-class interaction” was used 72% of the time, compared with only 24% in England. Certainly Nick Gibb, the schools minister, thinks that our teachers and schools have much to learn from the East (Singapore, South Korea, Shanghai) and has focused in particular on the way mathematics is taught there.
So there is plenty of high-quality evidence out there about what works and what can really help improve the quality of teaching. The problem is that too many schools don’t take this seriously enough. How to identify the best and most robust research, to manage it, to ensure that it is disseminated to the right people who can use it, and to ensure that it is applied in the classroom, is still a vast challenge, it seems. Broadly, awareness of research and using it to inform practice comes under the umbrella of what’s called ‘knowledge management’. And knowledge management in our schools system is simply not good enough. Depressingly, a recent survey of middle leaders responsible for teaching and learning in schools found that just a third thought education research important. We still have a long way to go, it would seem.



I was at a roundtable discussion this week hosted by CMRE in which Professor Simon Burgess introduced discussions on what we know about teacher effectiveness and the impact that teachers, both good and bad, have on student performance and attainment. The discussions were under the Chatham House Rule, but the selected key points listed below, made by Burgess, draw on material already published by him and other researchers (see Notes below).

In short, his research shows that teachers matter a great deal: having a one-standard-deviation better teacher raises the test score by (at least) 25% of a standard deviation. Having a good teacher, as opposed to a mediocre or poor one, makes a big difference.

Teacher effectiveness matters enormously. A pupil being taught for eight GCSEs by all effective teachers (those at the 75th percentile of the teacher effectiveness distribution) will achieve an overall GCSE score four grades higher than the same pupil being taught for eight GCSEs by all ineffective teachers (at the 25th percentile). A range of studies have consistently shown a very high impact of teacher effectiveness on pupil progress. While there are also papers contesting the validity of the assumptions required to identify true effectiveness, there is other research arguing that the results are secure.
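As a back-of-the-envelope reading of that claim, the four-grade gap across eight GCSEs implies roughly half a grade per subject from swapping a 25th-percentile teacher for a 75th-percentile one. The sketch below simply spells out that arithmetic; the per-subject figure is inferred from the text rather than taken from the underlying papers.

```python
# Back-of-the-envelope reading of the eight-GCSE claim above. The per-subject
# figure is inferred from the text (4 grades / 8 subjects), not from the papers.
subjects = 8
total_grade_gap = 4            # 75th- vs 25th-percentile teachers, across all eight GCSEs
per_subject_gap = total_grade_gap / subjects
print(f"~{per_subject_gap:.1f} of a grade per GCSE from the more effective teacher")
```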

Measures of teacher effectiveness are noisy. Numerous factors affect exam scores, from good or bad luck on exam day, through the pupil’s ability, motivation and background, to a school’s resources. Research shows that it is possible to measure a teacher’s contribution to this, but it is an estimate with less-than-perfect precision. There is simple sampling variation, plus non-persistent variation arising from various classroom factors. For example, a teacher’s score in any one year may be affected by being assigned a particularly difficult (or motivated) class, in a way not accounted for in the analysis.
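A toy simulation makes the point: give a teacher a fixed ‘true’ effect and add year-to-year classroom and sampling noise, and the single-year estimates bounce around that true value. All magnitudes below are invented for illustration.

```python
# Illustrative simulation of why one-year teacher-effectiveness estimates are noisy.
# The true effect and noise magnitudes are invented for the example.
import random

random.seed(1)
true_effect = 0.25            # teacher's real value-added, in standard deviations
class_noise = 0.20            # non-persistent classroom/cohort variation
sampling_noise = 0.10         # sampling variation from a finite class

yearly_estimates = [
    true_effect + random.gauss(0, class_noise) + random.gauss(0, sampling_noise)
    for _ in range(5)
]
print("single-year estimates:", [round(e, 2) for e in yearly_estimates])
print("five-year average:   ", round(sum(yearly_estimates) / 5, 2))
# Averaging over several years pulls the estimate back towards the true effect.
```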

Experience doesn’t help beyond three years. Research shows that, on average, teachers do become more effective in their first two or three years. Thereafter there is no evidence of systematic gains as their experience increases: a teacher is as effective after three years as s/he will be after 13 or 30 years.

Good teachers are hard to spot ex ante. One of the more surprising findings to come out of the research on teacher effectiveness over the last decade has been that the characteristics one might have thought would be associated with better teachers simply aren’t. Experience, a Masters degree and a good academic record in general are not correlated with greater effectiveness in the classroom. These results have been found in both the US and England. We need to be careful about what is being claimed here. The research shows that easily observable, objective characteristics such as those noted above (the variables typically available to researchers) are no use in predicting teacher effectiveness. This is not to say that no one can identify an effective teacher, nor that more detailed subjective data (for example, from watching a lesson) cannot be useful. No doubt many headteachers are adept at spotting teaching talent. But enough are not to mean that there are ineffective teachers working in classrooms (even in schools rated outstanding).

Very few teachers (i.e. bad and mediocre teachers) are dismissed from the profession in England. (Dylan Wiliam has suggested that there are few long-term benefits in seeking out poor teachers in order to dismiss them; much better to use your time and resources to identify poor teachers early on and give them the crucial support they need, from their better peers, to improve their teaching quality.)



Aaronson, D, Barrow, L and Sander, W (2007). “Teachers and Student Achievement in the Chicago Public High Schools.” Journal of Labor Economics, Vol. 25, pp. 95–136.

Chetty, R, Friedman, J, and Rockoff, J (2011). The long-term impacts of teachers: teacher value-added and student outcomes in adulthood. NBER WP 17699.

Hanushek, E (2011). The economic value of higher teacher quality. Economics of Education Review. Vol. 30 pp. 466–470

Hanushek, E A, and Rivkin, S G (2010). “Generalizations about Using Value-Added Measures of Teacher Quality.” American Economic Review Vol. 100, pp. 267–271.

Kane, T J, and Staiger, D O (2008). “Estimating teacher impacts on student achievement: An experimental evaluation.” NBER Working Paper 14607, NBER Cambridge

Rivkin, S G, Hanushek, E A, and Kain, J F (2005). “Teachers, schools, and academic achievement” Econometrica, Vol. 73, pp. 417–458

Rockoff, J E (2004). “The impact of individual teachers on student achievement: Evidence from panel data.” American Economic Review. Vol. 94, pp. 247–252.

Rothstein, J (2009). “Student sorting and bias in value-added estimation: Selection on observables and unobservables.” Education Finance and Policy, Vol. 4, pp. 537–571.

Rothstein, J (2010). “Teacher Quality in Educational Production: Tracking, Decay, and Student Achievement.” Quarterly Journal of Economics, Vol. 125, pp. 175–214.

Slater, H, Davies, N and Burgess, S (2011). “Do teachers matter? Measuring the variation in teacher effectiveness in England.” Oxford Bulletin of Economics and Statistics.

Staiger, D and Rockoff, J (2010). Searching for Effective Teachers with Imperfect Information. Journal of Economic Perspectives vol. 24 no. 3, pp. 97–118.



Moves to harness insights on behaviour to shape policy and its delivery

Persuading ministers, and indeed departments, to change policy and do something differently is always a challenge. But behavioural scientists are beginning to understand which levers they need to pull to sell new ideas and insights that might lead to substantial changes in ministers’ and departments’ thinking and ways of doing things, while making savings.

The Behavioural Insights Team, or ‘Nudge Unit’, was set up by the Coalition government in 2010, backed by David Cameron and Nick Clegg. Its mission was to design policy and delivery mechanisms informed by the latest science on behaviour. Many of the assumptions made by government about how and why people make decisions are simply wrong. The Nudge Unit set out to transform the approach of at least two major government departments, to inject a new understanding of human behaviour across government, and to deliver a ten-fold return on its cost, all within two years. If it failed in these, it would be wound up. In fact, it succeeded. The Insights Team not only flourishes (within the Cabinet Office) but is now even advising foreign governments on how to implement behaviourally informed policies.

Its main objectives now are:

making public services more cost-effective and easier for citizens to use;

improving outcomes by introducing a more realistic model of human behaviour to policy; and

wherever possible, enabling people to make ‘better choices for themselves’.

Richard Thaler and Cass Sunstein’s 2008 book ‘Nudge’, originally called ‘Libertarian Paternalism’ (a clunkier and altogether less attractive title), was such a success because it made the world of behavioural science, which had crossed over into economic thinking as well, more accessible and of practical worth to policy makers.

Through small, incremental adjustments in the nuts and bolts of government, informed by insights into human behaviour, you can get people to respond more readily and ‘nudge’ them to make choices that protect their interests but also improve the returns for the government and its agents, saving time and money and adding value.

Whether it is encouraging people to pay the tax or parking fines they owe, to insulate their houses, to save energy, to start contributing to personal pension schemes that will benefit them, to fill in applications to college, to re-enter the labour market or to seek childcare support, there are myriad ways in which simple, nuanced adjustments can make a huge difference to take-up. Even if you only increase take-up by, say, 5%, the financial savings can be huge.

Both political and economic theory posit that individuals make rational choices that benefit them. But the reality is that people frequently don’t make sensible choices, for a number of reasons: they don’t have enough time, they have too many choices, there is too much hassle or friction involved, the form they have to fill in is confusing, and so on. Science has found that if you make something easy and attractive for people, and show that others are doing it too, then you stand a greater chance of success. Computer-generated letters that are de-personalised often don’t work, yet they are churned out by government and business regardless. Sanctions and threats often don’t work either, at least not in the way you want them to, and the same goes for financial incentives. Social pressure, because we are ‘social’ animals after all, is often much more effective. Personalising messages and telling people what others are doing is more likely to work.

Behavioural scientists help us to understand this esoteric area.

At the most basic level, if you personalise a letter and make it easy to understand, and adopt the same approach to the forms you send out, removing the hassle that goes with so many of them, you will almost certainly secure better returns. Better and more effective communication is part of the equation, of course. But there is much more to it than that.

The BIT pilots small projects across government, using randomised controlled trials to test the outcomes. Simple ideas, like enrolling everyone in pension schemes by default while giving them the choice to opt out, mean that the vast majority will not opt out, because of the hassle and friction involved: you have to make a positive decision to leave. These initiatives not only pay for themselves but generate significant returns on top.
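As a rough illustration of how such a trial is read, the sketch below compares take-up between a control arm (opt-in) and a treatment arm (opt-out default) and puts a rough confidence interval on the difference. All of the counts are invented; this is a minimal sketch of the analysis, not BIT’s actual methodology.

```python
# A minimal sketch of evaluating a "default enrolment" nudge as a two-arm
# randomised controlled trial. All counts are invented for illustration.
from math import sqrt

# arm -> (people enrolled at follow-up, people in the arm)
results = {
    "opt_in_default":  (1200, 10000),   # control: must actively join
    "opt_out_default": (8800, 10000),   # treatment: enrolled unless they leave
}

p_control = results["opt_in_default"][0] / results["opt_in_default"][1]
p_treat   = results["opt_out_default"][0] / results["opt_out_default"][1]
diff = p_treat - p_control

# Standard error of the difference in proportions, for a rough confidence interval.
se = sqrt(p_control * (1 - p_control) / results["opt_in_default"][1] +
          p_treat * (1 - p_treat) / results["opt_out_default"][1])

print(f"take-up: {p_control:.0%} vs {p_treat:.0%}, difference {diff:.0%} "
      f"(95% CI ±{1.96 * se:.1%})")
```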

The BIT has produced a mnemonic checklist, EAST, to help policy makers influence behaviour: make it Easy, make it Attractive, harness Social influence, and make it Timely, choosing a moment when people are most likely to be receptive.

Helping people to make the right decisions by re-framing policies and processes to take into account how they actually behave and make decisions is eminently sensible. David Halpern, who has done so much to persuade ministers to invest in the idea that behavioural insights really can deliver more efficiencies and savings, puts it thus:

‘We seek to introduce a more realistic, empirically grounded model of what influences human behaviour and decision making.’ Halpern sees behavioural insight approaches as ‘a tool or lens through which to view all policy interventions [that] can be used to subtly refashion conventional policy tools’.

But how might this approach be used in education?

We know that many early childhood interventions can be effective and improve young children’s life opportunities: what about nudges to ensure parents are more actively engaged in these, and earlier? How about a nudge to encourage those with mental health issues to seek support, or a nudge to encourage the most disadvantaged students and their parents to apply to universities or high-quality apprenticeships? Indeed, there is surely also scope for nudging young pupils to make appropriate choices of routes into FE, HE, training and employment (some useful work has already been done at Jobcentres by BIT), and to study the qualifications that will improve their life opportunities. Or, perhaps, targeting those in the NEET category to secure engagement in education, training or a job. These and other areas could surely be susceptible to nudges that will benefit the individuals concerned, save costs, reduce waste and benefit the economy. What’s not to like?


Well, there are some worries that the government will nudge citizens to do things that are not necessarily in their interests, but safeguards are possible here and indeed so far appear to be operating reasonably effectively.

At present most of the insights have produced incremental changes but it is probably only a matter of time before an insight delivers revolutionary change in the policy arena. Arguably recent pension reforms are revolutionary.

BIT has harnessed evidence and delivered results that have cost little to implement and produced substantial, measurable returns. It has also shown a welcome willingness to evaluate what it does rigorously and ethically (through RCTs and outside auditing), and is prepared to admit mistakes with equanimity, to adapt and to learn. Above all, it has managed to shift an initially sceptical establishment to a position where ministers and civil servants are now prepared to engage with the BIT in the early design of policy initiatives.

Watch this space

See Inside the Nudge Unit: How Small Changes Can Make a Big Difference, by David Halpern, WH Allen, 2015.