A ‘Ratner moment’ opens up a debate
Cambridge Assessment has launched an internet debate on exam standards.
Tim Oates of CA started the ball rolling by saying that investigating possible grade inflation, and publicly acknowledging that there may have been subtle drift and that a re-orientation of standards might be required, may sound like a ‘Ratner moment’ for awarding bodies (a reference to the jeweller who sent the value of his shops plunging by admitting that what they sold was “crap”). But, Oates acknowledged, “constantly enhancing the ‘accessibility’ of questions, the transparency of marking schemes and the precision of guidance can ease up the numbers gaining the highest grades.” More an enlightened moment than a Ratner one, I would suggest. It has certainly catalysed the debate with no collateral damage done to his board.
With the access agenda firmly established, A level participation increasing massively, and grade attainment creeping upwards (7% gained three A grades at A level in the mid-90s, against 17% last year), there are, it seems, legitimate public concerns over perceived ‘grade inflation’ and a ‘debasement of the currency’.
Oates said it would be profoundly dysfunctional for awarding bodies to be discouraged from looking precisely and critically at the techniques and approaches they have been, and are, using. It is precisely this kind of self-criticism, he noted, which has enhanced airline safety so significantly and tangibly.
Since the government has become heavily reliant on the nation’s exam results as a means of measuring the success of its policies, it has not only a clear incentive but, equally importantly, the capability to influence the exam system – which it clearly does – and this goes to the heart of the matter. It is clear that many stakeholders lack confidence in the system, and with exam boards now asking themselves questions, such a debate is long overdue.
According to Professor Roger Murphy, from Nottingham University, exam grades are “approximations” that do not stand comparison over time or between subjects. Educational assessment is not, he claimed, an exact science, and he criticized exam boards for implying otherwise, adding that those who try to turn it into such a thing generally start to distort the systems of assessment away from the complex curriculum areas which GCSE and A-level examinations seek to assess. Because the curriculum to which examinations relate is changing fairly constantly, he added, comparisons of examination grades across years can be very misleading. And because grades can only be defined in relation to specific subject-based syllabuses, they lose accuracy when compared between examinations in very different subjects.
So if Professor Murphy is correct, it is fair to conclude that Government claims that A level results are improving cannot be substantiated with evidence, since they do not compare like with like. By the same token, it is equally difficult to find clear evidence that exams are getting easier.
Oates was keen to define what we are talking about when we refer to standards. We need in the first place to be clear about the differences between standards of demand, standards of attainment, and content standards. Standards of demand – the things which an assessment requires of the people who take it.
Content standards – not quite the same as standards of demand; these are instead associated with the value or relevance of the things which the examination includes, or the ‘domains’ which it is assessing. Content standards decline when an examination becomes old-fashioned, redundant or irrelevant.
Standards of attainment – the outcomes (results) which students attain when they take an examination. If successive assessments are all at the same level of demand and the students know more or are better prepared, for example, then they attain more. Thus, the standards of attainment – overall results – improve.
The jury is still out on whether or not the new regulator Ofqual can help restore trust and confidence in the system. Many, including Cambridge Assessment, harbour real doubts. It would seem sensible for responsibility for standards to rest on meaningful working relationships between schools, higher education, employers and awarding bodies, rather than, as now, being mediated through Government and its agencies, who have a vested interest in ensuring that year-on-year exam results are perceived to improve. It’s a no-brainer.