Recently, the International Association for the Evaluation of Educational Achievement (IEA) released the results of TIMSS (Trends in International Mathematics and Science Study), covering mathematics and science in 63 countries, and PIRLS (Progress in International Reading Literacy Study), covering 48 countries, while the OECD's PISA study has just finished testing in schools for its next report, due in 2013. And Pearson, of course, has just published its Learning Curve index (in which England does rather well).
How is it possible for these surveys to come up with apparently contradictory results?
Pasi Sahlberg, a prominent Finnish educator and author of the award-winning book “Finnish Lessons” (book of the year, according to Lord Adonis), says that TIMSS and PISA are technically different studies, although both build on similar measurement methodology. The simplified distinction between the two is that whereas TIMSS tests students’ mastery of what has been taught from the curriculum, PISA assesses how students can use the knowledge and skills they were taught in new situations (ie to solve problems). Both are student assessment studies. Pearson’s “The Learning Curve”, on the other hand, combines a range of different indicators and is therefore a composite index. The problem with any study that relies on a composite index is that it is open to designer manipulation; the “Global Economic Competitiveness Index” and “The Best Country in the World” are good examples, similar to “The Learning Curve”.
Sahlberg notes that these international standardized tests are becoming global curriculum standards. Indeed, the OECD has observed that its PISA test ‘is already playing an important role in national policy making and education reforms in many countries. Schools, teachers and students are now prepared in advance to take these tests.’ So teachers are now, in effect, teaching to these tests even if they aren’t aware of it. Learning materials are adjusted to fit the style of these assessments. Life in many schools around the world is becoming split into the important academic study that these tests measure, and the other, not-so-important study that they don’t cover. One has to wonder whether this is a good thing. Quite a lot of what happens in schools and is important to children’s ‘education’ and ‘learning’ either isn’t, or can’t be, reliably tested.
A McKinsey report, also just published, tells us that education systems aren’t adequately preparing young people with the skills needed for the job market, and that education providers are far more optimistic than employers, or young people themselves, that graduates leave their institutions adequately prepared for entry-level jobs in their chosen field. We should take heed of these perceptions. At a time when educationalists worry about teaching to the test and the perceived failure of education systems to deliver rounded individuals, with the right balance between cognitive and non-cognitive skills, aren’t these tests taking systems in the opposite direction? It’s certainly worth asking the question.