The OECD report referred to in the very first line presents results that are hardly rocket science to anyone who has been involved in teaching secondary school students at any time in the last twenty years. Any teacher will tell you that it is not the presence of the machines that makes a difference to the teaching and learning in a classroom but the use to which they are put. In fact, used incorrectly, technology can have a negative impact, so the introduction of any new technology has to be accompanied by sufficient training and resources to allow it to be used to full effect; sadly this is rarely the case, in the UK at least. Smartboards, PCs, laptops, tablets: money has been thrown at introducing all of these into the classroom, but rarely has the expenditure on hardware been matched by resources for the training that would allow them to be used most effectively. Teachers are usually left to work the machines out for themselves, having been given some rudimentary and all too brief introduction by a sales manager when the kit arrives in the classroom. Of course SMT, having spent a large slice of their annual capital budget on these innovations, want and expect to see results instantly.

And then there's Andreas Schleicher – how much does he get paid, I wonder, for stating the blindingly obvious? But, as the BBC News article makes clear, his role in this is to sell more PISA tests, especially given the competition in this particular marketplace from TIMSS. Thus we come to the crux of the problem – too much emphasis on high-stakes testing as the measure of educational outcomes, as if students were just commodities on a factory production line.
As long as teachers, schools (and, thanks to the likes of PISA and TIMSS, nations) are going to be measured by their results on high-stakes summative assessments, the results of those assessments are going to be the subject of manipulation and gaming. To give an example: a Head of Department in an English secondary school is informed by SMT at the start of the year that he and his department will be deemed successful or otherwise on the basis of their GCSE results – the target for the year is 75% of students gaining grade C or above. More than 75% and it's plaudits all round, maybe even a pay rise; less than 75% and the blame game starts and the HoD's job is on the line. It doesn't take a genius to work out the result – C/D borderline students are targeted to ensure that as many Ds as possible are turned into Cs. A student expected to gain a grade A is effectively ignored: they are safely over the threshold anyway and so will not affect the all-important C-or-above percentage – no matter that with a little extra help they might have gained an A*, or simply had a more fulfilling experience immersed in their subject. The student expected to gain an E or below? Well, they are never going to improve by two grades, so again there is no reason to invest valuable time or resources in them; they will not affect that all-important 75% either, so they are left to drift. Bored and alone, they are quite likely to vote with their feet before the end of the school year – much to the relief of the subject teacher, as their boredom had turned into disruptive behaviour, and dealing with it "wasted" valuable time that needed to be spent with the target group.
Internet-enabled exams are not the "simple solution" suggested, but that is not to say they may not have a place in a future assessment scheme. The first thing that always springs to my mind when technology is trumpeted as the new panacea is: what happens when it breaks? And break it always does, usually at the most inconvenient time. Internet connections fail, electricity supplies are interrupted, software is corrupted or just plain doesn't do what it is supposed to. Then what happens?
Technology does have a role to play in a new assessment regime, but a supporting role, not a central one. The only way to remove the tyranny of high-stakes summative assessment is to replace it with assessment that takes place over a longer period and is more formative in its approach – in other words, continuous assessment. The kind of questions envisioned by the authors could then form part of the assessment process alongside more traditional methods of assessing learning. Assessment could then be where it belongs: at the heart of teaching and learning, wherever that takes place (which may or may not be in a traditional classroom, or even a school).
We've been here before, or close by anyway, in the late 1980s with the introduction of GCSE examinations and the National Curriculum. The original concept of the National Curriculum was that students would provide evidence, throughout their school careers, that they had mastered certain skills and knowledge defined by the government as appropriate for a particular level. The problem with this, as I remember only too well from the start of my teaching career, was record keeping and recording the evidence – one area where the technology available today could certainly make everyone's life easier. As for GCSEs, most, if not all, of them had coursework elements in which students were given open-ended tasks and encouraged to think for themselves. Over the years these were gradually killed off, partly because of gaming of the system by unscrupulous (or over-pressurised) teachers and students, but also by a succession of education ministers who simply didn't understand education and wanted a return to the O Levels and A Levels they themselves had taken, because that was what they understood and that was what they deemed valuable.
There have been attempts by some examination boards to introduce examinations that emphasise research, critical thinking and problem-solving skills – that aim to develop those very "21st century skills… associated with the effective use of multiple forms of technology" asked for in the article. Cambridge International Examinations, for example, offers an A Level in Global Perspectives and Research and an AS Level General Paper, both of which assess the 21st century skills referred to in the article. The problem? Some universities specifically exclude such exams from counting towards their entry requirements (including, I am ashamed to say, Newcastle University), meaning that students are reluctant to invest the time and effort required to take them.
Schools' curricula are driven by external examinations; examination boards are driven by the universities and, increasingly, by the demands of commerce and industry. So until universities and employers themselves start to value "21st century skills" by encouraging applicants for degree courses and jobs alike to take these new subjects and examinations, I'm afraid we will just have to make do with what we have – even though most right-minded people realise that the current system is obsolescent and becoming ever more unfair and unfit for purpose.