On his Spannerman blog on 11 January, John Spencer announced that “BETT opens as ICT is scrapped by Gove”. This was a misleading title. What Michael Gove scrapped was “the current, flawed ICT curriculum”, which he called on the industry and awarding bodies to replace. Mr Gove made it clear that “ICT will remain compulsory at all key stages”—and he himself continued to use the term “ICT” throughout the speech.
I am not reporting here that ICT has been scrapped, but arguing that as a term “ICT” ought now to be scrapped—and that the changes being initiated by Mr Gove probably will result in this happening. An article today by the Guardian’s Digital Literacy Campaign uses “ICT” five times, “IT” six times, and “computer science” five times. This shift in the use of terminology will ultimately change the way we think about what we are doing.
According to his website, Stephen Heppell “is credited with being the person who put the C into ICT”. According to Wikipedia, the first official recognition of the acronym came in the Stevenson Report of 1997—a report commissioned by Tony Blair, on whose panel of seven Professor Heppell doubtless wielded a disproportionate influence. Becta was established in the following year (also with that tell-tale “C” in its title), in effect to be the executive agency tasked with implementing Stevenson.
The term “ICT” has therefore:
- been driven by an educational agenda, and not by use in the wider technical community;
- coincided with a Labour-initiated and Becta-managed project of educational reform.
Right from the start, the rationale for the new term was confused. The Stevenson report merely noted that it “seems to us accurately to reflect the increasing role of both information and communication technologies in all aspects of society”. This misses the point. For an educationalist like Professor Heppell, the significance of the change lay not in the fact that children were now going to study modern communications technologies—but in the fact that they were going to use them. In this sense, putting the “C” into “ICT” meant moving away from a didactic or “instructionalist” style of teaching. Communications would not principally be about accessing the teacher’s expertise, but about peer-to-peer collaboration, enabling styles of learning in which the teacher played an increasingly marginal role. It was about changing the orientation of education from the vertical to the horizontal. The “C” was about Web 2.0, and on his blog Professor Heppell celebrated “community, collaboration and creativity as ‘C’ words too”.
Most progressive teachers welcomed this as a step forwards, particularly when seen against the background of Computer Based Training, the orthodox style of e-learning in the 1990s. This had generally offered learners a repetitive and uninspiring pedagogy, in which they were shown a succession of expository pages, followed by a multiple choice quiz to check that the previous factual material had been thoroughly absorbed. But although “ICT” represented a significant advance in thinking over the old “CBT”, we should not allow our thought to be trammelled by the perception that the world is made up of binary antitheses: that x must be good just because y is bad. As the master of the sound-bite himself had it, we should always be looking for the third way.
When Professor Heppell claimed that “C” stood for “community, collaboration and creativity” as well as “communication”, he unwittingly revealed part of the problem. While “communication”, “community” and “collaboration” fit together comfortably, “creativity” is an awkward fourth member of the alliterative pattern. Highly collaborative, consensual societies have nearly always been bad at creation and innovation: most great acts of creativity have been the result of individual insight and endeavour and have frequently been fiercely opposed by the so-called “wisdom of the crowd”.
The lionising of communication has also fitted into a pattern of progressive thought which has deprecated the importance of knowledge. In the late 1980s, it became fashionable to promote the importance of skills in opposition to knowledge, another false dichotomy which ignores the fact that “knowing how” to do something is just another form of knowledge, and is often dependent on a considerable amount of “knowing that”.
The current fashion (as Mr Gove remarked in his BETT speech) is to suggest that personal knowledge is no longer required as everything can be found on the internet. Not only does this create the sort of “single point of failure” which would delight writers of dystopian fiction (I write this on the day that Wikipedia has been taken down as a political protest)—it is also nonsense. As Gove argued in his March 2011 speech to the SSAT, if you were to get on a plane and ask the pilot “Tell me, do you know how to fly this plane?”, you would not be happy if he replied “No… it’s all there on Google and Wikipedia, isn’t it?”.
Not only does learning by doing need to be balanced by the acquisition of knowledge; but “doing” includes many forms of activity other than communication: individual activities like competitive gaming (the forgotten “C”), using creative tools, and undertaking research. Education needs to balance many things: creativity and knowledge, group-work and individual endeavour, divergent thinking and academic rigour. Virtue is a middle way.
An acronym which privileges one type of pedagogy over others may not be particularly helpful in this respect—but this is not sufficient reason to abandon a term which has achieved common currency. The etymology of a word or acronym is not that important: who really cares whether we talk of “ICT”, “IT” or just “technology”?
What matters about words, I suggest, is not their etymology but their granularity—the degree of precision that they allow. This is the important test that “ICT” fails because it refers to two quite separate things:
- the teaching of ICT as an end of education;
- the use of ICT as a means of education.
Technology can be used (for example in the form of interactive whiteboards or MIS) without students necessarily understanding how they work or even using these technologies themselves; while large parts of computer science are commonly taught using traditional blackboards and textbooks.
The failure to distinguish between technology as a means of education and technology as an end of education reached its high-water mark in the Rose Review, which was predicated on the claim that ICT was “an essential skill for learning and life”, comparable to literacy or numeracy, and lamented the fact that the “use of ICT is not sufficiently embedded in curriculum goals and design”. In this model, there is no significant distinction made between using ICT and teaching ICT (or more generally, between what should be taught and how it should be taught). From this perspective, the failure to use ICT successfully in schools should be blamed, not on the lack of appropriate strategic leadership from Becta, but on the weak “digital literacy” of the poor bloody infantry.
It is true that the teaching and use of ICT in schools can be synergistic; but they are also frequently antagonistic, as when children use their ICT skills to produce plagiarized work, are distracted by off-task social networking, or focus on prettifying their work rather than on its academic content.
The relationship between ICT-as-a-means and ICT-as-an-end of education needs to be carefully examined—and this cannot be done unless we have two different words for the two different things that we are talking about. By using the same “ICT” term, Becta consistently failed to deliver this kind of clear thinking.
The muddying of the waters may well have been deliberate. First, Becta’s justification for its own existence came increasingly to rely on its role in delivering “aggregated procurement”, which it claimed (falsely) was saving the taxpayer a great deal of money. ICT was driven by supply-side interventions: large amounts of hardware were pushed into schools (often by government contractors with no real educational background) with little clarity on how it was to be used. Many argued that such a justification was unnecessary: ICT was just a good-in-itself, an up-to-date, 21st-century sort of thing. The fact that ICT required no justification suited Becta very well.
Second, ICT became increasingly identified with the attempt by campaigners such as Professor Heppell, Sir Ken Robinson and Lord Puttnam, not to improve the efficiency of education but to change its goals. Teaching people “twenty-first century skills” would not so much improve their learning in other subjects as change the nature of the curriculum, moving the emphasis away from the acquisition of knowledge and from traditional concepts of academic rigour and excellence that only the best can attain, and towards self-expression, a world in which everyone’s opinion was equally valuable, and a conception of education as an intrinsically egalitarian project.
I shall be examining this perspective in more detail in future posts—but it is sufficient here to note that all these campaigners have urged not only the introduction of ICT as a means of improving learning, but have also suggested that the benefits of this approach will only become apparent when we have changed the way in which those benefits are evaluated. They tacitly admit that they cannot show that ICT is improving learning when this is measured against traditional objectives, traditionally assessed. In these circumstances, the intellectual fog created by the poorly defined term “ICT” provides useful cover for a campaign which has been at heart political and not technological.
So long as the education technology community continues to advocate wasteful processes of aggregated procurement and politicised campaigns for educational reform, it cannot expect to be taken seriously by a government which is in favour of cutting bureaucracy in its administration, and of re-introducing academic rigour into the curriculum. If it is to make the case effectively for more technology in schools, the community would make a good start by adopting language which allows for a clearer discussion of what it is trying to do, and how it is trying to do it. To this end, I suggest that it is time to scrap “ICT” as a term, both for its ambivalence and for its political baggage, and speak instead:
- of “education technology” for the use of IT to improve standards of learning;
- of “computer studies” for the teaching of IT as a part of the curriculum.
A theory advocated by Aristotle in Book 2 of his Nicomachean Ethics (http://classics.mit.edu/Aristotle/nicomachaen.2.ii.html). This perspective contrasts favourably with the more common dualist perception of good and evil, advocated by monotheistic religions and twentieth century ideologies, which encourages enthusiasts to push what were originally sensible positions to the point of excess.
Paragraph 22, page 7, http://www.educationengland.org.uk/documents/pdfs/2009-IRPC-final-report.pdf.
Ibid., paragraph 17, page 7.