A copy of a comment regarding the difference between customisation and adaptation, and the importance of the latter to learning content that encapsulates pedagogy.
It is a central argument of this blog that the attempt to apply technology to the improvement of education has been held back by the lack of education-specific software. Such software will generally encapsulate pedagogy. An objection to this approach was recently raised by Peter Twining in a useful discussion on his blog, EdFutures. It is a little difficult to link directly to the part of the conversation where this occurs – the best way is probably to follow the link to the discussion page and then to search for “Re Technology Enhanced Learning”, which is the title of the thread in which this discussion occurs.
To paraphrase the general objection to software that encapsulates pedagogy: such software might be seen as a way of scripting lessons that disempowers the teacher. At the top level, I would respond that many teachers have a pretty shaky understanding of pedagogy, so the ability to put pedagogically proven tools into their hands is a key way in which we will empower (not disempower) teachers (see my Education’s coming revolution). As for the nature of those tools, I certainly accept that the way in which software is used in the classroom needs to be flexible, allowing the teacher (the professional on the spot) to apply the software in the right way. This provides the background to my conversation with Peter Twining regarding the customisation or adaptation of education-specific software.
Peter’s argument is that, according to an OU project in the 1990s called SoURCE, in which he was involved, the pedagogy encapsulated in software often needed to be subverted by the teacher—and that this suggested that the encapsulation of pedagogy was something of a blind alley. I copy below my reply to Peter, followed by my conclusion.
The requirement for education technology rests, not on spurious arguments about “21st century skills”, but on a long-standing need to find a way of teaching traditional skills systematically and at scale. To succeed, education has to go through its own industrial revolution, which will introduce systematic processes, backed by effective quality controls and robust quantitative evidence of effectiveness.
At a recent event in London reported by Merlin John, Tim Oates of Cambridge Assessment suggested that the British textbooks produced in the 1970s by the School Mathematics Project (SMP) and the Nuffield Science series still represented the best resources around in their respective fields. This is a startling claim, coming as it does after 40 years in which we have seen a revolution in information technology and the expenditure of billions of pounds on technology in schools.
Two of the leading figures in the textbook publishing movement of the 1970s were successive Headmasters at my old school in Sevenoaks, a commuter town 25 miles south of London. At the time that I was at the school, in the late 1970s, the Headmaster was Alan Tammadge, a principal author of the SMP series for Maths. The previous Headmaster had been Kim (L C) Taylor, who had resigned from Sevenoaks in 1970 to become Director of the Nuffield Resources for Learning project.
Resources for Learning was also the name of a book, written in the following year, in which Taylor provided a justification for the Nuffield programme. He questioned whether the comprehensive education that was being introduced in the UK at the time was realistic. His concern was not about the phasing out of selection: the problem that caught Taylor’s eye was the fact that the new system of secondary education was to be universal. Traditional education had always been provided to a small elite based on a model of the teacher-as-craftsman. So long as we clung to that model, Taylor argued, there would not be enough sufficiently well-qualified teachers to go around.
Now that the use of the term “ICT” is coming under increasing scrutiny in the schools sector, many are making more use of the term “TEL”. But “TEL” has similar flaws to “ICT”, as was brought home to me when I attended the Online Educa Berlin conference last week.
Technology Enhanced Learning (TEL) has for some time been the academic community’s preferred term for the application of technology to the improvement of education. It has been the title of various funding streams of the European Commission (such as TeLearn and TELNET). The only slightly different “Technology Supported Learning” appears in the strap-lines of conferences such as Online Educa Berlin (which I attended last week) and Learning Technologies, to be held in London in January.
This post makes the case that the HE “TEL” community has been just as ineffective as the schools-level “ICT” community at delivering real improvements in education—and that some of the key reasons for this failure are embedded in the terminology itself.
Interoperability is critical if we are to build a market for educational technology, a market which will in turn enable the pedagogical innovations capable of transforming education. This post identifies six interoperability specifications which would take the first steps in this direction.
I will start by painting a quick picture of the overall education technology ecosystem towards which I think we should be aiming. I will then describe the six standards for data interoperability that I think will provide the foundation for the market that will be needed to deliver that ecosystem. These are standards for:
Whatever happened to Michael Gove’s “serious, intelligent conversation about how technology will transform education”?
The clue to the mystery of the missing racehorse, Silver Blaze, was provided by “the dog that did nothing in the night-time”. It was the absence of any barking as Silver Blaze was removed from the stable that aroused Sherlock Holmes’ suspicion that it was the stable manager himself who had taken the horse.
When called upon by Michael Gove to engage in “a serious, intelligent conversation about how technology will transform education”, the education technology community proved almost as unresponsive as the dog in Silver Blaze’s stable. If it woke up at all, it was only to wag its tail.
Michael Gove did not only call for a “serious, intelligent conversation” in his BETT 2012 speech; he also told people where that conversation was to happen. Naace and ALT had already set up a discussion site at www.SchoolsTech.org.uk, where they hosted the conversation over the second half of January and February 2012, with the collaboration of the DfE, which provided the stimulus questions. In July 2012, Naace and ALT published the conclusions of the conversation in a joint report, Better learning through technology (BLTT).
A copy of a response to a thoughtful New Statesman article. The article claims that Gove’s reputation is built on a myth because (1) his claim to be reintroducing rigour will turn out to be bogus; (2) he is centralising power in Whitehall and not, as he claims, putting it in the hands of parents; (3) the benefits of academies will not spread beyond a few model schools; and (4) the claim to have put an end to Labour’s white elephants (ICT and BSF) fails to recognise the continuing need, at the very least, to update the school estate.
The jury is still out on point (1). With respect to (2), it is faulty logic to argue that, because Whitehall is becoming more powerful at the expense of local authorities, parents cannot also become more powerful. But although I am a supporter of what Gove is doing, I tend to agree with the New Statesman on points (3) and (4). Below is a copy of my comment submitted on their website.
A resume of a break-out discussion at the JISC/CETIS annual conference
For me, the highlight of the 2012 JISC/CETIS annual conference was Adam Cooper’s session on “Mapping Cause and Effect”. Adam asked participants to create diagrams which traced chains of causality (both negative and positive) through to a final, desirable, pre-defined outcome.
I joined a break-out group with Colin Smythe (Chief Architect at IMS), Tore Hoel (Oslo and Akershus University, Norway), Malcolm Batchelor (JISC/CETIS), and Seyoung Kim (Carnegie Mellon University, USA). We chose to analyse what preconditions would favour or disfavour the use of analytics to improve the quality of course materials, taking course materials to be synonymous with learning content. We envisaged a scenario in which the author of a digital course might be able to track the performance of students taking the course. Having discovered from this data which parts of the course worked well and which worked less well, the author could improve the quality of the course materials.
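The kind of analytics we envisaged can be illustrated with a minimal sketch. This is not any particular product or specification — the function name, data shape, and section names below are all hypothetical — but it shows the basic idea: aggregate per-student results by course section, so an author can see which sections work well and which work less well.

```python
# Hypothetical sketch of the scenario described above: given each
# student's pass/fail result on each section of a course, compute the
# average success rate per section. All names here are illustrative.

from collections import defaultdict

def section_success_rates(results):
    """results: iterable of (student_id, section, passed) tuples.

    Returns a dict mapping each section to the fraction of attempts
    that succeeded, e.g. {'fractions': 1.0, 'algebra': 0.5}.
    """
    totals = defaultdict(int)   # attempts per section
    passes = defaultdict(int)   # successful attempts per section
    for _student, section, passed in results:
        totals[section] += 1
        if passed:
            passes[section] += 1
    return {section: passes[section] / totals[section] for section in totals}

# Example: two students, two sections of an imaginary maths course.
results = [
    ("s1", "fractions", True),
    ("s2", "fractions", True),
    ("s1", "algebra", False),
    ("s2", "algebra", True),
]
print(section_success_rates(results))
# → {'fractions': 1.0, 'algebra': 0.5}
```

In this sketch, a low success rate on a section is the author’s cue to look at whether the materials for that section need improving — which is precisely the feedback loop our break-out group had in mind.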