It’s the technology, stupid!

The consensus is that we should not mind the technology but that we should focus instead on the learning. The consensus is wrong.

This is the transcript of a presentation I gave at the EdExec conference, held by ICT Matters in London on 6 November 2013. The ostensible argument in my talk is that “procurement matters”, which, I will admit, probably isn’t going to set your heart racing. But perhaps it should. The reason why procurement matters is that technology matters – and this is a point that much of the ICT community does not generally admit. Time and again, you hear the old saw being repeated, “never mind the technology, where’s the learning?” Most of my talk addresses this point—and in doing so, I take on (as is my wont in this blog) a lot of shibboleths. I summarise some arguments with which those of you who have read previous posts may be familiar, and I also shadow some arguments that I will develop in greater detail in future. And I return to a promise that I made in my first post to this blog in January 2012, which is to discuss in rather more depth than I have done before why Becta’s approach to procurement was so lamentable.

Introduction

You hear a lot of talk about “the wisdom of the crowd”. It underpins a lot of theories about Web 2.0 and social learning.

What people generally mean by this phrase is that the majority is right most of the time. But that belief makes one thing clear: the majority has not read the book.

wisdom of crowds

The Wisdom of Crowds, written by James Surowiecki in 2004, does not argue for consensus. Quite the opposite. It argues that crowds are very often foolish:

  • when they indulge in group-think;
  • when they herd together for defensive reasons (“no-one ever got sacked for buying IBM”);
  • when rumour “cascades” through crowds that lack basic information;
  • when innovation is required.

According to Surowiecki, crowds are only wise in specific circumstances.

The best collective decisions are the product of disagreement and contest, not consensus or compromise… The best way for a group to be smart is for each person in it to think and act as independently as possible.

It’s a paradox. As soon as the crowd thinks it’s wise, you can be sure that it isn’t.

The Wisdom of Crowds argues in favour of markets and betting shops, not network-driven consensus. I am going to talk about markets later. But first, I am going to talk about our own sense of consensus.

Slide 1

1: That the current orthodoxy is broken

Because it is my contention that we have all been far too subject to group-think, to herding and to the cascade of misinformation. It is my contention that almost everything we believe in the ICT community is wrong.

The misapprehension starts with the very words we use.

The report, Shut down or Restart?, published by the Royal Society in January 2012, recommends that we stop talking about “ICT” altogether on the grounds that the word has five different meanings. Most importantly…

Slide 2

The difference between using ICT and teaching ICT

it refers both to the use of ICT (to improve teaching and learning across the board) and to the teaching of ICT as a subject.

We can use clickers and whiteboards while knowing very little about how they work; and we can learn how they work without having to use them.

Slide 3

Deconstructing ICT (1)

We also use “ICT” to refer to the infrastructure – the computers, the peripherals, the network – all the stuff that we buy in order to deliver various applications.

So it is on the right hand side that we can say that we buy all this equipment to teach ICT…

Slide 4

Deconstructing ICT (2)

– in the subject now called “Computing” – and to use ICT to improve education – in other words, for “education technology”.

We could in turn divide “Education Technology” into technology for…

Slide 5

Deconstructing ICT (3)

administration – in other words, the MIS – and technology for learning. And on the learning side…

Slide 6

Deconstructing ICT (4)

…we have things called learning platforms and other stuff called learning content.

That’s how the cookie crumbles when we analyse what we mean by “ICT”. But from an education technology perspective, I think we could do better.

Slide 7

Deconstructing ICT (5)

First, I am not so sure that there is a significant difference between the learning platform and the MIS. I think we see them as distinct because they developed at different times – but as learning technology catches up with our administration technology, I think you will see these two infrastructures merge.

Second, why should the subject Computing get this privileged position, when there are so many other subjects that also use technology? Music uses synthesizers, Geography uses weather plotting software, DT may start using 3D printers.

And this has been part of the problem. David Brown, who gave the keynote this morning, has no remit for education technology: he is a subject inspector. NAACE is a subject association. In every school up and down the country, the requirement for education technology has been seen through the lens of – and confused with – one particular subject on the curriculum.

So I would draw the diagram for education technology a little differently.

Slide 8

Deconstructing ICT (6)

Here, the “Primary hardware infrastructure” supports the learning platform (including the current MIS, the current learning platform, future systems like learning analytics and e-portfolio). The learning platform supports learning content. And learning content manages domain-specific software tools.

Slide 9

Deconstructing ICT (7)

The primary infrastructure and the domain-specific technology are both generic. You could probably buy them on Amazon if you wanted to. These are not our technologies.

The learning platform and the learning content are both education specific. If we don’t develop them, no-one else is going to do it for us. And that is where we have had a problem.

Our first difficulty in building good education technology is that we don’t have such a great understanding of what teaching and learning involves. The research base is poor.

Slide 10

Tooley report

The Tooley Report was published by Ofsted in 1998. It analysed all papers that had been published over the previous four years in the four most frequently cited academic journals. A sub-sample of 40 papers was then selected for in-depth analysis.

The report found that only 10% of the research was about teaching and learning. Only 15% of the research used robust, quantitative evidence. And only 36% was free from major methodological flaws.

If those three variables were independent (which they are probably not), you would be able to say that the proportion of research that was about teaching and learning, and that used robust quantitative evidence, and that was free from methodological flaws, was about 0.5%. We can’t say that with certainty – but we can say that the proportion of educational research that is useful to our purpose is vanishingly small.
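Spelled out, on that (generous) assumption of independence, the figure is just the product of the three proportions:

$$0.10 \times 0.15 \times 0.36 = 0.0054 \approx 0.5\%$$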

It is also clear from the Tooley Report that the process of peer review is dysfunctional and that there is almost no process of “contestation”, as Surowiecki put it. No-one tries to repeat other people’s trials; hardly anyone challenges other people’s assertions; most academics play a game of “academic Chinese whispers”. If someone else said it, particularly if that someone has a reputation, it must be true.

Slide 11

Teacher Proof by Tom Bennett

The result, as Tom Bennett has argued very effectively in his recent book, Teacher Proof, has been a long list of bogus educational theories, none of which are backed up by any significant evidence, many of which have been actively disproved, but most of which have been lapped up by what too often appears to be a gullible profession.

Slide 12

Stephen Heppell - credentials

It’s against that background that I want to talk about Professor Stephen Heppell, who has been an influential figure in the world of ICT. You could almost say that he has been its architect. He claims for himself the epithet of “the man who put the C into ICT”, having been an author of the influential 1997 Stevenson Report. He was called “Europe’s leading online education expert” by Microsoft; and “The most influential academic of recent years in the field of technology and education” by the DfE. He is giving one of the leading keynotes at the BETT Show next February.

Slide 13

Heppell interview

Here is an extract from an interview he gave to BBC Radio 4’s Today programme in 2008, in response to the publication of the Rose Review (listen from 03:40 through 04:45).

The first thing to notice is that Professor Heppell doesn’t answer the question (the mark of a politician, you might say, not an academic). But because he ducks the question, we can be fairly sure that the schools that follow the Heppell way do not do better in the tests. So if in your school you are interested in doing better in the tests, then perhaps you should bear that in mind before you listen to what Professor Heppell says.

He is clearly wrong in his prediction—the tests are not ebbing away. But we shouldn’t put too much weight on that because it wasn’t really a prediction—it was a rhetorical trick, of a sort that Professor Heppell uses frequently. Instead of answering the question, instead of even arguing a case, instead of saying that he wants the tests to ebb away, he says that they are ebbing away. What he is really saying is that the question isn’t relevant any more, the matter’s been decided, the train is already leaving the station, so there’s no point in disagreeing, it’s game over. Except, of course, it isn’t.

But maybe, in spite of all the constraints of accountability and the pressure of the league tables, you still have some sympathy with what Professor Heppell is arguing; perhaps you have a sense that the tests might indeed be impoverishing the quality of our education.

So before I address that concern, let me point out that both the assertions in the final paragraph are also untrue.

It is not true that we live in a world in which we have never met anything before. When I get ill, I go to the doctor, and he tells me – if I take my medicine – how long it will take to get better, with greater accuracy and reliability than a non-expert could, precisely because he has seen the same thing many times before. The ability to predict the future is the essential capability that comes with expertise. You want to know if a particular bridge will bear the weight of a particular lorry in particular wind conditions? Ask a civil engineer.

If you don’t believe in the possibility of predicting the future, then you don’t believe in the existence of expertise. And that is a strange position for a professor of education, given that nurturing expertise is what education is meant to be all about.

It is also not true that our tests have necessarily been based on a model of ‘have you met this before’ – but that is a point I will come to in a second. [Comment by Stephen Heppell, who disagrees that this slide represents his position fairly]

Slide 14

slide 113

So if you were to list all the things that Professor Heppell stands for, you would find that quite a lot of them also appear on Tom Bennett’s list of barmy theories for which there is no evidential support.

And you will also find…

Slide 15

slide 114

…that they are a fairly typical expression of the more general theory of constructivism. This states that

  • learners construct their own knowledge;
  • through a process of active discovery;
  • and that, as argued by Jean Piaget, this is done in response to an internal developmental route-map;
  • and that knowledge, being the private construct of the individual, is not true in any objective sense.

I agree that point 2 has merit. But points 1, 3 and 4 all imply that self-expression is more important than objective truth—and that position shows a shaky grasp of both psychology and epistemology.

Nor does the promotion of self-directed learning necessarily increase the amount of interactivity experienced by the learner. Points 2 and 3 often seem to work against each other—not only because the self-directed learner might prefer to slouch on the sofa with a large packet of crisps, but because the self-directed learner does not know what they would best be doing, or how to do it.

Slide 16

slide 115

In her recent book, Teaching as a Design Science, Professor Diana Laurillard from the Institute of Education, comments on what tends to happen when children are released onto the internet in the expectation that they will become independent learners (see slide above).

Slide 17

slide 116

And Professor Laurillard’s conclusion is confirmed by one of the early Ofsted reports on ICT in education (see slide above). “Internet research”, which sounds so advanced and grown up, so compatible with theories of independent learning, is too often in practice an easy homework that requires no preparation, no marking, little work and even less thought.

Slide 18

slide 200

So how would we use technology if we want to do more than just transmit information?

Slide 19

slide 201

Here is Professor Eric Mazur, who teaches Physics at Harvard, asking the same question at a recent Specialist Schools and Academies Trust conference (listen through to 20:08).

Don’t you find it surprising that a hall full of teachers cannot answer what must be one of the simplest and most important questions about their own professional practice? “Challenge” and “feedback” are warm, I suppose, but neither gives an accurate answer to the question. And I find “personality” and “relationships” very revealing—because that is how I think most teachers understand their job (see my recent Why teachers don’t know best for the full argument). As long as they can motivate, as long as they can inspire, as long as they can get the children to like them, the process of teaching isn’t really so important. Teachers see teaching very much as a personal craft, and not, as Diana Laurillard calls it in her book, a “design science” like engineering.

Another misconception is this business of assessment, which is commonly said to be either a distraction or reductive. Don’t, whatever you do, teach to the test.

But as Professor Mazur explains, practice is an essential part of effective teaching. And once the learner is practicing, why would I, as the teacher, not want to track what is going on, so that I can offer criticism and support, so that I can suggest what the student should do next? And once my student is practicing something, and I am monitoring that practice, how am I not assessing them? Assessment is not, as most teachers think, a distraction from teaching. It is its very essence. Which of course does not excuse the damaging effect on education of badly conceived tests.

And Professor Mazur also explains why Professor Heppell is wrong to say that tests are inevitably built on a model of ‘have you met this before’. Because the whole point of practice, the whole point of good assessment, is to ask the student to apply the abstract knowledge and skills that they have acquired to a new context. Only then do you know that they have understood and not just remembered.

Slide 20

slide 202

This is why Seymour Papert, who wrote Mindstorms in 1980, replaced the term “constructivism” with “constructionism”. The difference lies in what you are constructing. In constructivism, you are constructing knowledge. In constructionism, you are constructing a robot, a computer program, a song, whatever. As you do that, you find that you are also constructing your “intellectual structures” (just as Professor Mazur explains the process). But there is a big difference between constructing intellectual structures (or “belief” for short) and constructing knowledge. Constructionism takes the interactivity and discovery from Piaget, but rejects the child-centred relativism.

Papert observed that children learnt rapidly when they had the physical world to explore and bump up against—but this sort of environment was not available to them when they were tackling more abstract learning later on. So his proposal was to create “microworlds”: simulated, concrete, explorable environments that allowed older children to learn about abstract concepts in the same way that younger children learn about the physical world.
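The canonical microworld is the turtle geometry of Papert’s own Logo. As a flavour of what he meant, here is a minimal sketch using Python’s standard turtle module (which is modelled on Logo’s turtle); the particular scenario is my illustration, but the underlying idea, that a child can discover for themselves the “Total Turtle Trip Theorem” (a complete trip around any closed shape turns through 360 degrees) simply by walking the turtle, comes straight from Mindstorms.

```python
# A minimal microworld sketch in the spirit of Papert's Logo turtle,
# written with Python's standard turtle module (itself modelled on Logo).
# The learner walks the turtle around polygons and can discover that the
# turns on any full trip around a closed shape always total 360 degrees.
import turtle

def polygon(t: turtle.Turtle, sides: int, length: int) -> None:
    """Walk a closed polygon: at each corner the turtle turns 360/sides."""
    for _ in range(sides):
        t.forward(length)
        t.left(360 / sides)   # sides corners x (360/sides) degrees = 360

t = turtle.Turtle()
for sides in (3, 4, 5, 12):   # the more sides, the closer to a circle
    polygon(t, sides, 60)
turtle.done()
```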

Though it is a compelling proposal, thirty years later we can say that it hasn’t really worked. One very important reason is that the opportunity to develop such “microworlds” has, in the words of Diana Laurillard, been “tragically under-exploited”. No one developed the technology—and technology matters.

Slide 21

slide 203

Papert’s microworlds proposal is one specific example of a general approach to activity based learning—an approach in which the learner performs an action and some “other” provides a reaction. The “other” could be a physical object, an environment, a peer, a teacher, or a computer program (see my In the beginning was the conversation, for the full argument here).

When this interaction becomes conversational, we have…

Slide 22

slide 205

…the Socratic dialectic. Socrates used to sit down in the market square with his pupils and engage them with questions.

It is a model that has passed the test of time. In the nineteenth century, the American President James Garfield described his ideal learning environment as:

a log hut, with only a simple bench, Mark Hopkins on one end and I on the other.

And the same model is still used in the tutorial system at Oxford and Cambridge. It is the pedagogical gold standard, great in every way, except one: it doesn’t scale. Track a pupil through the school day and find out how much time they spend in meaningful conversation about their work with their teachers. However much we might try to encourage “dialogic” teaching, there are not enough Socrateses to go around. And there never will be.

Slide 23

slide 206

This problem was spotted in 1970 by Kim Taylor, Director at the Nuffield Institute. The problem that Taylor saw with comprehensive education had nothing to do with selection. The problem was one of scalability.

The undersupply of properly qualified teachers is of course a particular concern in shortage subjects—Computing is the case in point at the moment. But, given time, you can probably find someone to do the job, if you are prepared to accept a high level of variability in the quality of your teachers.

Slide 24

slide 207

Professor Dylan Wiliam, a researcher who has developed a reputation for looking closely at the data, suggests that the research shows a variability in learning outcomes, depending on which teacher you get, of a factor of four. I am not sure that we would find it acceptable if we found such inconsistent outcomes in the death rates associated with different surgeons in the NHS.

It is not that we don’t know how to teach—the problem is we don’t know how to teach at scale.

Slide 25

slide 208

The Teacher Toolkit is produced by Durham University and sponsored by the Education Endowment Foundation. The strategies rated as most expensive and least effective include smaller classes and teaching assistants: in both cases, hiring more staff turns out not to be particularly effective. What is most effective and least expensive are things like feedback, meta-cognition (which is understanding the learning process) and peer tutoring—all of these represent the sort of interactive, practice-based pedagogies that I have been discussing.

If hiring staff is so expensive, so ineffective and in many subjects so problematic, isn’t it fairly obvious that we should be trying to do more of the stuff in this top left-hand corner, and to do it with fewer teachers?

Of course, I am not saying that we should replace the teacher with the computer—all the research suggests that that would not be effective. I suspect that one reason is that role modelling is so important in learning.

What I am proposing is rebalancing away from staffing and towards technology, in an environment in which that technology is blended with traditional teaching and where it compensates the teacher for larger set sizes by making the teacher’s job significantly less stressful. This was precisely the argument made by the Nuffield Foundation in the 1970s, and since then the evidence for such a position has done nothing but grow stronger.

If you want to know what using technology to scale education might look like, listen again to Professor Mazur, explaining his teaching method, which he calls peer instruction (listen through to 29:40). I haven’t got time to show you the full demonstration – but it is an inspirational video and I suggest that you watch the whole thing.

But however good this method is, it is one thing to use it in an experimental environment, or with a lecture hall full of Harvard undergraduates, it is quite another to roll it out across the education system. How would you do that?

I haven’t got time to go through this use-case in detail (see the Conclusion to “Why teachers don’t know best” for some more thoughts) but given that not every teacher is going to be a Professor Mazur, I suggest that you will need some specialist software: software that comes with banks of appropriate questions, linked to analytics systems that can help teachers ask appropriate questions and that can help to integrate this pedagogy with other classroom activities. Specialist software that works out of the box and does not require three days’ INSET and the whole of the Easter holidays in preparation before you can use it effectively. And just as no-one has created any of Seymour Papert’s microworlds, no-one has created any software to support Professor Mazur’s peer instruction pedagogy or the associated question banks. No-one has created the technology and the technology matters.
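The decision loop such software would have to encapsulate is not mysterious. The thresholds in the sketch below follow the rule of thumb commonly cited from Mazur’s published method (roughly: under 30% correct, re-teach; between 30% and 70%, pair discussion and revote; over 70%, move on); the names and data structures are my own illustrative assumptions, not any existing product’s API.

```python
# A sketch of the decision logic at the heart of Mazur-style peer
# instruction software. Thresholds follow the commonly cited rule of
# thumb; all names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VoteResult:
    correct: int   # students who chose the right answer on the first vote
    total: int     # students who voted

    @property
    def share_correct(self) -> float:
        return self.correct / self.total if self.total else 0.0

def next_step(first_vote: VoteResult) -> str:
    """Decide what the class does after the first vote on a concept question."""
    share = first_vote.share_correct
    if share < 0.3:
        return "re-teach the concept, then pose a related question"
    if share <= 0.7:
        return "pair discussion, then revote on the same question"
    return "brief wrap-up explanation, then move to the next concept"

print(next_step(VoteResult(correct=14, total=30)))
# -> "pair discussion, then revote on the same question"
```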

Slide 26

slide 210

In view of this, I am not at all surprised by the summary of research from the CEM and the EEF.

Slide 27

slide 211

Nor am I surprised by their conclusion that:

It is the pedagogy of the application of technology in the classroom which is important: the how rather than the what.

I am not surprised, partly because this is another aspect of the current ICT orthodoxy: “never mind the technology, where’s the learning?”. You will hear it said again and again. To the ICT community, this is motherhood and apple pie stuff. But in my view, prioritising the how over the what, the practice over the technology, counts as another of our fundamental mistakes.

I’m not questioning the accuracy of the research—I am questioning its significance. Because researchers…

Slide 28

slide 212

…are a bit like this dog. They look out of the back of the car and observe where we have been. They don’t look out of the front of the car and control where we are going next. They might have their views about that, but their views are really no more valuable than anyone else’s—because driving the jeep is not their expertise.

Slide 29

slide 213

I should explain that this cartoon used to say “truth” and “reconciliation”, which is why you’ve got Desmond Tutu up there on the edge of the cliff. But now it says “what” and “how” instead.

The first problem with this formula is that we have very little leverage over the how, over the practice. We find it very difficult to change classroom practice. And to illustrate this point…

Video 1

I have stitched together a video from two talks given by Professor Coe and David Weston at the recent ResearchEd2013 (the original videos can be accessed here for David Weston and Professor Coe).

I think the final point made by David Weston is very perceptive—and it is connected to the answers given in response to Eric Mazur: teachers do not see what they do as a way of managing a process as efficiently as possible—they tend to see it as an expression of their personality and their values. And that is one reason why training is so ineffective and our attempts to improve classroom practice so unsuccessful.

That problem is compounded by the shortage of teachers, which I have already mentioned, and the consequent variability in the quality of teachers.

And finally, for all the stuff about how effective Assessment for Learning was in the trials, I think that doing it well is going to be difficult. You probably have me down as a bit of a teacher basher by now – but I’m not, really. Because I think that what we are asking teachers to do is next to impossible. If you have five classes in a day, with 30 students in each class, how can you realistically provide customised feedback and adaptive progression paths to every one of those 150 students, based on a detailed understanding of the particular cognitive difficulties that each of them faces? It’s going to be hard enough to remember their names.

Slide 30

slide 213

So for all these reasons, there is very little that we can do to change the how. Training is ineffective. We have a limited supply of good teachers. And we are not really sure whether providing personalised feedback and progression by hand is even practical anyway.

But there is another problem with this model of the what and the how, the technology and the learning, which is the perception that these are different things in the first place.

Slide 31

slide 216

Here is Diana Laurillard again, with a question at the annual conference of the Association for Learning Technology (listen through to 38:41).

In passing, note Professor Laurillard’s definition of “formal education”, which explains why education is subverted by theories of child-centred learning.

Second, note how she uses the word “technology”. Technology is not just a little box that sits in the cupboard; it’s not just the what. It is a process, it is education itself – the technology is the how.

Does Tesco buy the what—a new logistics system, say—at a cost of millions of pounds, install it, and then ask itself, “I wonder how we are going to use this kit?”. Of course not. How the technology is going to be used is one of the first questions asked before the technology is even created. That is what a requirements analysis is all about.

Our trouble is that no-one has ever done a real requirements analysis, because no-one has ever built, at scale, our own, education-specific technology. The technology we do have does not match our requirements, and that…

Slide 32

slide 213

…is why we have this sort of collective split personality, why we live in a world where the what doesn’t do the how.

There was a lot of talk at ResearchEd2013 about the difficulty of connecting teachers with researchers. And the same problem was described in a nice little passage by Kim Taylor in 1970.

Slide 33

slide 217

The present gap between research and daily application is such that teachers generally turn for help to those in the same boat with them, all awash in a vast sea. Professors who grandly philosophize the aims of education flash over in aircraft; those who evaluate its practices nose beneath in submarines…but our main anxiety is to stay afloat and make some sort of progress, guided, if must be, by ancient stars. We wonder how others in similar straits are getting on. “Have you tried to row facing forwards?” we shout above the racket of the elements; “No”, comes the answer, “but paddling with your feet over the stern helps”…and we suspect that the academics, secure from the daily fret of wind and wave, have forgotten what it is like to feel a little seasick all the time.

What the guys in that boat need from the airmen and the submariners is not another leaflet drop. It’s an outboard motor and a compass, and a cuddy where they can sleep. We need to provide teachers with the tools of the trade that at present they lack…

Slide 34

Deconstructing ICT (7)

…the tools of the trade – the education-specific infrastructure and the education-specific content.

“Don’t we have these already?”, you might ask. No we don’t.

Let’s start with content. At EduPub, a conference held in Boston just last week, Pearson said that they now saw content as code, not data. That is a geeky way of saying that content is activity, not information.

Slide 35

slide 218

The learning activity, with its iterative cycle of action, reaction, reflection and correction: that is the stuff of instruction, not information.

And this is another aspect of the current orthodoxy that is broken. Because we always talk about “learning content” as if it is information – Powerpoints, Word documents, YouTube videos. That’s not learning content – that’s information – and education is not a trip to the public library.

Because we have not developed our own appropriate, education-specific technologies, we have not realised the possibility of creating new forms of digital learning activity platform – the software – the “code” that Pearson was talking about last week in Boston – that will be the learning content of the future.

Once we have understood the nature of our content, only then can we specify our platform.

Slide 36

slide 220

The purpose of the learning platform in what I am calling here the “digital dialectic” is to manage and sequence learning activities.

At the end of every activity, the first thing that needs to be done is to assess the learning outcomes and update your profile of the student’s capabilities. From that point, you can offer useful criticism and decide what, in the current context, the student should do next.

It is in managing progression that you provide personalised, adaptive instruction. Once the next activity has been selected, then you may need to consider grouping, because unless the activity is solitary, a lot of the character of that activity will depend on who you do it with; and finally you may need to feed some information into that activity – just the right information to be assimilated in the course of the activity, like drizzling oil into a mayonnaise.

This model of the digital dialectic requires a single platform, a single infrastructure – managing many different learning activities: creative, exploratory, social, simulated – or activities in the physical environment.

Data interoperability is absolutely fundamental to this model. It is essential that data can flow freely between the different components.

As each activity is launched, it pulls data from the platform, giving it its instructional context – who are the learners? what are their roles? what are their objectives? what are the questions? what is the supporting information? what are the parameters – for this particular instance?

And as it finishes, the activity sends data back to the platform, representing the learning outcomes, the scores, the transcript of the interactions, the creative product, the reflections, the bookmarks and annotations.
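To make the shape of those two payloads concrete, here is a minimal sketch in Python. The field names simply follow the prose above; nothing here corresponds to any existing interoperability standard (that absence being exactly the problem).

```python
# A sketch, in plain data structures, of the two payloads described above.
# Field names follow the prose; this reflects no existing standard.
from dataclasses import dataclass, field

@dataclass
class LaunchContext:
    """What an activity pulls from the platform as it is launched."""
    learners: list[str]
    roles: dict[str, str]              # learner -> role in this activity
    objectives: list[str]
    questions: list[str]
    supporting_information: list[str]  # the "drizzled" input
    parameters: dict[str, str]         # settings for this particular instance

@dataclass
class ActivityOutcome:
    """What an activity sends back to the platform as it finishes."""
    outcomes: dict[str, str]           # learner -> assessed learning outcome
    scores: dict[str, float]
    transcript: list[str]              # record of the interactions
    artefacts: list[str] = field(default_factory=list)  # creative products
    reflections: list[str] = field(default_factory=list)
    annotations: list[str] = field(default_factory=list)
```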

Slide 37

slide 300

One of the key reasons that there isn’t a market for this sort of stuff is that there isn’t any data interoperability; and interoperability is fundamental to technology markets in general. What sort of market would we have for electrical appliances without the three-pin plug? What sort of market for recorded music without the long-playing record, the CD, or the MP3 file? What sort of railway without standard gauge track? What sort of e-commerce without TCP/IP and HTML? Interoperability standards are the sine qua non of the market and, for education technology, we do not have those standards.

I have a long history in this field. In the mid 1990s, I joined a BESA project called the Open Integrated Learning System – OILS. But that folded in 2000 when SCORM came out of the defence community in the US.

When Charles Clarke launched the Curriculum Online idea in 2002, I wrote a letter to him, pointing out that he needed to do something about interoperability if he was going to get any content that did anything useful. I was invited onto Curriculum Online’s Technical Standards Working Group, and then they set up a Learning Platform Stakeholders Group, which met for two years, from 2003 into the autumn of 2005. But just as the LPSG was about to publish the Learning Platform Conformance Regime, the man from Becta turned up and said, “we’re closing this effort down because we’re going to do something much better”.

What they did was run the 2006 Learning Services Framework procurement, which was a catastrophe.

Slide 38

slide 301

I became concerned, during 2006, that they were not going to enforce any of the interoperability standards that are fundamental to what a learning platform, what a learning infrastructure, is. So I asked for, and was given, a meeting with Stephen Crowne, the incoming Chief Executive of Becta, in October 2006, and Mr Crowne assured me that all the requirements for interoperability were being enforced. Just to prove the point, he gave me this book of test scripts: 300 pages full of tests to ensure that candidate solutions conformed to a specification called QTI.

Now QTI is nothing very exciting. It is just a format for passing multiple choice tests around. What is more—the systems being procured did not have to mark the tests or provide any feedback—they just had to display the questions and let the users play with the buttons. But hey, after 10 years, at least it was a start!

It was only later that I was passed documents that Becta had sent to all the tenderers some time before my meeting with Stephen Crowne. Almost two months earlier, everyone had been told that they need not conform to QTI version 2.0. That was half the book gone.

And one month before that, everyone had been told—again in a confidential document that was not passed to anyone except the companies that were competing—that if they found any tests that they couldn’t pass, they were allowed to re-write the tests.

So when I was given this book by way of assurance that Becta was enforcing the requirement for interoperability, it was a lie—a lie to save Becta’s face because Becta had rushed into something that they didn’t understand.

And at root, their misunderstanding was that they saw the learning platform not as an infrastructure but as a one-stop-shop.

Slide 39

slide 302

In this Becta definition of what a learning platform is, the key word is “integrated”. If you buy a set of integrated tools, they may communicate with each other but they won’t communicate with anybody else. What you are buying is a closed system. And when Becta gave privileged access to the market to a few chosen companies, to supply that market with closed systems, they were not stimulating innovation – they were blocking innovation. And they were not building a market – they were destroying the market.

I made a fuss by lodging a complaint with the European Commission on the grounds that what had been advertised as mandatory criteria were in fact not mandatory criteria. And although the European Commission did not in the end investigate my complaint, on the grounds that they didn’t have the technical resources to do so, I think my campaign did help raise some questions about Becta’s competence.

Slide 40

slide 303

The Chief Executive’s report in the autumn of 2007 showed that Becta’s own approval rating, instead of rising to 70%, which was their target, fell from just under 50% to under 20%.

And what was Becta’s reaction to this catastrophic collapse of confidence? You might have expected them to say “whoa, did we do something wrong?”. But they didn’t. They did two things, both of which are documented on this page of the Chief Executive’s report. First, they stopped monitoring their own approval rating. And second, they launched a PR campaign.

Slide 41

slide 304

And not only did they not acknowledge the car crash – they didn’t learn from it either. In 2010, they produced this justification for extending the procurement framework to include MIS systems, on the assumption that this would liberalise the market for MIS.

The analysis is deeply flawed. For example, it dismisses the idea that schools should be encouraged to buy their own MIS systems on the grounds that such a decision would take secondary schools an average of 19 man-days. A straw poll conducted for me by the Specialist Schools and Academies Trust showed that in practice it took a maximum of 2 days (see my Stop the IMLS Framework for more analysis of this report).

And that is just the beginning. But it was enough to make the case for the IMLS Framework, which went ahead under the auspices of the DfE’s procurement division, and even though hardly anyone is buying off it, it still defines the DfE’s formal position.

Which explains why we don’t have a market for education technology.

Slide 42

slide 400

So what is there that we can do?

Slide 43

slide 401

First, let me summarise my position so far.

If you accept those premises, then it follows that your role as procurers of technology is very important. You can’t blame the industry if they find that they are required to pitch for some godforsaken framework. They have to make their living and from the point of view of a company that wants to do well, the customer is always right.

And the customer, nowadays, is you. That means that collectively, you are in control. You don’t have to innovate. You don’t have to write the technical specifications. But you do have to ask for the right things. And pay for them.

Tesco doesn’t get its logistics system as a free download from HomeMadeSoftware.com—and unless you think that transforming the quality of education is so very much less complicated than selling cabbages, then neither should you.

Slide 44

slide 402

You should ask for software that supports activity-driven pedagogy in the classroom.

You should ask for disaggregated software that supports external launch. That means transparent single sign-on, in a manner that allows the teacher to click a button and have the whole class, immediately, doing exactly the activity you have assigned to them. No manual logon, no menu to navigate. (There is a sketch of what such a launch might look like at the end of these asks.)

You should ask for software that returns performance data to common mark books. Until you have all your data in one place, no-one is going to be able to do any halfway decent learning analytics or any sensible progression management.

You should ask for software that automatically sends and receives student artefacts – by that I mean things they have created, be it an essay, an equation, a drawing, a solution. No one is going to make e-portfolios work until we have interoperable creative tools.

You should ask for classroom response systems that automatically interface to any third-party software. How can anyone develop software to help you implement Professor Mazur’s peer instruction techniques, when the clickers that you bought from company X interface only to software provided by company X?
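To illustrate the second of those asks: here is a sketch of the kind of signed, transparent launch that “no manual logon” implies. The platform mints a short-lived token for each student, and the third-party activity verifies it and starts immediately. This is an illustration of the principle only; the shared-secret scheme and token format are my assumptions, and in the real world a standard such as IMS LTI defines this sort of handshake.

```python
# A sketch of a transparent, signed launch: no manual logon, no menus.
# The shared secret and token format are illustrative assumptions; real
# deployments would use an agreed standard (e.g. IMS LTI) instead.
import base64, hashlib, hmac, json, time

SHARED_SECRET = b"agreed-between-platform-and-activity"  # assumption

def mint_launch_token(student_id: str, activity_id: str) -> str:
    """Platform side: sign who is arriving and which activity they get."""
    payload = json.dumps({"sub": student_id, "act": activity_id,
                          "iat": int(time.time())}).encode()
    sig = hmac.new(SHARED_SECRET, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode() + "." +
            base64.urlsafe_b64encode(sig).decode())

def verify_launch_token(token: str) -> dict:
    """Activity side: refuse the launch unless the signature checks out."""
    p64, s64 = token.split(".")
    payload = base64.urlsafe_b64decode(p64)
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(base64.urlsafe_b64decode(s64), expected):
        raise ValueError("bad signature: refuse the launch")
    return json.loads(payload)

token = mint_launch_token("student42", "fractions-activity-07")
print(verify_launch_token(token))  # the activity knows who has arrived, and why
```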

Slide 45

slide 403

So it follows that the things that you should avoid are one-stop-shops and proprietary bundles. Just because you want product A from a particular company, doesn’t mean that you should allow yourself to be forced to buy products B, C, D and E from the same company. Just say no.

Avoid anyone trying to tell you their product is interoperable when that means transferring data manually with CSV files or spreadsheets, or requiring manual upload. Good technology is all about automation and consistency; it is about saving work, not creating it. When it comes to getting even the best software used, the atmosphere in schools is about as toxic as outer space. If there is any reason at all for something not to happen, it won’t.

The same argument applies to the need for training. Any requirement for significant levels of training is a symptom of bad design. I don’t need to be trained before I can use my iPhone. Training can only be justified for intermediate users who have already bought into the concept, who want to optimize their use of a product.

Avoid OJEU frameworks – they don’t improve the quality, they reduce transparency, and they restrict the market.

Don’t stick with your incumbent provider without looking around to see if there is anything better out there. And don’t take advice uncritically from people giving talks (or indeed, from people writing blogs). Most of it is wrong.

Slide 46

slide 404

Finally, if you ever get a chance, what should you tell the DfE that it could do that would help?

Sebastian James was asked by Michael Gove to write a Review of Education Capital, essentially to justify the closure of the BSF programme. When it came to technology, James recommended that the government should set up an “online price comparison catalogue”, which would allow what he called virtual aggregation. By achieving price transparency, you could be sure you were getting a good deal, but you could still buy exactly what you wanted.

The Education Group of the IT trade association Intellect is advocating a very similar concept – what we are calling a “G-Cloud for Education”. G-Cloud, if you haven’t come across it, is the government’s online catalogue of online services. So this would be a sort of online product catalogue, a little like Curriculum Online, but better. I think it would be hugely beneficial, providing at least three key benefits:

  • it could ensure price transparency, which is the benefit that James was focusing on;
  • it could provide a platform for user reviews and professional reviews, helping us to identify “what works” and what doesn’t;
  • it could provide a platform for the certification of products against industry interoperability standards, so that as a buyer you could know that if you bought product x and product y, they would work together according to choreography z. And not only would those certifications provide transparency to you, the user, but they would also provide the incentives to industry to produce the effective interoperability standards that are required if any of this is going to work.

That is what the Department for Education should almost certainly do. Judging by previous form, they almost certainly won’t. But even if we wake up tomorrow, next week, next year and still find that there is no G-Cloud, or any other kind of useful market infrastructure, I hope you will still take away from my talk two things. First, that, contrary to everything you will be told by almost everyone who claims to know anything about ICT, the technology really matters. And second, that the only way we are going to get the right sort of technology, the education-specific technology that is going to make the difference, is if you, the buyers, ask for it.

13 thoughts on “It’s the technology, stupid!”

  1. [ Refers to text at https://edtechnow.net/2013/11/10/wheel/#_cref-5077-a ]

    Oh dear:

    “But because he ducks the question, we can be fairly sure that the schools that follow the Heppell way do not do better in the tests. So if in your school you are interested in doing better in the tests, then perhaps you should bear that in mind before you listen to what Professor Heppell says”
    You will be comforted (and perhaps apologetic) to hear that the schools who listen to me – and I am always careful to say that there is no one right way of course – do get dramatically better test results. Tests matter, kids only get one chance at being kids and as we wait for the tests to improve they need to pass the ones we have and I’m proud of how exceptionally they do – as I’ve been hearing once again from schools tonight. I’ve never suggested otherwise.

    “He is clearly wrong in his prediction – the tests are not ebbing away”
    They are indeed. For example, from 2014, PISA, with all its faults, will be introducing Collaborative Problem Solving. Exactly as I predicted.

    And your list of “list all the things that Professor Heppell stands for” is not a list of things I stand for, or recognise. Your “inventions” are indeed often the polar opposite of what I stand for. For example I am regularly abusive about the use of the term “21st century” anything – which since we are already a good many years into the 21st C seems to be a bit late, to say the least.

    • Stephen,

      Thank you, first, for replying. I think that it is only through discussion and debate that progress can be made on these issues.

      At the end of my reply, I want to react to something that you said in one of your recent videos, with which I strongly agree. But first, I want to react to three points in your reply, with which you will not be surprised to hear that I do not agree.

      1. You say that “the schools who listen to me…do get dramatically better tests results”.

      If your schools do get better test results *as a result of adopting your teaching strategies* (many of which are associated with particular applications of technology), then:

      (a) why did you not take the opportunity to say so on the BBC Radio 4 Today programme, when asked precisely this question?

      (b) why do you not produce the evidence to challenge the conclusion of the CEM that “the experimental evidence does not offer a convincing case for the general impact of digital technology on learning outcomes”?

      The core role of academia, after all, is to establish the truth on the basis of evidence and argument. It seems to me to be a shame that you are prepared to be so emphatic in your opinions when you do not produce any supporting evidence. A slightly more tolerant attitude to those who dare to challenge you might help too.

      2. That I critique a representation of your position that is based on my own inventions, particularly with regard to your support for “21st century learning”.

      My critique of your position is based on an analysis of the BBC’s recording of your 2008 Radio 4 interview. This is clearly not an invention.

      It is in the following slide that I try to summarise your position. The purpose of this slide is not to provide the material for a second critique, but to make the point that your emphasis on “learning” rather than “teaching” or “education” is a characteristic of a much wider intellectual doctrine of constructivism.

      I am surprised that you find this list of bullet points so objectionable. It was not intended to be so and if any of them are wrong, I will be happy to correct them. If the overall impression is misleading, I would not hesitate to offer the apology that you seem to think you are owed.

      But the one point that you choose as something that is the “polar opposite” of what you stand for is “’21st century’ anything”. How strange, then, that you should have spent so much time over the last 10 years talking about 21st century this and that:

      On your blog at http://www.heppell.net/weblog/stephen/ (which runs until 2007) you say:

      * “most peple seem to get into a bit of a mess when it comes to assessmnt for 21st century learning. I’ve been battling away, making quite useful prgress I think, to stop assessment being the barrier that it has so often been.”

      * “Information and Technology [without the Communication] were simply not enough for learning in the 20th century, let alone the 21st”

      * “As we look at ICT in the 21st century it is clear that shared community spaces and inter-group communications are a massive part of what excites young people”

      * “Our star sportspersons, from Ellen MacArthur via Nigel Mansell back to Seb Coe, are typically individual stars. This jars with the 21st century”

      * “”Death to coursework” they cry, brandishing their shredders. But of course, it’s already the 21st century and we can’t turn off progress.”

      * “But the wait is over; schools have decided that anyway, in the 21st century, they should simply get on with it and leave strategy, policy and speeches struggling (and failing) to keep up”.

      * At http://www.slv.vic.gov.au/audio-video/stephen-heppell-21st-century-learning is a recording of a talk you gave in 2008 in Victoria: “Stephen Heppell on 21st-century learning”.

      * The 2009 YouTube video at http://www.youtube.com/watch?v=-YQ45v_hceE is entitled “Twenty first century learning”, carries the description, “Stephen Heppell talks about learning in the 21st century” and starts with you saying “Twenty-first century learners really need to be ambitious”.

      All of this from the man who says he is “regularly abusive about the use of the term “21st century” anything”! I couldn’t find any example of you abusing “21st century learning” – so perhaps you can help me out and give me a link?

      The quotes above make the general argument that things must change merely *because* we are now in the 21st century – a fallacy widely recognised as the “argumentum ad novitatem”. In your reply to my post, you say that “since we are already a good many years into the 21st C [talking of 21st C anything] seems to be a bit late, to say the least”. But this is not an argument against argumentum ad novitatem – rather the opposite. You seem to be saying that the 21st century is old hat. What a shame that we have to wait for so long before we can start talking about “twenty-second century learning”.

      3. You say that “They [the tests] are indeed [ebbing away]. For example, from 2014, PISA, with all its faults, will be introducing Collaborative Problem Solving. Exactly as I predicted”.

      Interesting that the only way in which you can prove the truth of your last prediction is by making another prediction. And of course, this silences any evidence-based debate, because how can anyone contest your interpretation of a report that has not yet been published?

      It is still worth making two observations about PISA.

      a) PISA is a research study and “Introducing Collaborative Problem Solving” is not within its power. It may of course *report* on the effectiveness or extent with which “Collaborative Problem Solving” is used as an assessment strategy. Let us wait and see what it says before any of us make any presumptions.

      b) I think it is significant that you see the introduction of “collaborative problem solving” into the tests as evidence that “the tests are ebbing away”. That is presumably because the tests are about individual ability and you see CPS as a measure of group capability.

      Indeed, I would see any move to assess the performance of transitory groups as highly damaging to any assessment system. It is the individual who is going to apply to university or a job, not the group. And if the assessment of the individual is going to be based on the performance of the group, this strikes me as enormously problematic: unfair to individuals who find themselves in a poor group and damaging to social mobility.

      But, judging from their slide show at http://www.slideshare.net/OECDEDU/oecd-international-assessment-of-problem-solving-skills-12797044, this is not what PISA is talking about at all. Their draft definition of collaborative problem solving on slide 8 is “the capacity of an individual to effectively engage in a process whereby two or more agents attempt to solve a problem by sharing the understanding and effort required to come to a solution”. So this is still about assessing individual capability – and that does not provide any evidence at all that “the tests are ebbing away” – just that they are evolving.

      Still, it strikes me that assessing the performance of individuals in collaborative situations is a very challenging thing to do reliably and at scale.

      Finally, in researching this reply, I came across one recent statement that you made, Stephen, with which I completely agree. It comes in a 2012 video called, predictably, “A 21st Century Education” at http://newlearninginstitute.org/film-series/a-21st-century-education/technology-and-21st-century-learning.

      You say:

      To get a first class honours degree as a student in the 1950s, I’d have had to produce something that would astonish my professors. They’d look at my script and say ‘Cor, look at this! Have you seen what he’s put? Amazing stuff!’ They’d be surprised by my answer. Now, to get a first class honours degree, I have to produce the least surprising script. I produce the script that’s been anticipated absolutely by everybody else – that fits the mark scheme exactly. So in 50 years, we’ve turned learning around from something that valued ingenuity and creativity and mutuality [I’m not so sure about that one] and imagination to something that rewards conformity and certainty.

      With only the one noted caveat, I agree – and I think that the move to increasingly formulaic assessments became particularly problematic in the 2000s. So our reference point for an assessment system that really worked, one that rewarded creativity and insight, was the 1950s? Maybe you have more in common with Michael Gove than you’ve been letting on?

      And you might also agree with me (since I think it follows from this same position) that the problem we face is not that our traditional, enlightenment view of education was fundamentally flawed, but that we have not managed to scale it very effectively, as we introduced comprehensive education in an attempt to educate the many and not just the few.

      If people leading the attempt to apply technology to schools could focus on the problem of scaling education, rather than changing it into something completely different, then I think we would find that we would have, by now, been able to point to some rather more substantive evidence of improved performance in the tests. We might also have rather less formulaic tests.

      Crispin.

  2. In haste – I don’t begin to have the spare time you enjoy:

    • 21st C link? see for example from 2010: http://stephensphone.blogspot.com.au/2010/06/tomorrow-today.html
    • Radio 4? Media training is clear: focus on what you want to say and don’t be sidetracked – I did PD for the R4 Today team – there is no way something as complex could be covered in a single sentence. So I didn’t.
    • a convincing case for the general impact of digital technology on learning outcomes? Making learning substantially better is not hard, but it is hugely complex and it certainly isn’t simply adding digital technology – although it is a significant ingredient.

    Nice to chat? it would be – come to BETT – but perhaps be a little less aggressively hostile?

    S

    • Thank you Stephen, I will take you up on your offer of a meeting at BETT and look forward to the discussion. I will tweet you during the show to arrange a mutually convenient time.

      Thank you too for the link to your blog, where you do indeed complain about the use of the phrase “21st Century Learning”. I acknowledge that this addresses a number of the examples that I cite, where your videos and talks may have been described by conference organisers rather than by yourself. But it is also true that you appeal to the 21st century in your own writings in a way that I think supports my point about the “argumentum ad novitatem”. Indeed, the post that you cite in your defence makes exactly the point that I was making in my previous comment about being impatient to get on to the next century/millennium. In short, I think the point I was making in the slide is fair, though perhaps if I had seen that post, I would have chosen a different phrase with which to summarise it. I will add a note to the text to this effect shortly.

      I entirely agree with you about the complexity of teaching and learning and the consequent need to avoid simplistic prescriptions.

      On the handling of short media interviews, I understand the problem but would stand by my portrayal of your position on summative assessment and expertise, which I think was at the heart of your responses in that interview, as well as being echoed in many other presentations and writings. At the same time, and as I also say in my previous comment, I agree with you on the dangers of formulaic assessment and the taking of snapshots. I believe in the potential for technology to support more reliable, “longitudinal” assessments, which nevertheless avoid the problems associated with much traditional coursework. I would also be in favour of tracking a wider range of capabilities (including social and teamwork abilities), in ways that nevertheless make rigorous judgments of these capabilities and do not fudge deficits, where they are judged to exist.

      With reference to my “aggressive hostility”, I think that you will find that I am very pleased to have a constructive and courteous (though probably still robust) debate, in which all participants are willing to critically re-examine their own positions with an open mind, while respecting those held by others. The target of my hostility has not been the views of others but the failure among the ICT community to engage in such a substantive debate – with the result, I believe, that we have tended to adopt as gospel a set of simplistic and poorly thought-through doctrines. The consequences, I suggest, have been serious. The reputation of education technology has been damaged and the opportunities to bring genuine improvements to education have been squandered. Faced with what I am sure you will agree is such an important issue, and a community (like many) that marginalizes anyone who criticizes its core beliefs, aggressive hostility is often the only way to get important arguments heard at all. I think you will understand this point, as someone who admits to being capable of being abusive from time to time yourself.

      While I look forward to our meeting at BETT, a private chat is not enough to bring about a re-examination of publicly held orthodoxies. It is the public debate that matters. I shall be participating in keynote debates on the future of MOOCs with the HE community at Online Educa Berlin in December, and with the corporate training community at Learning Technologies in London in January. From where I am standing, these communities seem more willing to encourage constructive debate than my own “home” community, which is UK K-12/schools. But I hope that when we meet at BETT, we may be able to discuss how that situation might be remedied.

      Crispin.

  3. The voice rec bit was Orange’s Wildfire, the rest is of course lost with the QCA – I still have all the specs of course, and the chap who did the hard work is still about in NZ. And the big mainframe we needed then has been long since overtaken in power by desktops… chat at BETT

    • Stephen, I think it is an interesting case study and I might write up my thoughts at greater length over Christmas.

      There is quite a lot in the eVIVA report that I am not convinced by: the rigour of the methodology (which only matters if you place too much weight on the conclusions), its constructivist hinterland, its particular take on ipsative assessment (I will try and get hold of Mabry), or its use of criterion referencing, which I shall be talking about in Berlin in my talk on “Measuring capability”. I think “I can” statements tend to be reductive representations of the aims of education, and the job of relating the abstract learning objective to its concrete instantiation is too difficult for the student to handle – it is hard enough for the teacher. Which raises questions about exactly what is meant by meta-cognition.

      But I certainly agree that there is considerable value in e-portfolios and in reflective learning and peer review, both so far under-exploited. And I see the need both for capable e-portfolio systems and for an assessment management platform that can support the iterative transactions involved in setting assignments, returning work, annotating work (by teachers and/or peers), and redrafting (again, not necessarily by the original author) – the sketch below illustrates the shape that such a transaction log might take.
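      To make those iterative transactions concrete, here is a minimal sketch of how such a log might be modelled. It is purely illustrative: the type names, roles, and fields are my own assumptions, drawn neither from eVIVA nor from any published specification.

      ```typescript
      // Hypothetical data model for the assignment workflow described above.
      // All names are illustrative assumptions, not a published specification.

      type Role = "teacher" | "student" | "peer";

      type TransactionKind =
        | "assign"    // the teacher sets the task
        | "submit"    // the student returns work
        | "annotate"  // a teacher or peer comments on a draft
        | "redraft";  // a new version, not necessarily by the original author

      interface Transaction {
        kind: TransactionKind;
        actor: { id: string; role: Role };
        timestamp: string;     // ISO 8601, so every system parses it identically
        targetEntry?: number;  // which earlier log entry this annotates or redrafts
        payloadUri: string;    // the brief, the work, or the comment itself
      }

      // An assignment is the ordered log of transactions against it, so the
      // whole drafting history stays available for longitudinal assessment.
      interface Assignment {
        id: string;
        title: string;
        log: Transaction[];
      }

      // Example: a task is set, a draft is submitted, and a peer annotates it.
      const essay: Assignment = {
        id: "a-101",
        title: "Essay assignment",
        log: [
          { kind: "assign", actor: { id: "t-1", role: "teacher" },
            timestamp: "2013-11-01T09:00:00Z", payloadUri: "briefs/essay.html" },
          { kind: "submit", actor: { id: "s-9", role: "student" },
            timestamp: "2013-11-08T16:30:00Z", payloadUri: "work/essay-v1.html" },
          { kind: "annotate", actor: { id: "s-4", role: "peer" },
            timestamp: "2013-11-09T10:15:00Z", targetEntry: 1,
            payloadUri: "comments/essay-v1-peer.html" },
        ],
      };
      ```

      Because every event, including redrafts by other hands, is just another entry in the log, the platform can reconstruct who contributed what at any point – exactly the kind of longitudinal record I describe above.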

      I also think that the key to allowing any new approach like this to grow and improve lies in the evolution of the software that encapsulates the method (as I argue in the post above). Once the eVIVA software was lost, so was any hope of making further progress. In this respect, I am sceptical about the hope so often placed in open source. We need a proper market that supports serious and sustainable (i.e. commercial) investment in serious software.

      The report is also interesting in covering the resistance of teachers to the introduction of new methods – which again backs up my argument (above) that training is not the lever for change. It sounds as if the project worked at all, even to the point of being implemented, only because in year 2 researchers from Ultralab moved in and implemented it themselves, intervening directly in the teaching process.

      I think it is absolutely essential that any new innovation makes the life of teachers easier from day 1. Otherwise, you might as well expect water to flow uphill. And the key to that is to design genuinely user-friendly software that automates anything that can be automated – and to do that, you need good interoperability standards. In my book, transferring CSV files and manual uploading are a no-no. I see the first step towards good e-portfolio systems and assignment management platforms as starting in a totally different place: creating good, interoperable, creative tools. And that will not happen until we have good data interoperability specifications for education (the sketch below gives a flavour of the sort of exchange I have in mind). Part of the problem with eVIVA, I think, was that the creative tool was just a voice recorder, which may seem easy to use from a technical point of view, but is really much too thin and unstructured a client to be used effectively at school level.
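      To give a flavour of what that might look like in practice, here is a hedged sketch of a creative tool publishing a piece of work directly to a portfolio service, replacing the CSV export and manual upload. The endpoint URL, payload shape, and field names are assumptions of mine for illustration; no existing specification is being quoted.

      ```typescript
      // Hypothetical push from a creative tool to a portfolio service.
      // Endpoint, payload shape, and field names are illustrative assumptions.

      interface PortfolioItem {
        learnerId: string;
        assignmentId: string;
        mimeType: string;    // ideally a structured document, not just raw audio
        contentUri: string;  // where the artefact itself lives
        createdAt: string;   // ISO 8601, so every system parses it identically
        draftOf?: string;    // link to an earlier version, if this is a redraft
      }

      // One structured, machine-readable call replaces the CSV export,
      // the email attachment, and the manual upload.
      async function publishItem(item: PortfolioItem): Promise<void> {
        const response = await fetch("https://portfolio.example.org/api/items", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify(item),
        });
        if (!response.ok) {
          throw new Error(`Portfolio service rejected the item: ${response.status}`);
        }
      }
      ```

      The point of the structured payload is that everything downstream – filing, notification, version-linking – can be automated by the receiving platform, without a human ever touching a spreadsheet.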

      But as you say, I look forward to discussing this further at BETT.

  4. A very thought-provoking post and subsequent discussion. I’d like to add to this list of barriers to the development of effective educational technology: the lack of money in the schools system makes it a small pond for technology companies to fish in. Of course, if the data interoperability solution were an international one, then educational solutions developed in, say, the US could be sold profitably in many other jurisdictions. I won’t be holding my breath waiting for that to happen, though.

    • Thanks very much for the comment, Alex.

      I completely agree with you that cost is an issue. One trouble with drawing the analogy with business ERP systems is the very substantial cost of such systems. But:

      (1) these are normally based on proprietary platforms (so we might expect one based on open standards to be a little cheaper);

      (2) costs of hardware and software continue to fall rapidly;

      (3) business processes vary widely between companies, while education, I would maintain, follows broadly similar patterns everywhere. This last point is subject to the caveat that “content”, in the sense of curriculum- and culture-dependent information, does not travel well, though I think that software which encapsulates abstract pedagogy (which is the difficult bit) will travel well;

      (4) Until recently (the move to a knowledge economy, the challenge to the West from the BRICs, PISA), I don’t think that anyone really took education that seriously. It doesn’t help, in a democracy, that children don’t vote and that many of our schools seem to be run more as a baby-sitting service. I think this perception is changing: people are beginning to realise that education is of fundamental importance to our social cohesion and economic performance, as well as being a nice-to-have from the point of view of individual fulfilment.

      (5) When people say that serious ed-tech might be expensive, they should look at the cost of good staff. I do not argue that these are alternatives in absolute terms, but they may well be alternatives at the margin.

      On international standards, again, I agree that these are crucial. I chair the committee for edtech in BSI, which represents the UK in CEN (Europe) and ISO/IEC (International). There are lots of ineffective international specifications but no international standards. So my proposal is that, by driving a local test-bed / incubator, we can create standards that work, promoting them rapidly to international level as soon as they have been proven.

      I don’t claim that any of it is easy. I am just arguing that it is worth trying.

      And thanks again for the comment. Crispin.

  5. Pingback: “We’ve always done it this way” | nick daniels / blog

  6. Pingback: LOGO: A Maker’s Coding Language – Nataly Moussa
