The consensus is that we should not mind the technology but that we should focus instead on the learning. The consensus is wrong.
This is the transcript of a presentation I gave at the EdExec conference, held by ICT Matters in London on 6 November 2013. The ostensible argument in my talk is that “procurement matters”, which, I will admit, probably isn’t going to set your heart racing. But perhaps it should. The reason why procurement matters is that technology matters – and this is a point that much of the ICT community does not generally admit. Time and again, you hear the old saw being repeated, “never mind the technology, where’s the learning?” Most of my talk addressed this point – and in doing so, I take on (as is my wont in this blog) a lot of shibboleths. I summarise some arguments with which those of you who have read previous posts may be familiar, and I also shadow some arguments that I will develop in greater detail in future. And I return to a promise that I made in my first post to this blog in January 2012, which is to discuss in rather more depth than I have done before why Becta’s approach to procurement was so lamentable.
You hear a lot of talk about “the wisdom of the crowd”. It underpins a lot of theories about Web 2.0 and social learning.
What people generally mean by this phrase is that the majority is right most of the time. But what is clear from that belief is that the majority has not read the book.
The Wisdom of Crowds, written by James Surowiecki in 2004, does not argue for consensus. Quite the opposite. It argues that crowds are very often foolish:
- when they indulge in group-think;
- when they herd together for defensive reasons (“no-one ever got sacked for buying IBM”);
- when rumour “cascades”, through crowds that lack basic information;
- when innovation is required.
According to Surowiecki, crowds are only wise in specific circumstances. “The best collective decisions are the product of disagreement and contest, not consensus or compromise… The best way for a group to be smart is for each person in it to think and act as independently as possible.”
It’s a paradox. As soon as the crowd thinks it’s wise, you can be sure that it isn’t.
This matters because it is my contention that we have all been far too subject to group-think, to herding, and to the cascade of misinformation. It is my contention that almost everything we believe in the ICT community is wrong.
The misapprehension starts with the very words we use.
The report, Shut down or Restart?, published by the Royal Society in January 2012, recommends that we stop talking about “ICT” altogether on the grounds that the word has five different meanings. Most importantly…
it refers both to the use of ICT (to improve teaching and learning across the board) and to the teaching of ICT as a subject.
We also use “ICT” to refer to the infrastructure – the computers, the peripherals, the network – all the stuff that we buy in order to deliver various applications.
– in the subject now called “Computing” – and to use ICT to improve education – in other words, for “education technology”.
…we have things called learning platforms and other stuff called learning content.
First, I am not so sure that there is a significant difference between the learning platform and the MIS. I think we see them as distinct because they developed at different times – but as learning technology catches up with our administration technology, I think you will see these two infrastructures merge.
Second, why should the subject Computing get this privileged position, when there are so many other subjects that also use technology? Music uses synthesizers, Geography uses weather-plotting software, and DT may soon start using 3D printers.
And this has been part of the problem. David Brown, who gave the keynote this morning, has no remit for education technology: he is a subject inspector. NAACE is a subject association. In every school up and down the country, the requirement for education technology has been seen through the lens of – and confused with – one particular subject on the curriculum.
Here, the “Primary hardware infrastructure” supports the learning platform layer (the current MIS, the current learning platform, and future systems like learning analytics and e-portfolio). The learning platform supports learning content. And learning content manages domain-specific software tools.
The primary infrastructure and the domain-specific technology are both generic. You can probably buy them on Amazon if you wanted. These are not our technologies.
The learning platform and the learning content are both education specific. If we don’t develop them, no-one else is going to do it for us. And that is where we have had a problem.
The Tooley Report was published by Ofsted in 1998. It analysed all papers that had been published over the previous four years in the four most frequently cited academic journals. A sub-sample of 40 papers was then selected for in depth analysis.
The report found that only 10% of the research was about teaching and learning. Only 15% of the research used robust, quantitative evidence. And only 36% was free from major methodological flaws.
If all those three variables are independent (which they are probably not) you would be able to say that the proportion of research that was about teaching and learning and that used robust quantitative evidence and that was free from methodological flaws, was about 0.5%. We can’t say that with certainty – but we can say that the proportion of educational research that is useful to our purpose is diminishingly small.
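The multiplication behind that “about 0.5%” figure can be set out explicitly – bearing in mind, as above, that the independence assumption is generous:

```python
# Back-of-the-envelope arithmetic behind the "about 0.5%" figure,
# assuming (generously) that the three proportions are independent.
share_teaching_and_learning = 0.10   # research about teaching and learning
share_robust_quantitative = 0.15     # using robust, quantitative evidence
share_methodologically_sound = 0.36  # free from major methodological flaws

useful_fraction = (share_teaching_and_learning
                   * share_robust_quantitative
                   * share_methodologically_sound)
print(f"{useful_fraction:.2%}")  # 0.54%
```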
It is also clear from the Tooley Report that the process of peer review is dysfunctional and that there is almost no process of “contestation”, as Surowiecki put it. No-one tries to repeat other people’s trials, hardly anyone challenges other people’s assertions, most academics play a game of “academic Chinese whispers”. If someone else said it, particularly if that someone has a reputation, it must be true.
The result, as Tom Bennett has argued very effectively in his recent book, Teacher Proof, has been a long list of bogus educational theories, none of which are backed up by any significant evidence, many of which have been actively disproved, but most of which have been lapped up by what too often appears to be a gullible profession.
It is against that background that I want to talk about Professor Stephen Heppell, who has been an influential figure in the world of ICT. You could almost say that he has been its architect. He claims for himself the epithet of “the man who put the C into ICT”, having been an author of the influential 1997 Stevenson Report. He was called “Europe’s leading online education expert” by Microsoft; and “The most influential academic of recent years in the field of technology and education” by the DfE. He is giving one of the leading keynotes at the BETT Show next February.
Here is an extract from an interview he gave to BBC Radio 4’s Today programme in 2008, in response to the publication of the Rose Review (listen from 03:40 through 04:45).
The first thing to notice is that Professor Heppell doesn’t answer the question (the mark of a politician, you might say, not an academic). But because he ducks the question, we can be fairly sure that the schools that follow the Heppell way do not do better in the tests. So if in your school you are interested in doing better in the tests, then perhaps you should bear that in mind before you listen to what Professor Heppell says.
He is clearly wrong in his prediction—the tests are not ebbing away. But we shouldn’t put too much weight on that because it wasn’t really a prediction—it was a rhetorical trick, of a sort that Professor Heppell uses frequently. Instead of answering the question, instead of even arguing a case, instead of saying that he wants the tests to ebb away, he says that they are ebbing away. What he is really saying is that the question isn’t relevant any more, the matter’s been decided, the train is already leaving the station, so there’s no point in disagreeing, it’s game over. Except, of course, it isn’t.
But maybe, in spite of all the constraints of accountability and the pressure of the league tables, you still have some sympathy with what Professor Heppell is arguing; perhaps you have a sense that the tests might indeed be impoverishing the quality of our education.
So before I address that concern, let me point out that both the assertions in the final paragraph are also untrue.
It is not true that we live in a world in which we have never met anything before. When I get ill, I go to the doctor, he tells me – if I take my medicine – how long it will take to get better, and he will tell it to me with a greater accuracy and reliability than would a non-expert, precisely because he has seen the same thing many times before. The ability to predict the future is the essential capability that comes with expertise. You want to know if a particular bridge will bear the weight of a particular lorry in particular wind conditions? Ask a civil engineer.
If you don’t believe in the possibility of predicting the future, then you don’t believe in the existence of expertise. And that is a strange position for a professor of education, given that nurturing expertise is what education is meant to be all about.
It is also not true that our tests have necessarily been based on a model of ‘have you met this before’ – but that is a point I will come to in a second. [Comment by Stephen Heppell, who disagrees that this slide represents his position fairly]
So if you were to list all the things that Professor Heppell stands for, you would find that quite a lot of them also appear on Tom Bennett’s list of barmy theories for which there is no evidential support.
…that they are a fairly typical expression of the more general theory of constructivism. This states that
- learners construct their own knowledge;
- through a process of active discovery;
- and that, as argued by Jean Piaget, this is done in response to an internal developmental route-map;
- and that knowledge, being the private construct of the individual, is not true in any objective sense.
I agree that point 2 has merit. But points 1, 3 and 4 all imply that self-expression is more important than objective truth—and that position shows a shaky grasp of both psychology and epistemology.
Nor does the promotion of self-directed learning necessarily increase the amount of interactivity experienced by the learner—points 2 and 3 often seem to work against each other—and not only because the self-directed learner might prefer to slouch on the sofa with a large packet of crisps, but because the self-directed learner does not know what they would best be doing, or how to do it.
In her recent book, Teaching as a Design Science, Professor Diana Laurillard of the Institute of Education comments on what tends to happen when children are released onto the internet in the expectation that they will become independent learners (see slide above).
And Professor Laurillard’s conclusion is confirmed by one of the early Ofsted reports on ICT in education (see slide above). “Internet research”, which sounds so advanced and grown up, so compatible with theories of independent learning, is too often in practice an easy homework that requires no preparation, no marking, little work and even less thought.
Don’t you find it surprising that a hall full of teachers cannot answer what must be one of the simplest and most important questions about their own professional practice? “Challenge” and “feedback” are warm, I suppose, but neither gives an accurate answer to the question. And I find “personality” and “relationships” very revealing—because that is how I think most teachers understand their job (see my recent Why teachers don’t know best for the full argument). As long as they can motivate, as long as they can inspire, as long as they can get the children to like them, the process of teaching isn’t really so important. Teachers see teaching very much as a personal craft, and not, as Diana Laurillard calls it in her book, a “design science” like engineering.
Another misconception is this business of assessment, which is commonly said either to be a distraction, or reductive. Don’t, whatever you do, teach to the test.
But as Professor Mazur explains, practice is an essential part of effective teaching. And once the learner is practicing, why would I, as the teacher, not want to track what is going on, so that I can offer criticism and support, so that I can suggest what the student should do next? And once my student is practicing something, and I am monitoring that practice, how am I not assessing them? Assessment is not, as most teachers think, a distraction from teaching. It is its very essence. Which of course does not excuse the damaging effect on education of badly conceived tests.
And Professor Mazur also explains why Professor Heppell is wrong to say that tests are inevitably built on a model of ‘have you seen this before’. Because the whole point of practice, the whole point of good assessment, is to ask the student to apply the abstract knowledge and skills that they have acquired to a new context. Only then do you know that they have understood and not just remembered.
This is why Seymour Papert, who wrote Mindstorms in 1980, replaced the term “constructivism” with “constructionism”. The difference lies in what you are constructing. In constructivism, you are constructing knowledge. In constructionism, you are constructing a robot, a computer program, a song, whatever. As you do that, you find that you are also constructing your “intellectual structures” (just as Professor Mazur explains the process). But there is a big difference between constructing intellectual structures (or “belief” for short) and constructing knowledge. Constructionism keeps the interactivity and the discovery, but rejects the child-centred relativism.
Papert observed that children learnt rapidly when they had the physical world to explore and bump up against – but this sort of environment was not available to them when they were tackling more abstract learning later on. So his proposal was to create “microworlds”: simulated, concrete, explorable environments that allowed older children to learn about abstract concepts in the same way that younger children learn about the physical world.
Though it is a compelling proposal, thirty years later we can say that it hasn’t really worked. One very important reason is that the opportunity to develop such “microworlds” has, in the words of Diana Laurillard, been “tragically under-exploited”. No one developed the technology—and technology matters.
Papert’s microworlds proposal is one specific example of a general approach to activity based learning—an approach in which the learner performs an action and some “other” provides a reaction. The “other” could be a physical object, an environment, a peer, a teacher, or a computer program (see my In the beginning was the conversation, for the full argument here).
…the Socratic dialectic. Socrates used to sit down in the market square with his pupils and engage them with questions.
It is a model that has passed the test of time. In the nineteenth century, President Garfield of America described his ideal learning environment as “a log hut, with only a simple bench, Mark Hopkins on one end and I on the other”.
And the same model is still used in the tutorial system at Oxford and Cambridge. It is the pedagogical gold standard, great in every way, except one: it doesn’t scale. Track a pupil through the school day and find out how much time they spend in meaningful conversation about their work with their teachers. However much we might try to encourage “dialogic” teaching, there are not enough Socrateses to go around. And there never will be.
This problem was spotted in 1970 by Kim Taylor, Director at the Nuffield Institute. The problem that Taylor saw with comprehensive education had nothing to do with selection. The problem was one of scalability.
The problem with the undersupply of properly qualified teachers is of course a particular concern in shortage subjects—Computing is the case in point at the moment. But, given time, you can probably find someone to do the job, if you are prepared to accept a high level of variability in the quality of your teachers.
Professor Dylan Wiliam, a researcher who has developed a reputation for looking closely at the data, suggests that the research shows a factor-of-four variability in learning outcomes, depending on which teacher you get. I am not sure that we would find it acceptable if we found such inconsistent outcomes in the death rates associated with different surgeons in the NHS.
The Teacher Toolkit is produced by Durham University and sponsored by the Education Endowment Foundation. The strategies which are rated as most expensive and least effective include smaller classes and teaching assistants. In both those cases, that means hiring more staff is not particularly effective. What is most effective and least expensive are things like feedback, meta-cognition (which is understanding the learning process) and peer tutoring—all of these represent the sort of interactive, practice-based pedagogies that I have been discussing.
If hiring staff seems to be so ineffective and in many subjects so problematic, isn’t it fairly obvious that we should be trying to do more of the stuff in this top left hand corner, and that in view of the expense, the ineffectiveness and the difficulty of hiring staff, we should be trying to do it with fewer teachers?
Of course, I am not saying that we should replace the teacher with the computer—all the research suggests that that would not be effective. I suspect that one reason is that role modelling is so important in learning.
What I am proposing is rebalancing away from staffing and towards technology, in an environment in which that technology is blended with traditional teaching and where it compensates the teacher for larger set sizes by making the teacher’s job significantly less stressful. This was precisely the argument that was made by the Nuffield Foundation in the 1970s, and I do not see that since then the evidence for such a position has done anything but grow even stronger.
If you want to know what using technology to scale education might look like, listen again to Professor Mazur, explaining his teaching method, which he calls peer instruction (listen through to 29:40). I haven’t got time to show you the full demonstration – but it is an inspirational video and I suggest that you watch the whole thing.
But however good this method is, it is one thing to use it in an experimental environment, or with a lecture hall full of Harvard undergraduates, it is quite another to roll it out across the education system. How would you do that?
I haven’t got time to go through this use-case in detail (see the Conclusion to “Why teachers don’t know best” for some more thoughts), but given that not every teacher is going to be a Professor Mazur, I suggest that you will need some specialist software: software that comes with banks of appropriate questions, linked to analytics systems that can help teachers ask appropriate questions and can help to integrate this pedagogy with other classroom activities. Specialist software that works out of the box and does not require three days’ INSET and the whole of the Easter holidays in preparation before you can use it effectively. And just as no-one has created any of Seymour Papert’s microworlds, no-one has created any software to support Professor Mazur’s peer instruction pedagogy or the associated question banks. No-one has created the technology, and the technology matters.
Nor am I surprised by their conclusion that: “It is the pedagogy of the application of technology in the classroom which is important: the how rather than the what.”
I am not surprised partly because this is another aspect of the current ICT orthodoxy: “never mind the technology, where’s the learning”. You will hear it said again and again. To the ICT community, this is motherhood and apple pie stuff. But in my view, prioritising the how over the what, the practice over the technology—counts as another of our fundamental mistakes.
…are a bit like this dog. They look out of the back of the car and observe where we have been. They don’t look out of the front of the car and control where we are going next. They might have their views about that, but their views are really no more valuable than anyone else’s – because driving the jeep is not their expertise.
I should explain that this cartoon used to say “truth” and “reconciliation”, which is why you’ve got Desmond Tutu up there on the edge of the cliff. But now it says “what” and “how” instead.
I think the final point made by David Weston is very perceptive—and it is connected to the answers given in response to Eric Mazur: teachers do not see what they do as a way of managing a process as efficiently as possible—they tend to see it as an expression of their personality and their values. And that is one reason why training is so ineffective and our attempts to improve classroom practice so unsuccessful.
That problem is compounded by the shortage of teachers, which I have already mentioned, and the consequent variability in the quality of teachers.
And finally, for all the stuff about how effective Assessment for Learning was in the trials, I think that doing it well is going to be difficult. You probably have me down as a bit of a teacher basher by now – but I’m not, really. Because I think that what we are asking teachers to do is next to impossible. If you have five classes in a day, with 30 students in each class, how can you realistically provide customised feedback and adaptive progression paths to every one of those 150 students, based on a detailed understanding of the particular cognitive difficulties that each of them faces? It’s going to be hard enough to remember their names.
So for all these reasons, there is very little that we can do to change the how. Training is ineffective. We have a limited supply of good teachers. And we are not really sure whether the practicality of providing personalised feedback and progression is sustainable anyway.
In passing, note Professor Laurillard’s definition of “formal education”, which explains why education is subverted by theories of child-centred learning.
Second, note how she uses the word “technology”. Technology is not just a little box that sits in the cupboard, it’s not just the what. It is a process, it is education itself – The technology is the how.
Does Tesco buy the what – a new logistics system, say – at a cost of millions of pounds, install it, and then ask itself, “I wonder how we are going to use this kit?” Of course not. How the technology is going to be used is one of the first questions asked before the technology is even created. That is what a requirements analysis is all about.
Our trouble is that no one has ever done a real requirements analysis. Because no-one has ever built, at scale, our own, education-specific technology. The technology we do have does not match our requirement, and that…
…is why we have this sort of collective split personality, why we live in a world where the what doesn’t do the how.

“The present gap between research and daily application is such that teachers generally turn for help to those in the same boat with them, all awash in a vast sea. Professors who grandly philosophize the aims of education flash over in aircraft; those who evaluate its practices nose beneath in submarines…but our main anxiety is to stay afloat and make some sort of progress, guided, if must be, by ancient stars. We wonder how others in similar straits are getting on. ‘Have you tried to row facing forwards?’ we shout above the racket of the elements; ‘No’, comes the answer, ‘but paddling with your feet over the stern helps’…and we suspect that the academics, secure from the daily fret of wind and wave, have forgotten what it is like to feel a little seasick all the time.”
What the guys in that boat need from the airmen and the submariners is not another leaflet drop. It’s an outboard motor and a compass, and a cuddy where they can sleep. We need to provide teachers with the tools of the trade that at present they lack…
…the tools of the trade – the education-specific infrastructure and the education-specific content.
“Don’t we have these already?”, you might ask. No we don’t.
Let’s start with content. At EduPub, a conference held in Boston just last week, Pearson said that they now see content as code, not data. That is a geeky way of saying that content is activity, not information.
The learning activity, with its iterative cycle of action – reaction – reflection – correction, that is the stuff of instruction, not information.
And this is another aspect of the current orthodoxy that is broken. Because we always talk about “learning content” as if it is information – Powerpoints, Word documents, YouTube videos. That’s not learning content – that’s information – and education is not a trip to the public library.
Because we have not developed our own appropriate, education-specific technologies, we have not realised the possibility of creating new forms of digital learning activity platform – the software – the “code” that Pearson was talking about last week in Boston – that will be the learning content of the future.
The purpose of the learning platform in what I am calling here the “digital dialectic” is to manage and sequence learning activities.
At the end of every activity, the first thing that needs to be done is to assess the learning outcomes and update your profile of the student’s capabilities. From that point, you can offer useful criticism and decide what, in the current context, the student should do next.
It is in managing progression that you provide personalised, adaptive instruction. Once the next activity has been selected, then you may need to consider grouping, because unless the activity is solitary, a lot of the character of that activity will depend on who you do it with; and finally you may need to feed some information into that activity – just the right information to be assimilated in the course of the activity, like drizzling oil into a mayonnaise.
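The progression loop described above – assess the outcomes, update the profile, then select what the student should do next – can be sketched in a few lines. This is a hypothetical illustration only: the class names, the moving-average update, and the weakest-topic selection rule are all invented to show the shape of the loop, not taken from any real platform.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the progression loop: assess -> update -> select.
# All names and rules here are invented, not any real platform's API.

@dataclass
class LearnerProfile:
    capabilities: dict = field(default_factory=dict)  # topic -> mastery in [0, 1]

    def update(self, outcomes: dict) -> None:
        # fold each new assessed outcome into the running picture of the learner
        for topic, score in outcomes.items():
            previous = self.capabilities.get(topic, 0.0)
            self.capabilities[topic] = 0.7 * previous + 0.3 * score

def next_activity(profile: LearnerProfile, activities: list) -> dict:
    # personalised progression: target the learner's weakest topic next
    return min(activities, key=lambda a: profile.capabilities.get(a["topic"], 0.0))

# one turn of the loop
profile = LearnerProfile()
profile.update({"fractions": 0.9, "graphs": 0.2})
chosen = next_activity(profile, [{"id": "a1", "topic": "fractions"},
                                 {"id": "a2", "topic": "graphs"}])
print(chosen["id"])  # a2 - graphs is the weaker topic
```

A real system would of course also handle grouping and the drip-feed of supporting information; the point of the sketch is only that progression management is a tractable, mechanisable process once the outcome data is flowing.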
This model of the digital dialectic requires a single platform, a single infrastructure – managing many different learning activities: creative, exploratory, social, simulated – or activities in the physical environment.
Data interoperability is absolutely fundamental to this model. It is essential that data can flow freely between the different components.
As each activity is launched, it pulls data from the platform, giving it its instructional context – who are the learners? what are their roles? what are their objectives? what are the questions? what is the supporting information? what are the parameters – for this particular instance?
And as it finishes, the activity sends data back to the platform, representing the learning outcomes, the scores, the transcript of the interactions, the creative product, the reflections, the bookmarks and annotations.
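To make the two directions of that data flow concrete, here is one hedged sketch of what the launch and return payloads might look like. The field names are invented for illustration and follow no published interoperability standard; the only point being made is that both directions must round-trip through a neutral wire format that any platform can consume.

```python
import json

# Hypothetical payloads: field names invented for illustration only,
# not drawn from any published interoperability standard.

# data pulled by the activity at launch - its instructional context
launch_context = {
    "learners": [{"id": "s-042", "role": "student"}],
    "objectives": ["interpret a distance-time graph"],
    "questions": ["q-17", "q-18"],
    "supporting_information": ["notes-on-reading-graphs"],
    "parameters": {"time_limit_minutes": 20},
}

# data sent back to the platform as the activity finishes
activity_result = {
    "outcomes": {"graph-interpretation": 0.8},
    "scores": {"q-17": 1, "q-18": 0},
    "transcript": [],
    "creative_product": None,
    "annotations": [],
}

wire_out = json.dumps(launch_context)                 # platform -> activity
wire_in = json.loads(json.dumps(activity_result))     # activity -> platform
```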
One of the key reasons that there isn’t a market for this sort of stuff is that there isn’t any data interoperability. Interoperability is fundamental to technology markets in general. What sort of market would we have for electrical appliances without the three-pin plug? What sort of market for recorded music without the long-playing record, the CD, or the MP3 file? What sort of railway without standard gauge track? What sort of e-commerce without TCP/IP and HTML? Interoperability standards are the sine qua non of the market, and for education technology we do not have those standards.
I have a long history in this field. In the mid 1990s, I joined a BESA project called the Open Integrated Learning System – OILS. But that folded in 2000 when SCORM came out of the defence community in the US.
When Charles Clarke launched the Curriculum Online idea in 2002, I wrote a letter to him, pointing out that he needed to do something about interoperability if he was going to get any content that did anything useful. I was invited onto Curriculum Online’s Technical Standards Working Group, and then they set up a Learning Platform Stakeholders Group, which met from 2003 until the autumn of 2005. But just as the LPSG was about to publish the Learning Platform Conformance Regime, the man from Becta turned up and said, “we’re closing this effort down because we’re going to do something much better”.
I became concerned, during 2006, that they were not going to enforce any of the interoperability standards that are fundamental to what a learning platform, what a learning infrastructure, is. So I asked for, and was given, a meeting with Stephen Crowne, the incoming Chief Executive of Becta, in October 2006, and Mr Crowne assured me that all the requirements for interoperability were being enforced. Just to prove the point, he gave me this book of test scripts, 300 pages full of tests to ensure that candidate solutions conformed to a specification called QTI.
Now QTI is nothing very exciting. It is just a format for passing multiple choice tests around. What is more—the systems being procured did not have to mark the tests or provide any feedback—they just had to display the questions and let the users play with the buttons. But hey, after 10 years, at least it was a start!
It was only later that I was passed documents that Becta had sent to all the tenderers some time before my meeting with Stephen Crowne. Almost two months earlier, everyone had been told that they need not conform to QTI version 2.0. That was half the book gone.
And one month before that, everyone had been told—again in a confidential document that was not passed to anyone except the companies that were competing—that if they found any tests that they couldn’t pass, they were allowed to re-write the tests.
So when I was given this book by way of assurance that Becta was enforcing the requirement for interoperability, it was a lie—a lie to save Becta’s face because Becta had rushed into something that they didn’t understand.
In this Becta definition of what a learning platform is, the key word is “integrated”. If you buy a set of integrated tools, they may communicate with each other but they won’t communicate with anybody else. What you are buying is a closed system. And when Becta gave privileged access to the market to a few chosen companies, to supply that market with closed systems, they were not stimulating innovation – they were blocking innovation. And they were not building a market – they were destroying the market.
I made a fuss by lodging a complaint with the European Commission on the grounds that what had been advertised as mandatory criteria were in fact not mandatory criteria. And although the European Commission did not in the end investigate my complaint, on the grounds that they didn’t have the technical resources to do so, I think my campaign did help raise some questions about Becta’s competence.
The Chief Executive’s report in the autumn of 2007 showed that Becta’s own approval rating, instead of rising to 70%, which was their target, fell from just under 50% to under 20%.
And what was Becta’s reaction to this catastrophic collapse of confidence? You might have expected them to say “whoa, did we do something wrong?”. But they didn’t. They did two things, both of which are documented on this page of the Chief Executive’s report. First, they stopped monitoring their own approval rating. And second, they launched a PR campaign.
And not only did they not acknowledge the car crash – they didn’t learn from it either. In 2010, they produced this justification for extending the procurement framework to include MIS systems, on the assumption that this would liberalise the market for MIS.
The analysis is deeply flawed. For example, it dismisses the idea that schools should be encouraged to buy their own MIS systems on the grounds that such a decision would take secondary schools an average of 19 man-days. A straw poll conducted for me by the Specialist Schools and Academies Trust showed that in practice it took a maximum of 2 days (see my Stop the IMLS Framework for more analysis of this report).
And that is just the beginning. But it was enough to make the case for the IMLS Framework, which went ahead under the auspices of the DfE’s procurement division, and even though hardly anyone is buying off it, it still defines the DfE’s formal position.
First, let me summarise my position so far.
If you accept those premises, then it follows that your role as procurers of technology is very important. You can’t blame the industry if they find themselves required to pitch for some godforsaken framework. They have to make their living, and from the point of view of a company that wants to do well, the customer is always right.
And the customer, nowadays, is you. That means that collectively, you are in control. You don’t have to innovate. You don’t have to write the technical specifications. But you do have to ask for the right things. And pay for them.
Tesco doesn’t get its logistics system as a free download from HomeMadeSoftware.com—and unless you think that transforming the quality of education is very much less complicated than selling cabbages, neither should you.
You should ask for software that supports activity-driven pedagogy in the classroom.
You should ask for disaggregated software that supports external launch. That means transparent single sign-on, in a manner that allows the teacher to click a button and have the whole class, immediately, doing exactly the activity that they have assigned. No manual logon, no menu to navigate.
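As a concrete illustration of what “transparent single sign-on” can look like: the IMS LTI 1.1 standard (current at the time of this talk) has the school’s platform POST a set of OAuth-signed launch parameters to the third-party tool, which verifies the signature and drops the pupil straight into the assigned activity. Here is a minimal Python sketch of the signing step—the URLs, keys, and parameter values are purely illustrative:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse
import uuid

def sign_lti_launch(launch_url, params, consumer_key, shared_secret):
    """Return LTI 1.1-style launch parameters signed with OAuth 1.0 HMAC-SHA1.

    The school's platform POSTs these as a form to the tool's launch_url;
    the tool verifies the signature and logs the pupil straight into the
    assigned activity -- no manual logon, no menu to navigate.
    """
    oauth = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    all_params = {**params, **oauth}
    # OAuth signature base string: method, URL, and the sorted,
    # percent-encoded parameter pairs, each encoded once more when joined.
    encoded = sorted(
        (urllib.parse.quote(k, safe=""), urllib.parse.quote(v, safe=""))
        for k, v in all_params.items()
    )
    param_str = "&".join(f"{k}={v}" for k, v in encoded)
    base_string = "&".join(
        urllib.parse.quote(s, safe="") for s in ("POST", launch_url, param_str)
    )
    key = urllib.parse.quote(shared_secret, safe="") + "&"  # no token secret in LTI 1.1
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    all_params["oauth_signature"] = base64.b64encode(digest).decode()
    return all_params
```

The point of the sketch is not the cryptography but the workflow: because the launch carries the pupil’s identity and the `resource_link_id` of the assigned activity, one click by the teacher can launch the whole class into the right place in any compliant tool.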
You should ask for software that returns performance data to common mark books. Until you have all your data in one place, no-one is going to be able to do any halfway decent learning analytics or any sensible progression management.
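One existing candidate for the plumbing behind a common mark book is the Experience API (xAPI, also known as Tin Can), published in 2013, under which every piece of software a pupil uses reports results as standard “statements” to a single Learning Record Store. A minimal sketch of building such a statement—the verb URI follows the xAPI vocabulary, but the identifiers here are illustrative:

```python
import uuid
from datetime import datetime, timezone

def make_result_statement(pupil_email, activity_id, activity_name, score, max_score):
    """Build a minimal xAPI-style statement recording a pupil's score.

    If every application POSTs statements like this to one Learning Record
    Store, all performance data lands in one place -- the precondition for
    any halfway decent learning analytics or progression management.
    """
    return {
        "id": str(uuid.uuid4()),
        "actor": {"objectType": "Agent", "mbox": f"mailto:{pupil_email}"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/completed",
            "display": {"en-GB": "completed"},
        },
        "object": {
            "id": activity_id,  # illustrative activity IRI
            "definition": {"name": {"en-GB": activity_name}},
        },
        "result": {
            "score": {"raw": score, "max": max_score, "scaled": score / max_score},
            "success": score / max_score >= 0.5,
        },
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```

Whether the format is xAPI or something else, the requirement is the same: results flow out of each application automatically, in a shared structure, without anyone re-keying marks.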
You should ask for software that automatically sends and receives student artefacts – by that I mean things they have created, be it an essay, an equation, a drawing, a solution. No one is going to make e-portfolios work until we have interoperable creative tools.
You should ask for classroom response systems that automatically interface to any third-party software. How can anyone develop software to help you implement Professor Mazur’s peer instruction techniques, when all the clickers that you bought from company x only interface to software provided by company x?
So it follows that the things that you should avoid are one-stop-shops and proprietary bundles. Just because you want product A from a particular company, doesn’t mean that you should allow yourself to be forced to buy products B, C, D and E from the same company. Just say no.
Avoid anyone trying to tell you their product is interoperable when what they really mean is that you can transfer data manually, with CSV files or spreadsheets or manual uploads. Good technology is all about automation and consistency; it is about saving work, not creating it. When it comes to getting even the best software used, the atmosphere in schools is about as hospitable as outer space. If there is any reason at all for something not to happen, it won’t.
The same argument applies to the need for training. Any requirement for significant levels of training is a symptom of bad design. I don’t need to be trained before I can use my iPhone. Training can only be justified for intermediate users who have already bought into the concept, who want to optimise their use of a product.
Avoid OJEU frameworks – they don’t improve the quality, they reduce transparency, and they restrict the market.
Don’t stick with your incumbent provider without looking around to see if there is anything better out there. And don’t take advice uncritically from people giving talks (or indeed, from people writing blogs). Most of it is wrong.
Finally, if you ever get a chance, what should you tell the DfE that it could do that would help?
Sebastian James was asked by Michael Gove to write a Review of Education Capital, essentially to justify the closure of the BSF programme. When it came to technology, James recommended that the government should set up an “online price comparison catalogue”, which would allow what he called virtual aggregation. By achieving price transparency, you could be sure you were getting a good deal, but you could still buy exactly what you wanted.
The Education Group of Intellect, the IT trade association, is advocating a very similar concept – what we are calling a “G-Cloud for Education”. G-Cloud, if you haven’t come across it, is the government’s online catalogue of cloud services. So this would be a sort of online product catalogue, a little like Curriculum Online, but better. I think it would be hugely beneficial, providing at least three key benefits:
- it could ensure price transparency, which is the benefit that James was focusing on;
- it could provide a platform for user reviews and professional reviews, helping us to identify “what works” and what doesn’t;
- it could provide a platform for the certification of products against industry interoperability standards, so that as a buyer you could know that if you bought product x and product y, they would work together according to choreography z. And not only would those certifications provide transparency to you, the user, but they would also provide the incentives to industry to produce the effective interoperability standards that are required if any of this is going to work.
That is what the Department for Education should almost certainly do. Judging by previous form, they almost certainly won’t. But even if we wake up tomorrow, next week, next year and still find that there is no G-Cloud, or any other kind of useful market infrastructure, I hope you will still take away from my talk two things. First, the fact that, contrary to everything you will be told by almost everyone who claims to know anything about ICT, the technology really matters. And second, that the only way we are going to get the right sort of technology, the education-specific technology that is going to make the difference, is if you, the buyers, ask for it.