In the autumn of 2013, I welcomed the Coalition government’s revival of interest in ed-tech after four years of neglect (see “Land ho!” of December 2013). But the process of bringing the ship into port does not always run smoothly. If last January’s report from the Education Technology Action Group (ETAG) is of any value, it is only because it shows so clearly how muddled is the thinking of the ed-tech community in the UK.
I do not welcome the opportunity to produce another negative critique (indeed, I have hesitated for four months before doing so). I would much rather move on to make a positive contribution to a coherent discussion about effective ed-tech policy. But so long as a group such as ETAG, established with some fanfare by Ministers, produces such a poorly reasoned argument, there seems to be little option but to offer a rebuttal.
What I want to emphasise at the end of this piece, however, is that the failure of ETAG provides an important opportunity to put aside the muddled vision of technology in education that has dominated our discourse for the last 20 years. Following last week’s election, we have in 2015 the best opportunity since 1997 to make a fresh start and introduce a truly effective model of education technology into our schools.
The criticism of ETAG
I make my criticism of ETAG under three headings:
- that ETAG failed to answer the question it was asked;
- that ETAG failed to address the evidence;
- that ETAG was deficient in terms of argument and process.
That ETAG failed to answer the question it was asked
The ETAG remit is stated on Professor Stephen Heppell’s blog as being:
to best support the agile evolution of the FE, HE and schools sectors in anticipation of disruptive technology for the benefit of learners, employers & the UK economy…[and] to identify any barriers to the growth of innovative learning technology that have been put in place (inadvertently or otherwise) by the Governments, as well as thinking about ways that these barriers can be broken down.
In a letter in February, Minister Nick Boles simplified this remit to the following: ETAG’s job has been to advise Ministers on “how the education system might act to get the best out of technology”.
In her speech to BETT on 21 January, Nicky Morgan went further, suggesting some of the ways in which she hoped the question would be answered.
I look forward to studying the group’s report, which will be published today. But as I do so, I will be looking for ideas in a number of areas where I think technology can transform the educational landscape.
The first is accountability…
The second area I would like to look at is assessment and reporting…
Finally, I believe technology can play a critical role in helping to deliver one of my major priorities: reducing teacher workload.
Improving accountability, improving assessment and reporting and reducing teacher workload: these are just some of the obvious ways in which education might be able to “get the best out of technology”. But if you read the ETAG report to find a vision of how these benefits might be realised, you will be disappointed. While technology routinely saves work in almost every business sector to which it is applied, all that ETAG says about the effect of its recommendations is that workload is likely to increase, at least in the short term (page 29).
Teachers who move to online teaching will be aware of a significant, but only initial, increase in their workload, if they are setting out to make optimal use of the technology.
You would expect that the issue of how technology can help to improve assessment and accountability would be addressed in the section entitled “Assessment and accountability”. Again, you would be disappointed. The section starts by saying (page 17):
Government policy, in the form of the National Curriculum and the strategic direction set for assessment and accountability, is a critical lever for influencing educational practice
and it continues to explain how assessment and accountability should be harnessed to enforce technology-based teaching on schools. But it says nothing about how technology can improve assessment and accountability.
Nor does the report propose any other ways in which education can “get the best out of technology”. Instead, it chose to address a completely different question—a question that it was never asked. The critical passage occurs on page 7:
Fundamentally, we concluded that the use of digital technology in education is not optional. Competence with digital technology to find information, to create, to critique and share knowledge, is an essential contemporary skill set.
This key passage expresses the report’s whole argument on why technology should be more widely used in education. It has nothing at all to do with education “getting the best out of technology”, nothing to do with the use of technology as a pedagogical and management tool to improve the quality of teaching and learning. It expresses its argument solely in terms of curriculum objectives: the technology-dependent skillset that it thinks every modern school child should learn. The question was about how we should teach: ETAG has given an answer which is about what we teach.
I have frequently questioned the value of what are generally called “twenty-first century skills”, but there is no point in renewing such a discussion here. All that matters is that ETAG’s fundamental conclusion, right or wrong, is simply irrelevant. A report which is supposedly about “education technology” does not use that term once, beyond the title page. A report which was meant to make recommendations on how technology could improve education instead focused exclusively on how education should spend more time teaching about technology. They addressed the wrong subject and answered the wrong question.
That ETAG failed to address the evidence
The evidence of the impact of technology
If the ETAG group had addressed the question it was asked, it would quickly have come up against the difficulty that, although very large amounts of money have been spent on promoting technology in education, there is precious little evidence that it has had any impact on the quality of teaching and learning. The meta-analysis conducted by Professor Higgins for the Education Endowment Foundation summarises the evidence so (on page 3):
Taken together, the correlational and experimental evidence does not offer a convincing case for the general impact of digital technology on learning outcomes.
The recent NESTA report, Decoding Learning, draws a similar conclusion (on page 8):
Evidence of digital technologies producing real transformation in learning and teaching remains elusive.
Professor Angela MacFarlane (a member of ETAG), in her most recent book, Authentic Learning for the Digital Generation, draws the same conclusion again (in Chapter 1, What to do with the technology once you have it?):
There is a long history of providing digital technologies to schools and looking for impact through improvements in standardised test scores. Evidence of success measured in this way is scarce and, where it does exist it is restricted to small-scale projects with high levels of technology provision and/or staffing that are not sustainable in the long term and certainly not scalable to whole-school systems.
Any credible answer to the question about how technology can improve teaching and learning cannot avoid starting by asking why previous efforts to use education technology have been so unsuccessful.
The ETAG report does not make any reference to this conspicuous lack of evidence. Instead, it dismisses the importance of evidence itself, stating on page 11 that:
Evidence is a problematic concept when thinking about digital technology in education because:
- it is very difficult to show a clear causal relationship between a single variable (such as the introduction of phones) and learning outcomes…
- digital technology has changed the nature of disciplines outside school and thus should also impact on the curriculum inside school…[but] the existing metrics in education…have not been changed enough to reflect this…
- the effectiveness of digital technology to enhance learning is dependent upon how it is used, and very subtle differences in the way it is implemented can have large impacts, thus it is difficult to make valid generalisations between different implementations/contexts.
As already discussed, the second of these points is irrelevant. It addresses the aims of education and not the means. It may or may not be true that the nature of academic disciplines has fundamentally changed (personally, I think that it is not true) but it should be possible to use a pedagogical tool to improve the teaching of Ancient Greek, exactly as the ancient Greeks understood it, just as much as to improve the teaching of the most cutting-edge version of Computer Science. Whether you would want to teach Ancient Greek in preference to Computer Science is a different question.
The first and the third bullet points make exactly the same argument. Both claim that it is “very difficult” to show a causal relationship between a single variable (such as the introduction of phones) and subsequent learning outcomes because there are so many other variables that are likely to influence those outcomes.
The report does not enter into any further discussion about why it is not possible to isolate these other, confounding variables. By suggesting that “evidence is a problematic concept”, it gives a heavy hint that the problem is one of fundamental principle and is therefore insoluble.
Some people might assume from this language that quantitative evidence is intrinsically unsuited to the investigation of causal relationships because, as it is often said, “Correlation does not imply causation”. As I have explained on this blog before, this popular aphorism is simply untrue: correlation does imply causation—it is just that a single correlation does not provide sufficient evidence to show what sort of causation is involved. By triangulating multiple correlations and by observing the order of events, precise evidence of causation can be obtained, solely by the observation of correlations. In fact, as David Hume pointed out, correlation is the only evidence we ever have for the existence of causal relationships (see my longer discussion of this issue).
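The role of temporal order can be illustrated with a small simulation of my own (a toy sketch, with made-up numbers, not drawn from any of the reports discussed here). If x drives y with a one-step delay, the lagged correlations are asymmetric, and that asymmetry is precisely the sort of observational evidence that points to the direction of causation:

```python
import random
import statistics

random.seed(0)

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / (statistics.stdev(a) * statistics.stdev(b) * (len(a) - 1))

# Simulate a world in which x drives y with a one-step delay:
# y_t depends on x_{t-1}, never the other way round.
n = 5_000
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.0] + [0.8 * x[t - 1] + random.gauss(0, 1) for t in range(1, n)]

# Lagged correlations reveal the direction: x leads y strongly,
# while y barely "leads" x at all.
x_leads_y = corr(x[:-1], y[1:])   # x_t against y_{t+1}
y_leads_x = corr(y[:-1], x[1:])   # y_t against x_{t+1}
print(f"x leads y: {x_leads_y:.2f}; y leads x: {y_leads_x:.2f}")
```

Nothing here is exotic: it is simply two correlations compared against the order of events, which is all the triangulation described above requires.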
A more substantial problem, and the problem that is explicitly referenced in the ETAG discussion, is the difficulty of managing “confounding” variables: the quality of the teacher and her relationship with the students, the socio-economic background of the students, the culture of the school, the relevance of prior learning and attitudes etc. Managing these confounding variables is exactly what Randomised Controlled Trials (RCTs) are designed to do. The introduction of RCTs to education has been the subject of a high-profile campaign by Ben Goldacre and was the explicit aim of a £110 million DfE grant to the Education Endowment Foundation in 2010. But nowhere is the subject of RCTs or their use in education discussed in the ETAG report.
It is not feasible to remove confounding variables altogether: one cannot pursue educational research in the equivalent of a sterilised vacuum, without the presence of a teacher or with students who are somehow wiped clean of their own assumptions, attitudes and prior experience before entering the classroom. RCTs do not seek to eliminate these confounding variables. Instead they seek to eliminate any distortion that they might cause by randomising them. One group works with the intervention that is being evaluated, one group works without the intervention; but in all other respects, both groups contain a randomised and broadly equal representation of other influences. This technique works perfectly well in medicine and there is in principle no reason why the method should not be successfully applied in education.
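A toy simulation (entirely my own, with invented numbers) shows the mechanism at work. Random assignment does not remove a confounding variable such as prior attainment; it balances it, so that the two arms of the trial start from statistically equivalent positions:

```python
import random
import statistics

random.seed(42)

# Hypothetical illustration: each student carries a "prior attainment"
# score, a confounding variable that cannot be eliminated.
students = [random.gauss(100, 15) for _ in range(10_000)]

# Random assignment: shuffle, then split into treatment and control.
random.shuffle(students)
treatment, control = students[:5_000], students[5_000:]

# With large randomised groups, the confounder's mean is nearly identical
# in both arms, so any difference in outcomes can be attributed to the
# intervention rather than to prior attainment.
gap = abs(statistics.mean(treatment) - statistics.mean(control))
print(f"Difference in mean prior attainment between arms: {gap:.2f}")
```

The confounder has not gone away; it has simply been distributed evenly, which is all a trial needs.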
I say that there is no reason “in principle” because it is true that in practice there are difficulties.
The complexity of educational objectives
Problem 1. The metrics that are used to measure success in education are less clear and more contested than are the metrics used in medicine. Health may be reliably indicated by temperature and blood pressure, while “being good at Maths” is rather harder to define, let alone quantify. This is an issue that is referenced by the ETAG report which says on page 11, reasonably enough:
At the heart of this is a debate about learning outcomes—which are themselves complex and multi-variate.
However, this problem is not nearly as significant as is often supposed, for two reasons.
The most significant reason is that some outcomes in education are remarkably straightforward. The ability of a student to perform basic addition or to recall the meaning of a defined list of French words is easy to assess and quantify. If you are addressing the use of technology as a pedagogical tool (and not as a curriculum objective), then the effectiveness of that tool can be demonstrated against easily assessed learning objectives just as well as against hard-to-assess objectives. You do not have to wait until we can assess every soft skill in every different applied context. And if it is not possible to deploy technology successfully in pursuit of straightforward, mechanical learning objectives, how likely is it that we will be able to deploy it successfully in the pursuit of more complex skills like creativity and teamwork?
Second, I would argue that an important benefit of education technology will be to use data analytics to clarify the way complex learning objectives are defined and how the achievement of those objectives is expressed. I will not elaborate further here as it is a detailed discussion and the first point is sufficient to make my case. But the fact that the effectiveness of ed-tech can be demonstrated in the pursuit of easily achieved and easily assessed learning objectives should not in any way be taken to imply that ed-tech will narrow the curriculum to its more mechanical aspects. On the contrary, it is only technology-enabled data analytics that will be able to give proper weight to the softer skills that teachers, parents and prospective employers rightly value and that traditional exams find so hard to measure.
The poor availability of randomized educational data
Problem 2. It is difficult to randomize school children, who are normally grouped into classes and schools, which are subject to similar influences. What might appear on the surface to be a statistically significant sample of 1,000 students becomes much less impressive when you realise that all those students are drawn from a mere dozen different schools. This means that in order to achieve statistically useful random samples, very large samples must be used.
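The standard design-effect calculation from cluster sampling makes this concrete. (The intraclass correlation of 0.2 below is an illustrative assumption of mine, not a figure taken from any of the studies cited here; school-level clustering in attainment data is often of roughly this order.)

```python
# Design effect for cluster sampling: students within the same school
# resemble one another, so 1,000 students drawn from 12 schools carry
# far less information than 1,000 independently sampled students.
def effective_sample_size(n_students: int, n_clusters: int, icc: float) -> float:
    """Return n / deff, where deff = 1 + (m - 1) * icc
    and m is the average cluster (school) size."""
    m = n_students / n_clusters
    deff = 1 + (m - 1) * icc
    return n_students / deff

# 1,000 students from 12 schools, with an assumed ICC of 0.2:
n_eff = effective_sample_size(1_000, 12, icc=0.2)
print(f"Effective sample size: {n_eff:.0f}")
```

On these assumptions, the “sample of 1,000” behaves like a sample of only a few dozen independent students, which is why credible trials must recruit across a very large number of schools.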
Problem 3. Schools are data-poor. This may not be immediately apparent to teachers, who often complain that they are drowning in data—but you can drown in half an inch of water if you are flat on your face in the gutter. Teachers are generally overwhelmed by data because of the inadequacy of their data-handling systems and the doubtful reliability of much educational data, not because of the amount of the data itself. Data is commonly circulated in manually constructed spreadsheets. Real-time data, based on the levelling system, is now recognised to have been so unreliable that the whole system has since been abolished. Summative exam data is not much more reliable than teacher assessments and normally arrives too late to be of much use in the teaching process. The only data that is reasonably reliable, timely and consistently collected measures whether children turn up in the morning. As Donald Clarke has commented, if you are measuring bums on seats, you are measuring the wrong end of the learner. It is because schools are so data-poor that researchers have to devise their own customised methods of collecting data for the purposes of research. This is not only expensive, but also exposes the research to distorting observation bias.
Problem 4. As soon as students realise that their work is the subject of research, they are likely to try harder (this is the Hawthorne Effect). As soon as teachers and researchers are aware that an intervention is being made that is expected to lead to improved learning, they will tend to see the improvement that they expect (this is the Pygmalion Effect). Like medicine’s Placebo Effect, all these distorting influences can be eliminated by properly structured RCTs, which are commonly used in medical research but hardly ever in education.
These problems—the lack of data, the difficulty of randomization, and the distortion of data by observation bias—can all be solved by harvesting learning outcome data by digital means, routinely and automatically as part of the everyday teaching and learning process. This is what ed-tech will allow, when the right sort of ed-tech has been built. Given routine data harvesting and automatic analytics, neither students nor teachers will have to be made aware that they are the subjects of research, while at the same time allowing the collection of large quantities of data from properly randomized samples at low cost. The problems with educational research, such as they are, will be solved by the application of data analytics.
This represents another important answer to the government’s question about “how the education system might act to get the best out of technology”, beyond the three already suggested by Nicky Morgan. Needless to say, it is not an answer that was given any consideration by ETAG.
To admit the difficulties of research, however, is not to say that no significant research has been carried out, or that the underwhelming results of the research that has been done can be easily explained away. Large amounts of money were thrown at the problem under Becta, notably on the ImpaCT reports in the early part of the noughties and the ICT Testbed research in 2006-8. The latter covered 30 schools, spending an average of £1 million per school on technology, and showed no significant impact on learning outcomes at KS3 or KS4, and a very small impact at KS2 that is most plausibly explained by a combination of the Hawthorne Effect and regression to the mean (the tendency of the poorly performing schools that were chosen for the trial to improve anyway). The fact that quantitative research in education has its challenges is not a reason to ignore the results of the significant amount of quantitative research that has been conducted in this area.
Problem 5. The fifth problem does not apply to educational research in general, but only to the particular circumstance and the general approach envisaged by the ETAG report. When arguing that “evidence is a problematic concept”, the report cites the difficulty of establishing the causal effect of “the introduction of phones” into the classroom. This is the problem of what I would term “causal distance”.
As a child, I enjoyed playing a game called “Mousetrap”, in which, as you moved your mouse tokens round the playing board, you could assemble an elaborate Heath Robinson apparatus which could then be used to capture your opponent’s mice when they moved into the danger zone. When the apparatus was complete, you could turn a handle, which released a spring, which nudged a ball, which fell onto a lever, which catapulted a model figure, which dislodged a cage, which fell onto the mouse, which, if everything went exactly to plan, was then trapped. Such was the “causal distance” between the first turning of the handle and the final falling of the cage that the process often went wrong and your opponent’s mouse was able to continue on its way unscathed.
There is a similar “causal distance” between the acquisition of technical hardware (such as mobile phones) and the effective use of those phones for learning. The assumption of the ETAG report is that this causal gap will be filled by the actions (of unpredictable and variable effectiveness) of the classroom teacher, who is responsible for devising ways actually to use the phones. This problem follows from the common but unhelpful identification of “technology” with “hardware”. Time and again one reads reports and blogs which use “ed-tech” interchangeably with “iPads”, “mobile phones” or “BYOD”.
In other sectors, which generally have a much better record than education in making productive use of technology, it is almost never the case that a manager will say, “let’s buy some computers to improve our business”. They say, “let’s acquire a logistics system or some book-keeping software or an online shop-front to improve our business”. Of course they will need some hardware too—but that is an after-thought. No-one really cares whether the hardware is in people’s pockets, on people’s desktops, or sitting in the cloud: it’s the software that delivers the service and it’s the service that counts.
The ETAG report reflects the widespread assumption that technology means hardware and that the way that that hardware is used, in combination with some generic and probably bundled software like web browsers and word processors, inevitably lies in the hands of the teachers. This is why the report echoes the general perception that “it’s not about the technology”:
the effectiveness of digital technology to enhance learning is dependent on how it is used.
Although this statement is self-evidently true, it is not necessarily very significant. Much technology only has a very limited range of uses. It may be that computer hardware can be used in many different ways: to play games, to watch videos, to communicate with friends, to write a novel or to crunch the data produced by radio signals received from distant galaxies in the search for extra-terrestrial life. Sage One Cashbook is a commercial software package that can only be used in a single way: it helps you keep your accounts.
The assumption that the effectiveness of ed-tech depends on how it is used assumes that ed-tech is about hardware and generic software (like browsers) which do not themselves determine how they are used. This is part of the paradigm which has been responsible for the ineffective deployment of education technology over the last 20 years. It creates an impossibly large “causal distance” between the acquisition of hardware and the delivery of learning gains and an absurdly unrealistic expectation that teachers have the ability to fill the gap with serious technical innovation (I have addressed the importance of technology and the inability of teachers to lead innovation elsewhere).
As a History Teacher, I might need timeline editors and software that allows students to match evidence to assertions. As a Maths Teacher I might need an easy-to-use equation editor that allows learners intuitively to manipulate algebraic formulae. As a Chemistry Teacher I might need flexible laboratory simulations and molecular modelling software. As a foreign language teacher I might need intelligent language laboratories providing simulated conversation. In all subjects I would need software that manages the process of assignment, the sequencing of learning activities, the aggregation and analysis of outcome data, and the management of student portfolios. None of this software exists in satisfactory form and none of this software can, even in our wildest dreams, be developed and maintained by front-line classroom teachers.
If it did exist, the effectiveness of these software components would be much easier to test because the “causal distance” between technology and outcome would be so much less. When you consider the missing piece, the software that has been built to address a particular requirement, then the dichotomy proposed by the ETAG report, between “digital technology” and “how it is used”, vanishes into thin air. It is not up to the person working on the checkout at Tesco to decide how to use the point-of-sale software installed on the till: the way the software can be used has been predetermined by its designer. In effective applications, “digital technology” and “how it is used” are so closely related as to be virtually the same thing.
If education technology (the term that the Education Technology Action Group never uses) is understood to refer to technology that is designed specifically to address the needs of education, then we are clearly talking about software, not hardware. And when we understand what we are talking about, we will then understand the reasons, both why ed-tech has failed to deliver learning gains and why ed-tech is so difficult to evaluate. The reason why ed-tech hasn’t worked is that there isn’t any – at least not to speak of.
The ETAG report deliberately ignores the evidence which shows that the models of technology in education advocated by the report have already been shown not to work. The significance of this evidence is dismissed for unfounded reasons, while no consideration is given to what education technology is or the potential of ed-tech to address the problems that do exist.
That ETAG was deficient in terms of argument and process
In a recent article in the Guardian, Tim Lott wrote that
One very key element of the liberal left has long been under threat: its liberalism – that is, its willingness to debate with anything outside a narrow range of opinions within its own walls. And the more scary and incomprehensible the world becomes, the more debate is replaced by edict and prejudice.
To have been effective, the ETAG report needed to take account of divergent opinions, honing its recommendations through a process of open and constructive debate and paying close attention to a wide range of evidence. Instead, it shut out dissenting opinions, paid little attention to any evidence, declared what it was going to say before consulting with others, and contented itself with reproducing the pre-judgements of the small, self-selected group at the heart of the group.
When the project was first announced, it was said that it would take evidence from the wider community by Twitter. Twitter is a poor medium through which to conduct any sort of complex debate and I can only suppose that it was chosen to illustrate the sort of edgy paradigm-shift that the ETAG leaders like to talk about. For reasons I have covered elsewhere, the Twitter conversation was a complete failure and it comes as no surprise that the final report did not make a single reference to its own consultation. Nor was any other form of formal consultation put in its place.
Even if the medium had been better chosen, it is clear that the ETAG leadership had little appetite for a genuine consultation in the first place. The Chairman published a detailed, 2,500-word “agreed futures” document in April 2014, right at the beginning of the ETAG process and before any consultation had occurred. This document outlined what would be the main conclusions of the report, declaring that:
without surprise – there is very broad agreement that those are indeed directions in which we are heading…[but] we would welcome braver thoughts and suggestions from all of you, to sit alongside our own thoughts.
Right from the beginning of the process, it was clear that there was no opportunity for anyone to challenge the Chairman’s basic assumptions, even, apparently, within the ETAG group itself. I met a number of members of the group at BETT and in the months before the publication of the report, all of whom expressed dissatisfaction about the process by which the group had been run. The final text of the report, which was hurriedly assembled by email in the two days before the group’s final deadline, was not signed off by the whole group, as even a set of routine meeting minutes requires. In the words of the introduction (page 3):
Stephen Heppell… authored the approach presented here on the basis of contributions by other ETAG members.
Nor does the report welcome the prospect of criticism (page 7):
We are aware that there will be gainsayers too; education suffers from more than enough of them.
At the same time, the report repeatedly asserts the authority of the “experienced group” (page 4) that made up the committee, with its “collective long years of experience” (page 4), which “drew on their own very considerable experience and wisdom” (page 6), having been “chosen to reflect a vast range of experience from across the education and technology sectors” (page 21). This claim to authority replaced any serious consideration of evidence or of divergent opinions. Asserting the truth of what you say on the basis of who you are, rather than what you can say in justification, is characteristic of the fundamentally illiberal position that Tim Lott complains is becoming increasingly characteristic of left wing thought.
While criticising the emphasis laid in the report on the group’s experience, I must emphasise that I do not question that experience itself. Given the lack of evidence of any impact of education technology to date, to which I have already referred, it has to be said that the collective experience of the group is one of failure. Even if such experience is itself valuable, in a situation in which success still lies in the future, experience, however useful it turns out to be, cannot be presented as a badge of authority. Proposals for action must be presented with circumspection and must be willing to justify themselves in the face of criticism.
There are several themes within the report with which I agree. For example, there is an important passage on page 11, which is characteristic of much of the work by Professor Diana Laurillard, which argues that at the beginning of a project one must look to the sort of pragmatic imagination that is best encapsulated in the language of engineering, rather than in the language of science, which can only study things that have already happened.
It is not the experience, nor even the expertise of the ETAG group that I deny; it is not the individual members of the ETAG group that I attack; nor all of their arguments that I dismiss. It is rather the way that these individual contributions have been harnessed into a pre-determined set of recommendations. It is above all the tone of the report, produced as a result of a flawed process, subject to the insufficiently examined group-think of a community that has become habituated to a role of advocacy rather than of practical implementation. It is a tone that presents what is in truth the community’s rather tenuous expertise as a badge of unquestionable authority, an excuse for the omission of any serious justification or debate, and the grounds for a general exhortation to something which comes very close to a moral crusade. Alongside a number of thoroughly flawed arguments, it is the report’s techno-zealotry that so comprehensively undermines its credibility.
I have made three criticisms of the ETAG report:
- that it ignores its remit in seeking to recommend what should be taught and not how it should be taught;
- that it dismisses the important evidence that shows that the recommendations of the group are ineffective (at least in respect of the question that the group was asked);
- that the process was intolerant of open debate or any disagreement with the views of the core leadership of the group.
In the end, these are all different aspects of the same criticism. It is that the ETAG report, and most of the activity of the “technology in education” community, represents an ideological, not a technocratic, position: a position that seeks to change the purpose of education and not to improve its methods. That is why it addresses ends and not means; that is why it ignores evidence; and that is why it does not tolerate challenge and debate.
While I believe that it has been necessary to offer a rebuttal of the ETAG report, I do not believe there is any benefit to be gained in continuing the debate about ETAG itself. I have made very similar arguments on this blog for more than three years and have received no considered response from those who support the ETAG line.
The great hope in this otherwise rather dismal situation is the reappointment of Nicky Morgan at the Department for Education. It is not that I have particular views on her politics or her personal abilities, but that the only way to break the intellectual stranglehold that the ETAG report expresses is by government action. In an area such as ed-tech, which I suspect is regarded in Whitehall as a somewhat esoteric and specialised, if not positively cranky, proposition, the culture of the generalist politician pursuing a succession of short postings is the enemy of effective action.
When Michael Gove decided that more needed to be done with ed-tech, it was natural that he should turn for advice to those figures whom his officials told him were leaders in the field, collectively parroting views which have been accepted as orthodoxy for twenty years. If, in the aftermath of the recent election, ed-tech is now to fall off the political agenda again and we have to wait for another four years for another Secretary of State to express a renewed interest, it is very likely that this future Secretary of State would, in their ignorance, again turn to the same familiar positions as were produced by the ETAG group.
Nicky Morgan is different. She has been able to follow the ETAG process. She asked a series of pertinent questions of the ETAG group and, having read their report, saw that her questions were not answered. She has caught a glimpse, however brief, of the emperor’s nakedness. She now comes back into office understanding that of all the “barriers to the growth of innovative learning technology” that the ETAG group was tasked to identify, the most significant is the views of the ETAG group itself. In this way, the lack of understanding at Ministerial level of the current ed-tech landscape has been overcome. There is no better moment for the introduction of a set of decisive ed-tech policies that will support the achievement of government objectives.
There are at least five positive reasons why such a decisive initiative can now be expected.
- Given the circumstances of the election, the new government enters office with a degree of reforming authority (and indeed expectation) that no government has had since 1997.
- One reason why the Blairite ed-tech initiative had such disappointing results is that we did not then have the infrastructural foundations that ed-tech really requires: mobile (which puts us in reach of 1:1 device ratios), cloud (which gives us cheap, easily acquired services and the potential for strong data interoperability), and increasingly easy-to-use user interfaces. We can do things in 2015 that were simply not possible 15 or 20 years ago.
- The under-performance of Western education systems, highlighted by the PISA reports, and the relationship of this issue to problems of employment, productivity, social mobility, standards of living and social cohesion (let alone to personal fulfilment) lends real political urgency to educational reform.
- The theoretical foundation on which orthodox views of ed-tech have been overlaid—theories that emphasise twenty-first century skills and independent learning—has been very seriously undermined, if not largely destroyed, by the last government’s structural reforms of education in combination with a new generation of educational writers and bloggers that includes Daisy Christodoulou, Tom Bennett, Andrew Old and Robert Peal.
- There are important synergies between ed-tech and many of the other political priorities to have emerged from the DfE over the last five years: the requirement for better educational research; the need for more effective and less intrusive methods of monitoring than have been provided by Ofsted; for more accurate and less intrusive forms of embedded assessment in a world without levels; for more consistent pedagogical practice in the classroom; for a reduction in teacher workload and stress; for new curricula, such as technical and vocational education, where these are deemed desirable by key stakeholders.
Ed-tech can contribute to the solution of all of these problems. I would go further and argue that it is difficult to conceive of realistic solutions to most of these problems without ed-tech.
That is why I am so confident in predicting a new and significant initiative, based on a new conception of what ed-tech involves. It will not lay aside the experience of the ed-tech community, including many of those who were involved with the ETAG group. What it will do is to require this community to engage in a new, more open and more searching debate that will test our assumptions and stimulate a more productive synergy between our different experiences and insights. Such a debate will also include other figures, not traditionally identified with the ed-tech community, who will have important contributions to make in developing a coherent ed-tech policy: people like Tim Oates, who has campaigned for better textbooks (an argument that is largely transferable to digital courseware); Amanda Spielman, who has considerable insight, through her reforming work at Ofqual, into the contribution that data analytics can make towards better assessment; Daisy Christodoulou and Rob Coe, who are sitting on the government’s advisory panel on assessment without levels; representatives of the technical community for data interoperability that is growing up around the TinCan/Experience API initiative in the United States; representatives of the ed-tech supplier industries, on whom we will rely to provide the missing activity and management software on which progress will depend; as well as the views of teachers and learners, ed-tech sceptics just as much as ed-tech enthusiasts.
So favourable are the circumstances to the launch of such a significant new initiative that if it does not happen I will join a well-known member of the Liberal Democrat Party in eating my hat.