After ETAG

After the ball, by Ramon Casas y Carbo

After the ETAG report failed to provide a coherent route map, the new government now has a once-in-a-decade opportunity to develop a radical new approach to ed-tech.

In the autumn of 2013, I welcomed the Coalition government’s revival of interest in ed-tech after four years of neglect (see “Land ho!” of December 2013). But the process of bringing the ship into port does not always run smoothly. If last January’s report from the Education Technology Action Group (ETAG) is of any value, it is only because it shows so clearly how muddled is the thinking of the ed-tech community in the UK.

I do not welcome the opportunity to produce another negative critique (indeed, I have hesitated for four months before doing so). I would much rather move on to make a positive contribution to a coherent discussion about effective ed-tech policy. But so long as a group such as ETAG, established with some fanfare by Ministers, produces such a poorly reasoned argument, there seems to be little option but to offer a rebuttal.

What I want to emphasise at the end of this piece, however, is that the failure of ETAG provides an important opportunity to put aside the muddled vision of technology in education that has dominated our discourse for the last 20 years. Following last week’s election, we have in 2015 the best opportunity since 1997 to make a fresh start and introduce a truly effective model of education technology into our schools.

The criticism of ETAG

I make my criticism of ETAG under three headings:

  • that ETAG failed to answer the question it was asked;
  • that ETAG failed to address the evidence;
  • that ETAG was deficient in terms of argument and process.

That ETAG failed to answer the question it was asked

The ETAG remit is stated on Professor Stephen Heppell’s blog as being:

to best support the agile evolution of the FE, HE and schools sectors in anticipation of disruptive technology for the benefit of learners, employers & the UK economy…[and] to identify any barriers to the growth of innovative learning technology that have been put in place (inadvertently or otherwise) by the Governments, as well as thinking about ways that these barriers can be broken down.

In a letter in February, Minister Nick Boles simplified this remit to the following: ETAG’s job has been to advise Ministers on “how the education system might act to get the best out of technology”.

In her speech to BETT on 21 January, Nicky Morgan went further, suggesting some of the ways in which she hoped the question would be answered.

I look forward to studying the group’s report, which will be published today. But as I do so, I will be looking for ideas in a number of areas where I think technology can transform the educational landscape.

The first is accountability…

The second area I would like to look at is assessment and reporting…

Finally, I believe technology can play a critical role in helping to deliver one of my major priorities: reducing teacher workload.

Improving accountability, improving assessment and reporting and reducing teacher workload: these are just some of the obvious ways in which education might be able to “get the best out of technology”. But if you read the ETAG report to find a vision of how these benefits might be realised, you will be disappointed. While technology routinely saves work in almost every business sector to which it is applied, all that ETAG says about the effect of its recommendations is that workload is likely to increase, at least in the short term (page 29).

Teachers who move to online teaching will be aware of a significant, but only initial, increase in their workload, if they are setting out to make optimal use of the technology.

You would expect that the issue of how technology can help to improve assessment and accountability would be addressed in the section entitled “Assessment and accountability”. Again, you would be disappointed. The section starts by saying (page 17):

Government policy, in the form of the National Curriculum and the strategic direction set for assessment and accountability, is a critical lever for influencing educational practice

and it continues to explain how assessment and accountability should be harnessed to enforce technology-based teaching on schools. But it says nothing about how technology can improve assessment and accountability.

Nor does the report propose any other ways in which education can “get the best out of technology”. Instead, it chose to address a completely different question—a question that it was never asked. The critical passage occurs on page 7:

Fundamentally, we concluded that the use of digital technology in education is not optional. Competence with digital technology to find information, to create, to critique and share knowledge, is an essential contemporary skill set.

This key passage expresses the report’s whole argument on why technology should be more widely used in education. It has nothing at all to do with education “getting the best out of technology”, nothing to do with the use of technology as a pedagogical and management tool to improve the quality of teaching and learning. It expresses its argument solely in terms of curriculum objectives: the technology-dependent skillset that it thinks every modern school child should learn. The question was about how we should teach: ETAG has given an answer which is about what we teach.

I have frequently questioned the value of what are generally called “twenty-first century skills”, but there is no point in renewing such a discussion here. All that matters is that ETAG’s fundamental conclusion, right or wrong, is simply irrelevant. A report which is supposedly about “education technology” does not use that term once, beyond the title page. A report which was meant to make recommendations on how technology could improve education instead focused exclusively on how education should spend more time teaching about technology. They addressed the wrong subject and answered the wrong question.

That ETAG failed to address the evidence

The evidence of the impact of technology

If the ETAG group had addressed the question it was asked, it would quickly have come up against the difficulty that, although very large amounts of money have been spent on promoting technology in education, there is precious little evidence that it has had any impact on the quality of teaching and learning. The meta-analysis conducted by Professor Higgins for the Education Endowment Foundation summarises the evidence as follows (page 3):

Taken together, the correlational and experimental evidence does not offer a convincing case for the general impact of digital technology on learning outcomes.

The recent NESTA report, Decoding Learning, draws a similar conclusion (on page 8):

Evidence of digital technologies producing real transformation in learning and teaching remains elusive.

Professor Angela MacFarlane (a member of ETAG), in her most recent book, Authentic Learning for the Digital Generation, draws the same conclusion again (in Chapter 1, What to do with the technology once you have it?):

There is a long history of providing digital technologies to schools and looking for impact through improvements in standardised test scores. Evidence of success measured in this way is scarce and, where it does exist, it is restricted to small-scale projects with high levels of technology provision and/or staffing that are not sustainable in the long term and certainly not scalable to whole-school systems.

Any credible answer to the question about how technology can improve teaching and learning cannot avoid starting by asking why previous efforts to use education technology have been so unsuccessful.

ETAG’s response

The ETAG report does not make any reference to this conspicuous lack of evidence. Instead, it dismisses the importance of evidence itself, stating on page 11 that:

Evidence is a problematic concept when thinking about digital technology in education because:

  • it is very difficult to show a clear causal relationship between a single variable (such as the introduction of phones) and learning outcomes…
  • digital technology has changed the nature of disciplines outside school and thus should also impact on the curriculum inside school…[but] the existing metrics in education…have not been changed enough to reflect this…
  • the effectiveness of digital technology to enhance learning is dependent upon how it is used, and very subtle differences in the way it is implemented can have large impacts, thus it is difficult to make valid generalisations between different implementations/contexts.

As already discussed, the second of these points is irrelevant. It addresses the aims of education and not the means. It may or may not be true that the nature of academic disciplines has fundamentally changed (personally, I think that it is not true) but it should be possible to use a pedagogical tool to improve the teaching of Ancient Greek, exactly as the ancient Greeks understood it, just as much as to improve the teaching of the most cutting-edge version of Computer Science. Whether you would want to teach Ancient Greek in preference to Computer Science is a different question.

The first and the third bullet points make exactly the same argument. They both claim that it is “very difficult” to show a causal relationship between a single variable (such as the introduction of phones) and subsequent learning outcomes because there are so many other variables that are likely to influence the outcomes.

The report does not enter into any further discussion about why it is not possible to isolate these other, confounding variables. By suggesting that “evidence is a problematic concept”, it gives a heavy hint that the problem is one of fundamental principle and is therefore insoluble.

Some people might assume from this language that quantitative evidence is intrinsically unsuited to the investigation of causal relationships because, as is often said, “Correlation does not imply causation”. As I have explained on this blog before, this popular aphorism is simply untrue: correlation does imply causation—it is just that a single correlation does not provide sufficient evidence to show what sort of causation is involved. By triangulating multiple correlations and by observing the order of events, precise evidence of causation can be obtained, solely by the observation of correlations. In fact, as David Hume pointed out, correlation is the only evidence we ever have for the existence of causal relationships (see my longer discussion of this issue).

A more substantial problem, and the problem that is explicitly referenced in the ETAG discussion, is the difficulty of managing “confounding” variables: the quality of the teacher and her relationship with the students, the socio-economic background of the students, the culture of the school, the relevance of prior learning and attitudes etc. Managing these confounding variables is exactly what Randomized Control Trials (RCTs) are designed to do. The introduction of RCTs to education has been the subject of a high-profile campaign by Ben Goldacre and was the explicit aim of a £110 million DfE grant to the Education Endowment Foundation in 2010. But nowhere in the ETAG report is the subject of RCTs or their use in education discussed.

It is not plausible to expect that confounding variables can be removed: one cannot pursue educational research in the equivalent of a sterilised vacuum, without the presence of a teacher or with students who are somehow wiped clean of their own assumptions, attitudes and prior experience before entering the classroom. RCTs do not seek to eliminate these confounding variables. Instead they seek to eliminate any distortion that they might cause by randomizing them. One group works with the intervention that is being evaluated, one group works without the intervention; but in all other respects, both groups contain a randomized and broadly equal representation of other influences. This technique works perfectly well in medicine and there is in principle no reason why the method should not be successfully applied in education.
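The mechanism can be illustrated in a few lines. The sketch below uses entirely hypothetical numbers (a made-up “prior attainment” confounder and a built-in treatment effect of +2.0), not real educational data; it shows how random assignment balances the confounder across the two arms, so that a simple difference of means recovers the true effect without anyone having to measure or remove the confounder.

```python
import random
import statistics

random.seed(42)

# Hypothetical cohort: each student has a confounder (prior attainment)
# and an outcome that depends on it, plus a treatment effect of +2.0.
students = [{"prior": random.gauss(50, 10)} for _ in range(10_000)]

# Randomization: assign each student to treatment or control by coin flip,
# which balances the confounder across the two arms on average.
for s in students:
    s["treated"] = random.random() < 0.5
    effect = 2.0 if s["treated"] else 0.0
    s["outcome"] = 0.8 * s["prior"] + effect + random.gauss(0, 5)

treated = [s for s in students if s["treated"]]
control = [s for s in students if not s["treated"]]

# The confounder is (approximately) balanced between the arms, so the
# difference in mean outcomes is an unbiased estimate of the effect.
prior_gap = statistics.mean(s["prior"] for s in treated) - statistics.mean(
    s["prior"] for s in control)
effect_est = statistics.mean(s["outcome"] for s in treated) - statistics.mean(
    s["outcome"] for s in control)
print(f"confounder gap between arms: {prior_gap:+.2f}")
print(f"estimated treatment effect:  {effect_est:+.2f}")  # close to 2.0 by design
```

The point of the sketch is that nothing about the confounder needed to be known in advance: randomization neutralised it automatically, which is exactly the property that makes RCTs attractive where confounders are numerous and hard to measure.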

I say that there is no reason “in principle” because it is true that in practice there are difficulties.

The complexity of educational objectives

Problem 1. The metrics that are used to measure success in education are less clear and more contested than are the metrics used in medicine. Health may be reliably indicated by temperature and blood pressure, while “being good at Maths” is rather harder to define, let alone quantify. This is an issue that is referenced by the ETAG report which says on page 11, reasonably enough:

At the heart of this is a debate about learning outcomes—which are themselves complex and multi-variate.

However, this problem is not nearly as significant as is often supposed, for two reasons.

The most significant reason is that some outcomes in education are remarkably straightforward. The ability of a student to perform basic addition or to recall the meaning of a defined list of French words is easy to assess and quantify. If you are addressing the use of technology as a pedagogical tool (and not as a curriculum objective) then the effectiveness of that tool can be demonstrated against easily assessed learning objectives just as well as against hard-to-assess objectives. You do not have to wait until we can assess every soft skill in every different applied context. And if it is not possible to deploy technology successfully in pursuit of straightforward, mechanical learning objectives, how likely is it that we will be able to deploy it successfully in the pursuit of more complex skills like creativity and teamwork?

Second, I would argue that an important benefit of education technology will be to use data analytics to clarify the way complex learning objectives are defined and how the achievement of those objectives is expressed. I will not elaborate further here as it is a detailed discussion and the first point is sufficient to make my case. But the fact that the effectiveness of ed-tech can be demonstrated in the pursuit of easily achieved and easily assessed learning objectives should not in any way be taken to imply that ed-tech will narrow the curriculum to its more mechanical aspects. On the contrary, it is only technology-enabled data analytics that will be able to give proper weight to the softer skills that teachers, parents and prospective employers rightly value and that traditional exams find so hard to measure.

The poor availability of randomized educational data

Problem 2. It is difficult to randomize school children, who are normally grouped into classes and schools, which are subject to similar influences. What might appear on the surface to be a statistically significant sample of 1,000 students becomes much less impressive when you realise that all those students are drawn from a mere dozen different schools. This means that in order to achieve statistically useful random samples, very large samples must be used.
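The size of this clustering penalty can be quantified with the standard design-effect formula, deff = 1 + (m − 1) × ICC, where m is the average cluster size and ICC is the intra-cluster correlation (how much pupils within the same school resemble one another). A quick sketch, using the 1,000-students-from-a-dozen-schools figure above and an illustrative ICC of 0.2 (an assumption for the example, not a measured value):

```python
# Effective sample size under cluster sampling (illustrative figures).
# deff = 1 + (m - 1) * icc, where m is the average cluster (school) size
# and icc is the intra-cluster correlation; n_eff = n / deff.

def effective_sample_size(n: int, clusters: int, icc: float) -> float:
    """Return the effective sample size of n students drawn from
    `clusters` schools with intra-cluster correlation `icc`."""
    m = n / clusters              # average students per school
    deff = 1 + (m - 1) * icc     # design effect
    return n / deff

# 1,000 students from 12 schools, with an assumed ICC of 0.2, carry
# roughly as much statistical information as a few dozen independent
# observations.
print(round(effective_sample_size(1000, 12, 0.2)))  # → 57
```

On these assumptions, the apparently impressive sample of 1,000 behaves like fewer than 60 independent observations, which is why cluster-randomized trials in education need so many schools, not just so many pupils.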

Problem 3. Schools are data-poor. This may not be immediately apparent to teachers, who often complain that they are drowning in data—but you can drown in half an inch of water if you are flat on your face in the gutter. Teachers are generally overwhelmed by data because of the inadequacy of their data-handling systems and the doubtful reliability of much educational data, not because of the amount of the data itself. Data is commonly circulated in manually constructed spreadsheets. Real-time data, based on the levelling system, is now recognised to have been so unreliable that the whole system has been abolished. Summative exam data is not much more reliable than teacher assessments and normally arrives too late to be of much use in the teaching process. The only data that is reasonably reliable, timely and collected consistently measures whether children turn up in the morning. As Donald Clarke has commented, if you are measuring bums on seats, you are measuring the wrong end of the learner. It is because schools are so data-poor that researchers have to devise their own customised methods of collecting data for the purposes of research. This is not only expensive, but also exposes the research to distorting observation bias.

Problem 4. As soon as students realise that their work is the subject of research, they are likely to try harder (this is the Hawthorne Effect). As soon as teachers and researchers are aware that an intervention is being made that is expected to lead to improved learning, they will tend to see the improvement that they expect (this is the Pygmalion Effect). Like medicine’s Placebo Effect, all these distorting influences can be eliminated by properly structured RCTs, which are commonly used in medical research but hardly ever in education.

These problems—the lack of data, the difficulty of randomization, and the distortion of data by observation bias—can all be solved by harvesting learning outcome data by digital means, routinely and automatically as part of the everyday teaching and learning process. This is what ed-tech will allow, when the right sort of ed-tech has been built. Given routine data harvesting and automatic analytics, neither students nor teachers would have to be made aware that they are the subjects of research, while large quantities of data could be collected from properly randomized samples at low cost. The problems with educational research, such as they are, will be solved by the application of data analytics.

This represents another important answer to the government’s question about “how the education system might act to get the best out of technology”, beyond the three already suggested by Nicky Morgan. Needless to say, it is not an answer that was given any consideration by ETAG.

To admit the difficulties of educational research is not to say that no significant research has been carried out, or that the underwhelming results of the research that has been done can easily be explained away. Large amounts of money were thrown at the problem under Becta, notably on the ImpaCT reports in the early part of the noughties and the ICT Testbed research in 2006-8. The latter covered 30 schools, spending an average of £1 million per school on technology, and showed no significant impact on learning outcomes at KS3 or KS4, and a very small impact at KS2 that is most plausibly explained by a combination of the Hawthorne Effect and regression to the mean (the tendency of the poorly performing schools that were chosen for the trial to improve anyway). The fact that quantitative research in education has its challenges is not a reason to ignore the results of the significant amount of quantitative research that has been conducted in this area.
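Regression to the mean is easy to demonstrate with a toy simulation (hypothetical school scores, not the ICT Testbed data). Each school has a stable underlying level; observed scores add year-to-year noise. Schools selected for scoring badly in one noisy measurement will, on average, score better on the next measurement with no intervention at all:

```python
import random
import statistics

random.seed(0)

# Hypothetical schools: each has a stable "true" level; observed scores
# are the true level plus independent year-to-year noise.
true_level = [random.gauss(50, 5) for _ in range(500)]
year1 = [t + random.gauss(0, 5) for t in true_level]
year2 = [t + random.gauss(0, 5) for t in true_level]

# Select the worst-scoring schools in year 1 (as a trial of struggling
# schools might), then watch their average rise in year 2 untreated.
worst = sorted(range(500), key=lambda i: year1[i])[:50]
y1_mean = statistics.mean(year1[i] for i in worst)
y2_mean = statistics.mean(year2[i] for i in worst)
print(f"year 1 mean of selected schools: {y1_mean:.1f}")
print(f"year 2 mean of selected schools: {y2_mean:.1f}")  # higher, despite no intervention
```

The selected schools improve simply because part of their poor year-1 showing was bad luck that does not repeat, which is why an uncontrolled before-and-after comparison of struggling schools will flatter almost any intervention.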

Causal distance

Problem 5. The fifth problem does not apply to educational research in general, but only to the particular circumstance and the general approach envisaged by the ETAG report. When arguing that “evidence is a problematic concept”, the report cites the difficulty of establishing the causal effect of “the introduction of phones” into the classroom. This is the problem of what I would term “causal distance”.

As a child, I enjoyed playing a game called “Mousetrap”, in which, as you moved your mouse tokens round the playing board, you could assemble an elaborate Heath Robinson apparatus which could then be used to capture your opponent’s mice when they moved into the danger zone. When the apparatus was complete, you could turn a handle, which released a spring, which nudged a ball, which fell onto a lever, which catapulted a model figure, which dislodged a cage, which fell onto the mouse, which, if everything went exactly to plan, was then trapped. Such was the “causal distance” between the first turn of the handle and the final falling of the cage that the process often went wrong and your opponent’s mouse was able to continue on its way unscathed.

There is a similar “causal distance” between the acquisition of technical hardware (such as mobile phones) and the effective use of those phones for learning. The assumption of the ETAG report is that this causal gap will be filled by the actions (of unpredictable and variable effectiveness) of the classroom teacher, who is responsible for devising ways actually to use the phones. This problem follows from the common but unhelpful identification of “technology” with “hardware”. Time and again one reads reports and blogs which use “ed-tech” interchangeably with “iPads”, “mobile phones” or “BYOD”.

In other sectors, which generally have a much better record than education in making productive use of technology, it is almost never the case that a manager will say, “let’s buy some computers to improve our business”. They say, “let’s acquire a logistics system or some book-keeping software or an online shop-front to improve our business”. Of course they will need some hardware too—but that is an after-thought. No-one really cares whether the hardware is in people’s pockets, on people’s desktops, or sitting in the cloud: it’s the software that delivers the service and it’s the service that counts.

The ETAG report reflects the widespread assumption that technology means hardware and that the way that that hardware is used, in combination with some generic and probably bundled software like web browsers and word processors, inevitably lies in the hands of the teachers. This is why the report echoes the general perception that “it’s not about the technology”:

the effectiveness of digital technology to enhance learning is dependent on how it is used.

Although this statement is self-evidently true, it is not necessarily very significant. Much technology only has a very limited range of uses. It may be that computer hardware can be used in many different ways: to play games, to watch videos, to communicate with friends, to write a novel or to crunch the data produced by radio signals received from distant galaxies in the search for extra-terrestrial life. Sage One Cashbook is a commercial software package that can only be used in a single way: it helps you keep your accounts.

The assumption that the effectiveness of ed-tech depends on how it is used assumes that ed-tech is about hardware and generic software (like browsers) which do not themselves determine how they are used. This is part of the paradigm which has been responsible for the ineffective deployment of education technology over the last 20 years. It creates an impossibly large “causal distance” between the acquisition of hardware and the delivery of learning gains and an absurdly unrealistic expectation that teachers have the ability to fill the gap with serious technical innovation (I have addressed the importance of technology and the inability of teachers to lead innovation elsewhere).

As a history teacher, I might need timeline editors and software that allows students to match evidence to assertions. As a maths teacher I might need an easy-to-use equation editor that allows learners intuitively to manipulate algebraic formulae. As a chemistry teacher I might need flexible laboratory simulations and molecular modelling software. As a foreign language teacher I might need intelligent language laboratories providing simulated conversation. In all subjects I would need software that manages the process of assignment, the sequencing of learning activities, the aggregation and analysis of outcome data, and the management of student portfolios. None of this software exists in satisfactory form and none of this software can, even in our wildest dreams, be developed and maintained by front-line classroom teachers.

If it did exist, the effectiveness of these software components would be much easier to test because the “causal distance” between technology and outcome would be so much less. When you consider the missing piece, the software that has been built to address a particular requirement, then the dichotomy proposed by the ETAG report, between “digital technology” and “how it is used”, vanishes into thin air. It is not up to the person working on the checkout at Tesco to decide how to use the point-of-sale software installed on the till: the way the software can be used has been predetermined by its designer. In effective applications, “digital technology” and “how it is used” are so closely related as to be virtually the same thing.

If education technology (the term that the Education Technology Action Group never uses) is understood to refer to technology that is designed specifically to address the needs of education, then we are clearly talking about software, not hardware. And when we understand what we are talking about, we will then understand the reasons, both why ed-tech has failed to deliver learning gains and why ed-tech is so difficult to evaluate. The reason why ed-tech hasn’t worked is that there isn’t any – at least not to speak of.


The ETAG report deliberately ignores the evidence that the models of technology in education it advocates have already been shown not to work. The significance of this evidence is dismissed for unfounded reasons, while no consideration is given to what education technology really is, or to the potential of ed-tech to address the problems that do exist.

That ETAG was deficient in terms of argument and process

In a recent article in the Guardian, Tim Lott wrote that

One very key element of the liberal left has long been under threat: its liberalism – that is, its willingness to debate with anything outside a narrow range of opinions within its own walls. And the more scary and incomprehensible the world becomes, the more debate is replaced by edict and prejudice.

To have been effective, the ETAG report needed to take account of divergent opinions, honing its recommendations through a process of open and constructive debate and paying close attention to a wide range of evidence. Instead, it shut out dissenting opinions, paid little attention to any evidence, declared what it was going to say before consulting with others, and contented itself with reproducing the pre-judgements of the small, self-selected group at its heart.

When the project was first announced, it was said that it would take evidence from the wider community by Twitter. Twitter is a poor medium through which to conduct any sort of complex debate and I can only suppose that it was chosen to illustrate the sort of edgy paradigm-shift that the ETAG leaders like to talk about. For reasons I have covered elsewhere, the Twitter conversation was a complete failure and it comes as no surprise that the final report did not make a single reference to its own consultation. Nor was any other form of formal consultation put in its place.

Even if the medium had been better chosen, it is clear that the ETAG leadership had little appetite for a genuine consultation in the first place. The Chairman published a detailed, 2,500-word “agreed futures” document in April 2014, right at the beginning of the ETAG process and before any consultation had occurred. This document outlined what would be the main conclusions of the report, declaring that:

without surprise – there is very broad agreement that those are indeed directions in which we are heading…[but] we would welcome braver thoughts and suggestions from all of you, to sit alongside our own thoughts.

Right from the beginning of the process, it was clear that there was no opportunity for anyone to challenge the Chairman’s basic assumptions, even, apparently, within the ETAG group itself. I met a number of members of the group at BETT and in the months before the publication of the report, all of whom expressed dissatisfaction about the process by which the group had been run. The final text of the report, which was hurriedly assembled by email in the two days before the group’s final deadline, was not signed off by the whole group, as even a set of routine meeting minutes requires. In the words of the introduction (page 3):

Stephen Heppell… authored the approach presented here on the basis of contributions by other ETAG members.

Nor does the report welcome the prospect of criticism (page 7):

We are aware that there will be gainsayers too; education suffers from more than enough of them.

At the same time, the report repeatedly asserts the authority of the “experienced group” (page 4) that made up the committee, with its “collective long years of experience” (page 4), which “drew on their own very considerable experience and wisdom” (page 6), having been “chosen to reflect a vast range of experience from across the education and technology sectors” (page 21). This claim to authority replaced any serious consideration of evidence or of divergent opinions. Asserting the truth of what you say on the basis of who you are, rather than what you can say in justification, is characteristic of the fundamentally illiberal position that Tim Lott complains is becoming increasingly characteristic of left wing thought.

While criticising the emphasis laid in the report on the group’s experience, I must emphasise that I do not question that experience itself. Given the lack of evidence of any impact of education technology to date, to which I have already referred, it has to be said that the collective experience of the group is one of failure. Even if such experience is itself valuable, in a situation in which success still lies in the future, experience, however useful it turns out to be, cannot be presented as a badge of authority. Proposals for action must be presented with circumspection and must be willing to justify themselves in the face of criticism.

There are several themes within the report with which I agree. For example, there is an important passage on page 11, characteristic of much of the work of Professor Diana Laurillard, which argues that at the beginning of a project one must look to the sort of pragmatic imagination that is best encapsulated in the language of engineering, rather than in the language of science, which can only study things that have already happened.

It is not the experience, nor even the expertise of the ETAG group that I deny; it is not the individual members of the ETAG group that I attack; nor all of their arguments that I dismiss. It is rather the way that these individual contributions have been harnessed into a pre-determined set of recommendations. It is above all the tone of the report, produced as a result of a flawed process, subject to the insufficiently examined group-think of a community that has become habituated to a role of advocacy rather than of practical implementation. It is a tone that presents what is in truth the community’s rather tenuous expertise as a badge of unquestionable authority, an excuse for the omission of any serious justification or debate, and the grounds for a general exhortation to something which comes very close to a moral crusade. Alongside a number of thoroughly flawed arguments, it is the report’s techno-zealotry that so comprehensively undermines its credibility.


I have made three criticisms of the ETAG report:

  • that it ignores its remit in seeking to recommend what should be taught and not how it should be taught;
  • that it dismisses the important evidence that shows that the recommendations of the group are ineffective (at least in respect of the question that the group was asked);
  • that the process was intolerant of open debate or any disagreement with the views of the core leadership of the group.

In the end, these are all different aspects of the same criticism. It is that the ETAG report, and most of the activity of the “technology in education” community, represents an ideological, not a technocratic, position: a position that seeks to change the purpose of education and not to improve its methods. That is why it addresses ends and not means; that is why it ignores evidence; and that is why it does not tolerate challenge and debate.

The alternative

While I believe that it has been necessary to offer a rebuttal of the ETAG report, I do not believe there is any benefit to be gained in continuing the debate about ETAG itself. I have made very similar arguments on this blog for more than three years and have received no considered response from those who support the ETAG line.

The great hope in this otherwise rather dismal situation is the reappointment of Nicky Morgan at the Department for Education. It is not that I have particular views on her politics or her personal abilities, but that the only way to break the intellectual stranglehold that the ETAG report expresses is by government action. In an area such as ed-tech, which I suspect is regarded in Whitehall as a somewhat esoteric and specialised, if not positively cranky, proposition, the culture of the generalist politician pursuing a succession of short postings is the enemy of effective action.

When Michael Gove decided that more needed to be done with ed-tech, it was natural that he should turn for advice to those figures whom his officials told him were leaders in the field, collectively parroting views which have been accepted as orthodoxy for twenty years. If, in the aftermath of the recent election, ed-tech is now to fall off the political agenda again and we have to wait for another four years for another Secretary of State to express a renewed interest, it is very likely that this future Secretary of State would, in their ignorance, again turn to the same familiar positions as were produced by the ETAG group.

Nicky Morgan is different. She has been able to follow the ETAG process. She asked a series of pertinent questions of the ETAG group and, having read their report, saw that her questions were not answered. She has caught a glimpse, however brief, of the emperor’s nakedness. She now comes back into office understanding that of all the “barriers to the growth of innovative learning technology” that the ETAG group was tasked to identify, the most significant is the views of the ETAG group itself. In this way, the lack of understanding at Ministerial level of the current ed-tech landscape has been overcome. There is no better moment for the introduction of a set of decisive ed-tech policies that will support the achievement of government objectives.

There are at least five positive reasons why such a decisive initiative can now be expected.

  1. Given the circumstances of the election, the new government enters office with a degree of reforming authority (and indeed expectation) that no government has had since 1997.
  2. One reason why the Blairite ed-tech initiative had such disappointing results is that we did not then have the infrastructural foundations that ed-tech really requires: mobile (which puts us in reach of 1:1 device ratios), cloud (which gives us cheap, easily acquired services and the potential for strong data interoperability), and increasingly easy-to-use user interfaces. We can do things in 2015 that were simply not possible 15 or 20 years ago.
  3. The under-performance of Western education systems, highlighted by the PISA reports, and the relationship of this issue to problems of employment, productivity, social mobility, standards of living and social cohesion (let alone to personal fulfilment) lends real political urgency to educational reform.
  4. The theoretical foundation on which orthodox views of ed-tech have been overlaid—theories that emphasise twenty-first century skills and independent learning—has been very seriously undermined, if not largely destroyed, by the last government’s structural reforms of education in combination with a new generation of educational writers and bloggers that includes Daisy Christodoulou, Tom Bennett, Andrew Old and Robert Peal.
  5. There are important synergies between ed-tech and many of the other political priorities to have emerged from the DfE over the last five years: the requirement for better educational research; the need for more effective and less intrusive methods of monitoring than have been provided by Ofsted; for more accurate and less intrusive forms of embedded assessment in a world without levels; for more consistent pedagogical practice in the classroom; for a reduction in teacher workload and stress; for new curricula, such as technical and vocational education, where these are deemed desirable by key stakeholders.

Ed-tech can contribute to the solution of all of these problems. I would go further and argue that it is difficult to conceive of realistic solutions to most of these problems without ed-tech.

That is why I am so confident in predicting a new and significant initiative, based on a new conception of what ed-tech involves. It will not lay aside the experience of the ed-tech community, including many of those who were involved with the ETAG group. What it will do is to require this community to engage in a new, more open and more searching debate that will test our assumptions and stimulate a more productive synergy between our different experiences and insights. Such a debate will also include other figures, not traditionally identified with the ed-tech community but who will have important contributions to make in developing a coherent ed-tech policy: people like Tim Oates, who has campaigned for better textbooks (an argument that is largely transferable to digital courseware); Amanda Spielman, who has considerable insight into the contribution that data analytics can make towards better assessment through her reforming work at Ofqual; Daisy Christodoulou and Rob Coe, who are sitting on the government’s advisory panel on assessment without levels; representatives of the technical community for data interoperability that is growing up around the TinCan/Experience API initiative in the United States; representatives of the ed-tech supplier industries, on whom we will rely to provide the missing activity and management software on which progress will depend; as well as including the views of teachers and learners, ed-tech sceptics just as much as ed-tech enthusiasts.
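For readers unfamiliar with it, the TinCan/Experience API (xAPI) mentioned above standardises learning records as simple “actor – verb – object” statements that different tools can exchange, which is what makes the data interoperability in question possible. A minimal sketch in Python follows; the learner, activity names and URLs are purely illustrative, not drawn from any real system:

```python
import json

# A minimal Experience API (xAPI / "Tin Can") statement: the
# interoperable actor-verb-object record that learning tools exchange.
# All names and identifiers below are invented for illustration.
statement = {
    "actor": {
        "name": "Example Learner",
        "mbox": "mailto:learner@example.com",
    },
    "verb": {
        # "completed" is one of the standard ADL verb identifiers
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/activities/fractions-quiz-1",
        "definition": {"name": {"en-US": "Fractions quiz 1"}},
    },
    "result": {"score": {"scaled": 0.85}, "success": True},
}

# Statements are serialised as JSON and sent to a Learning Record Store
print(json.dumps(statement, indent=2))
```

Because every tool emits the same statement shape, a learning record store can aggregate activity data from many suppliers’ products – precisely the kind of cheap, strong data interoperability referred to above.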

So favourable are the circumstances to the launch of such a significant new initiative that if it does not happen I will join a well-known member of the Liberal Democrat Party in eating my hat.

14 thoughts on “After ETAG”

  1. Excellent critique.

    Your point 4 – “The theoretical foundation on which orthodox views of ed-tech have been overlaid—theories that emphasise twenty-first century skills and independent learning—has been very seriously undermined, if not largely destroyed, by the last government’s structural reforms of education in combination with a new generation of educational writers and bloggers that includes Daisy Christodoulou, Tom Bennett, Andrew Old and Robert Peal.”

    I’m not clear if you’re saying that’s a positive or a negative thing.

    • Thank you Juliet. I am saying it is a good thing, as I see so-called “progressive” theory as intellectually flawed and unsupported by the evidence, such as it is. There is always a danger of polarisation, of course, and I am not saying the “post-progressives” have got it all right or that all the evidence is in – but I think it is helpful to break the stranglehold of false orthodoxy and overbearing certainty.

      I am also saying that ed-tech, released from this false orthodoxy, will have an important role in finding the nuanced middle way, where this exists. Digital ed-tech is layered on top of “native” ed-tech (e.g. theories of pedagogy), so you would assume that you need to sort out the theory before creating the ed-tech, just as you would do a requirements analysis before creating any other sort of software, or write a book before printing it. But I am saying that at a systemic level the order of events is reversed. Because ed-tech provides a medium through which pedagogy can be expressed, replicated and evidenced, it will stimulate the advance of education theory by exerting a sort of pull factor, much as the invention of printing helped stimulate the Reformation and the Enlightenment: reversing the sequence of events you would look for in the publication of a single book. See my “Stimulating pedagogical innovation”.

      I hope this helps – and thanks again for the comment. Crispin.

  2. Crispin – I find myself agreeing with parts and disagreeing with parts. I think I would start by stating my own position, which is that there are two roles that technology needs to fulfil in schools (remembering that technology is not new): (i) it should look to improve the teaching and learning that currently takes place and (ii) it should look to challenge the existing teaching and learning and challenge us to change.

    My reading of your post is that you are keener on (i) than (ii), and I find the list given by Morgan rather prosaic, focused on sustaining the existing system rather than asking any critical questions about it. I would like to see a balance between my (i) and (ii) above – for example, there is a real place for software which allows for real-time tracking of data and reporting, reducing teacher workload, but the main drivers of teacher workload (according to Morgan’s own study) were Ofsted and government initiatives. In terms of assessment, we have had the example of technology being used to digitise examination papers and send them to examiners, without asking critical questions about whether sitting in an examination hall for hours writing on paper is a suitable assessment system for a technologically developed society. Do you not feel that the technology should challenge the status quo as well as seek to make it more efficient?

    I also do not find myself in agreement with your list of bloggers – Christodoulou’s book on myths was the worst sort of polemic, and Peal similarly uses evidence poorly and makes significant errors in his book. Whilst I admire Bennett’s enthusiasm in setting up ResearchEd, I have also heard him say that his own book on research was not all he wished it was. These are not a group I find inspiring or whose ideas I would wish to determine the future of education in England.

    • Thanks for your interesting comment.

      You say you want to (i) improve teaching and (ii) change teaching. But to improve teaching *is* to change teaching – so the two points are not alternatives. If you want to change teaching other than by improving it, I guess that this means that you want to change the objectives. But there is an infinite number of ways that we could change the objectives of education, so you then need to say how you would change it.

      My view on what changes to objectives are desirable is that the state system needs to do more to develop things like curiosity, teamwork, initiative, originality and perseverance, in senses already well understood in the private sector, but that this must come in addition to and not instead of academic achievement.

      While anybody can make a case for what changes should be made to the aims of education, and while the teaching profession has an important role in proposing such changes, it is for society more widely to approve the changes. Teachers provide a service to everyone else and it is everyone else, not the service providers, who must set the objectives of the service.

      I do not agree that education technology should drive such changes. I understand technology as the means by which we achieve our ends; it therefore ought to be the servant of our objectives, not the master.

      You could say:

      (i) that you want to teach students new skillsets as a result of changes in society brought about by technology more widely;

      (ii) that education technology enables us to achieve new educational objectives that we would have liked to aim for before but couldn’t.

      In both these cases, the changes must be approved by stakeholders (indirectly represented by government and directly in the form of parents and students).

      I agree that Nicky Morgan’s list of priorities for ed-tech is not exhaustive. In the post above, I have added “improving research” and hinted at a second, “improving the definition of learning objectives” – I think this one would support the sort of wider conversation about the aims of education. Thirdly, and to my mind most critically, I would add “improving pedagogy in the classroom”. But I think that Nicky Morgan’s list is a good starting point and the fact that none of them was addressed by ETAG illustrates their failure to address the remit.

      Nicky Morgan’s own study of teacher workload was just a questionnaire sent out to teachers – so what it showed was how teachers perceive the problem, not how Nicky Morgan perceives the problem. Teachers perceive the problem to be interference by “management” (i.e. by Ofsted & government). But the reason why Ofsted and government are interfering is that standards are too often too low. The easiest way to reduce workload is to lower expectations – so the (predictable) answer that teachers gave to the problem (stop pestering us) is not sufficient. It may well be that pestering by management is not a very effective way of improving standards – and that is why raising standards and improving accountability through ed-tech, rather than by bureaucratic diktat, is an important means of reducing workload without at the same time lowering standards (one hopes while actually raising standards).

      A significant proportion of the teaching profession has a poor understanding of the importance of practice and feedback: levels of appropriate feedback are, in general, very low (see my “Why teachers don’t know best”). One reason for this is that giving appropriate feedback is very labour intensive. Raising the amount of appropriate feedback, by automating the more mechanical and predictable types of feedback and by centralising the creation of good-quality practice activities, is a vital way in which ed-tech can contribute to raising standards without at the same time raising workload. One thing the teacher survey did mention in this respect was preparation time, which is where the availability of good digital courseware, in combination with assignment and sequencing tools, will help.

      All in all, I think the teacher survey got the issue of workload wrong, reflecting teachers’ poor understanding of pedagogy. Good teaching is intrinsically very labour intensive and in order to be sustainable and scalable, it needs where possible to be made more efficient through automation and a better division of labour. Where I do agree with the teacher survey is that endless government diktats don’t help solve the underlying problems.

      We disagree on the merit of my list of bloggers – I find the case that they make persuasive and well evidenced. I have one or two disagreements with Christodoulou, but they are minor. They are:

      (1) that I think she misrepresents Bloom, who does not say that factual recall is the least important aspect of teaching but rather that it is the *first* thing that needs to be addressed in any teaching cycle;

      (2) I have some sympathy with Tom Sherrington (@headguruteacher) that Seven Myths places too much emphasis on the acquisition of facts and not enough on the manipulation and application of facts (what is generally meant by “skill”) – but I do not agree with him that the education system as a whole has got this balance anywhere near right, which is why I welcome Daisy Christodoulou’s book as a useful corrective.

      I have no problem with polemics so long as they are backed up with evidence – I think that where there is a falsehood, you do everyone a service by knocking it down. I agree that Tom Bennett’s Teacher Proof has its flaws – I have blogged about this in “Private intuition: public expertise”. But I think the middle section of the book is valuable precisely because of its vigorous polemic. If I am wrong in my evaluation of Christodoulou and Peal, then show me the reasoned rebuttal. It is not enough to say “that’s all wrong” – someone has to step up to the mark and demonstrate that it’s wrong, as I have tried to do in this post with the ETAG report.

      Wrapping up what has become rather a long answer, I don’t think it is helpful to characterise people as pro- or anti-status-quo. I see that more as a matter of image and tending to polarise the argument. We all want change. The devil is in the detail, particularly in respect of *what* changes you advocate. And it is by looking at the detail and engaging in constructive debate that people actually start to find common ground and consensus, rather than emphasising tribal differences.

  3. Yes – that does help. I do agree that there should be a shift in educational theory and that we’ve not yet seen it through all the factors you list above and particularly through the failure of educational institutions to separate and define what they mean and also because there is such a huge mismatch between educational professionals and software designers. I don’t want to throw all the progressive babies out with the bathwater, though, so I appreciate the ‘middle way’ you suggest. I can see that adaptive systems could and should lend themselves to a degree of ‘independent learning’: pupils should be able to move on at their own pace and access virtual resources as they do, for example, in online games.

    • Hello Juliet, yes I agree with all of this, including the problematic divide between teachers and programmers. My model of ed-tech sees it as delivering (and tracking) a sequence of learning activities. What those learning activities are (knowledge tests, problem-solving or open-ended creative exercises) and how they are sequenced (by student choice, by teacher, by publisher or by machine) will be determined by what our objectives are and what the evidence shows to work best. I suspect that it will be a mix of them all. There are practical issues, like the fact that too much diversification of individual learning pathways might undermine the social aspect of learning, which is one reason why I think bricks-and-mortar classrooms and physically-present teachers will remain vital.

  4. Many thanks for a more than interesting article; there is much to comment on. But just to state the obvious about the subject in relation to the FE & Skills sector (my territory): 75% of organisations do not use appropriate technology simply to ‘manage’ the process of teaching, learning and assessment, making teaching much harder, learning delivery something from another age, and assessment downright archaic. In my experience, the infrastructure rarely has an effective or cohesive means of tracking progress, or a platform for learners to access resources 24/7, and what learners can access usually isn’t worth the paper that it is invariably written on. The technology in question is simply a set of dynamic ‘tools’: simple to use, readily available, cost-effective, ‘proven’ technology. Were this addressed, traditional teaching, learning and assessment would be far more effective for learners used to such ‘tools’.
    The technology in question is used every day by learners, teachers, assessors and, dare I say, senior management. Back to the article: the evidence for doing far more for a lot less in teaching, learning and assessment using appropriate technology has been readily available for several years from the 25% of organisations that have embraced it. The fact that ETAG hasn’t made use of that evidence is inexplicable, but should be incidental to an organisation needing to ‘improve’ and in many cases ‘survive’.

    • Thanks for the comment. I am not an expert on the FE sector – this was the special subject of the FELTAG report, similar in many ways to ETAG but its precursor. As you say, solid evidence of what works, even in modest ways, may be more valuable than blue sky visions at this stage. I would be interested in references to the evidence that you mention – especially if we get a chance to revisit the ETAG process.

      For what it is worth, the big problem that I see with using data-driven progress monitoring systems is data entry (search for “Achilles heel”). That is why I think (perhaps being a bit blue-sky for a moment) that really good central management systems ultimately depend on good instructional content that captures data automatically, just as supermarket logistics systems were ultimately dependent on the bar-code reader.

      Many thanks and, as I say, stay in touch if you think you have good evidence of useful practice. Crispin.

      • “good central management systems ultimately depend on good instructional content that captures data automatically, just like supermarket logistics systems were ultimately dependent on the bar-code reader.” Exactly! We are just in the process of inputting our latest data. The data itself is meaningless because it is based on a subjective assessment of whether or not pupils have ‘attained’ an objective. This is then converted to the percentage of objectives met in order to assign a grade (‘emerging’, ‘meeting’, ‘exceeding’ etc.). There are so many things wrong with this that it is not worth going into now, but not the least is the amount of time it takes simply to put this data into the system manually.

  5. I will collate something appropriate and send it; suffice to say there is an abundance of evidence which, if chosen carefully, applies equally well to HE as to FE & Skills, ACL etc.

  6. Hi – having read the article relating to a Jisc conference from 2012, I must say that in my opinion it is somewhat out of date today, as indeed it already was in 2012, at least in the context of ETAG. What Achilles heel? To be fair, smartphones and tablets were not then that common, despite the fact that many learning providers were using them as one of the ‘evidence’-gathering devices used by learners, from which material was uploaded to a (good) learner management system, which enabled teachers, assessors, management, parents, employers etc. to track, advise, mentor and communicate. One device – one LMS. I, like many others in FE, was using that process in 2007. Likewise with digitising learning resources, the heart of the FELTAG report: to make a high percentage of learning resources available online – or else. It isn’t a mystery; well, maybe it is to some of those in charge – perhaps they need to get out more.
    Apologies – no need to publish, but I will send more information separately.

  7. I can only applaud and admire the effort and energy you put into this analysis of the report Crispin. Knowing exactly what to expect, I took the view that it would be a waste of time reading it. But it gives me no pleasure to discover I was right.

    I suspect you’ll find the recent post (link below) on Larry Cuban’s blog interesting for the key definition Randy Weiner attempts. Personally, I think the relationship between the industry and the zealots will remain a serious problem which continues to baffle government, until many more serious academics and scholars get interested.

    • Hello Joe, I am sorry I am so slow to respond to your kind comment. I was also very interested in your link to Larry Cuban’s blog, where I left a long message. I very much agree with Randy Weiner’s diagnosis of the problem, though I have some reservations about his proposed way forwards. Many thanks anyway for the comment. Crispin.
