A summary of my recent contributions to the debate around ETAG
To many on the ETAG committee, I am undoubtedly seen as an awkward and disruptive influence—something like the know-it-all who keeps putting their hand up at the back of the class. They have already changed the medium through which their consultation is conducted, shifting the emphasis away from Twitter (which provides me with the opportunity to challenge poorly justified contributions) to an online form, which keeps submissions private and unchallenged. At the same time, the committee maintains a stony public silence in the face of my arguments, while one prominent member of the committee complains (in a context in which he is clearly referring to me) of the activities of “trolls, spammers, abusers, and self-publicists”.
Well, as its formal consultation finishes, it is time that ETAG publicly acknowledged the debate and made a serious, substantive response to the criticisms that I and others have raised. Because if they do not, it is increasingly clear that their report will be ignored by government, just as the FELTAG report was effectively ignored before it. This post contains a list of links to my various substantive contributions to the debate, most of which are on other people’s websites.
A critique of the ETAG base document
Most discussions require some sort of stimulus material to focus the minds of respondents and give them questions to answer, and such questions cannot avoid containing assumptions. But it is desirable for any consultation to try to minimize the extent of these assumptions in order to avoid prejudicing the responses.
The ETAG base document does the opposite, publishing a set of highly questionable assumptions that are explicitly stated to be beyond the scope of the consultation to question. So in the first half of my article What ETAG should say, published on this blog, I provide a critique of the ETAG base document.
My policy proposals for ETAG
This is a three-page document containing my policy proposals for ETAG. It is reprinted in the second half of the What ETAG should say article (above) – but the link given here will take you to the PDF version.
The policy proposals are based on the assumption that, while we do not yet know how ed-tech will transform education, it is a safe bet that we will need robust, education-specific technology, which will have to be produced by industry. This means that government’s focus should not be on funding supply-side initiatives—a strategy that has been tried so many times before with no success—but rather on stimulating a healthy market for education technology products, doing a certain amount to stimulate demand. Above all, it must avoid perpetuating orthodox, consensual views of ed-tech, which have achieved so little but have cost so much.
Rescue-ETAG—say no to FELTAG
Another post on this blog, providing a robust critique of the FELTAG report and warning of the danger of importing the conclusions and assumptions of FELTAG, unexamined and uncriticised, into the ETAG process.
From FELTAG to ETAG
Article first published in Terry Freedman’s Digital Learning magazine, and subsequently republished on his blog. A short summary of my disagreements with the FELTAG/ETAG assumptions and a call for the members of the committee to enter a genuine debate, which will lead to a consensus that will stick.
Big data, tools and learning design
I agree with Bruce Nightingale (@brckngh) that the harnessing of data lies at the heart of the contribution that technology will make to improving education. But as Bruce points out, the experience of the InBloom project in the US shows that this is not a shoo-in, not even a slam-dunk. At the moment, we have very little useful data in education. We need the tools to generate this data, the interoperability to capture and share the data, and a data-handling regulatory framework to enable that sharing to occur without abusing the privacy of students.
I had a conversation with Professor Diana Laurillard (a member of the ETAG committee) in the comments section of a Guardian article that she wrote, which mainly concerned the production of the tools that we need to generate and to analyse this data.
This conversation was then continued on the blog of Kristen DiCerbo, research scientist at Pearson, prior to a webinar that Kristen gave on her recent paper “Impacts of the Digital Ocean on Education”.
I agree with Kristen about the importance of specialised technical and analytical expertise in the development of ed-tech (something that Diana Laurillard acknowledges but to which, in my view, she gives insufficient emphasis in her headline message). I also share the vision, presented by both Laurillard and DiCerbo, of the importance of teachers being able to share learning designs and of the potential to create adaptive learning pathways based on data analytics. But I disagree with DiCerbo’s assumption that we live in a “Digital Ocean”, in which data already exists in sufficient quantity and quality or, to the extent that it does exist, resides in a single pool. The data is held in diverse, secure silos and therefore needs to move to where it can be legitimately exploited. In addressing this requirement, interoperability, working within a proper regulatory framework, is critical.
On the question of technical interoperability, there is a long history here of work done in what might loosely be called “the SCORM community”, centred on technical interoperability standards adopted at the end of the 1990s by the US Department of Defense, inherited in large part from the Aviation Industry CBT Committee (AICC). While the mention of military training tends to make the hairs on the back of many educationalists’ necks bristle, in fact much military training has historically not only been well-funded but has often been based on game-like simulations, with a strong emphasis on problem solving and teamwork—features that those same educationalists profess to value. A thorough understanding of the history of SCORM and the development of its most recent descendant, TinCan, is therefore essential background if you are to understand the problems and pitfalls in this area and avoid repeating the same mistakes.
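For readers unfamiliar with what such a standard actually specifies, the Python sketch below shows the rough actor/verb/object shape of a TinCan/xAPI activity statement: a record of who did what, identified in a way that lets different systems capture and share it. It is only an illustration of the general shape of the specification, not a definitive implementation, and the learner, activity identifier and score are hypothetical examples of my own, not taken from any real system.

import json

# A minimal, illustrative sketch of the actor/verb/object shape of a
# TinCan/xAPI statement. The learner's mailto address, the activity IRI
# and the score below are hypothetical examples, not taken from any
# real system.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.org",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-GB": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.org/activities/fractions-module-1",
        "definition": {"name": {"en-GB": "Fractions module 1"}},
    },
    "result": {
        "score": {"scaled": 0.85},
        "success": True,
        "completion": True,
    },
}

# In practice a statement like this would be sent to a Learning Record
# Store, which is what allows activity data to be captured and shared
# across tools; here we simply serialise it for inspection.
print(json.dumps(statement, indent=2))

The point of the example is simply that the standard defines a common vocabulary for activity data—exactly the kind of plumbing that has to exist before data can move out of its silos to where it can be legitimately exploited.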
Why it doesn’t help to focus on student control
None of the argument above has touched ETAG, which seems to have been spending its time wondering how students can analyse their own learning data—a scenario that surely comes from the pages of Alice in Wonderland.
I responded at some length to a piece by Simon Knight and Simon Buckingham Shum, who argue for student control of their own data. As well as the key point about student control of data, the two Simons also argue for a “what works clearing house”, where people could look up technology-enhanced pedagogies that had received some sort of seal of approval. My objections to this idea are:
- if the criteria for inclusion in the clearing house were rigorous, there would be nothing listed;
- if not rigorous, all the clearing house would do is give pretended authority to quack prescriptions.
The way to establish “what works” is by rigorous quantitative evidence; the way to replicate such conclusions is by the production of effective ed-tech products that encapsulate the process-dominated aspect of the pedagogy; and the way to share views of “what might work” is by contested debate, not by the pronouncements of a new magisterium.
The significance of the teacher in the classroom
In an article on the TES site, Dominic Norrish argued for the benefits of becoming a “tech-savvy teacher”. I challenged the view that it is the teacher who represents the main lever for change. What followed was a detailed and constructive conversation which, Dominic was kind enough to say, “has helped evolve my thinking massively”.
The evidence for the successful use of ed-tech to date
Mike Cameron contributes a sceptical piece that questions the value of government prescriptions and techno-hype. If we ask:
“So what will all this technology be doing [in 10 years time]? The answer to that question is that we don’t really know yet.”
This means that most of the promises currently being made about the influence of technology will not happen. Mike suspects that the real influences will focus on the need to save money and will be driven by the market.
Sam Shepherd, who admits to having been convinced by the techno-zealotry of the mid-2000s, posts a feet-on-the-ground view of how the current generation of ed-tech is really used in today’s classrooms, providing a useful reality check to those on the ETAG committee who take it for granted that ed-tech is already transforming education.
Nevertheless, in my response to Sam’s piece, I do not agree that the limited achievements of ed-tech to date prove that that is all that technology can do. The correct response is not to abandon hope but to do things better—specifically by producing the technology that is going to make an impact in the classroom.
Sam’s sceptical position is supported by Joe Nutt, previously Senior Education Specialist at RM and principal consultant at CfBT, who is kind enough to assess my piece, Rescue ETAG—say no to FELTAG, as “one of the most well informed and insightful commentaries on this issue I have read in a very long time”, commenting on “the complete and utter disconnect between investment and educational benefit, for about a decade”.
In response to the same article, Roger Broadie argues (in my view incorrectly) that we need not give any weight to the lack of quantitative evidence of the effect of ed-tech on learning outcomes—if you follow the link, note that there are two rounds to the conversation.
The most authoritative statement of the evidence for the impact of digital technology on learning outcomes is the Education Endowment Foundation’s 2012 report, The Impact of Digital Technology on Learning. Its Executive Summary states that:
“Taken together, the correlational and experimental evidence does not offer a convincing case for the general impact of digital technology on learning outcomes.”
and
“Research findings from experimental and quasi-experimental designs – which have been combined in meta-analyses – indicate that technology-based interventions tend to produce just slightly lower levels of improvement when compared with other researched interventions and approaches.”
Until the significance of these findings is acknowledged by ETAG, they cannot be judged even to have reached square one. It is clear that this problem with evidence is one that has been heard by Matt Hancock, who is said by Dominic Norrish to have left the recent launch of the Education Foundation’s Technology in Education report
“commenting on the need for evidence (cutting through all the anecdotes and war stories…)”
The problem that ETAG has, on which the whole of their strategy will now hinge, is that there isn’t any evidence other than “anecdotes and war stories”. This doesn’t mean that we cannot move forwards or that a good report cannot still be written—but it can only be written if everything that FELTAG and ETAG have done up to now is torn up and thrown in the bin, so that they start again from first principles.
No to interventions on the supply-side, focusing on hardware
Ben Barton, CEO of Zondle, published a contribution emphasising the need for regulation to ensure 1:1 computer ratios throughout the system and digitisation of all exams, and to speed up the replacement of our bricks-and-mortar estate with online-only provision. I disagree fundamentally with this approach and commented on Ben’s post, which Ben has been kind enough to publish, despite being urged by Bob Harrison, doyen of ETAG, to:
“Just ignore it Ben…otherwise you risk the personal abuse and online bullying which will follow! refuse to engage with those who cannot see there is no place for abusive and personal attacks. They disqualify themselves! I will chat with anyone EXCEPT those who resort to personal abuse who undermine their arguments and disqualify themselves!”
This was followed shortly afterwards by a “You know who!” accusation to the effect that I am an angry troll who is not very bright and whose arguments amount to nothing more than sophistry and point-scoring.
The place of entitlement
Despite my opposition to regulatory approaches in general, and having harrumphed at Mark Chambers last night about ETAG’s clear intention to introduce an entitlement to online learning (a proposal I continue to oppose), I have modified my position on entitlement since making my first comment on Ben’s piece.
I now think it would be helpful if government could announce an entitlement, to be introduced in September 2017, for all students above a certain age (Y5 perhaps) to have access to a digital device in class, where this is judged by their teachers to be beneficial to learning. This entitlement would be logically equivalent to giving teachers the right to teach classes with 100% 1:1 access to digital devices. Although I am in general opposed to government-created specs, in this case the entitlement would need to be backed by a minimum spec for what is meant by “a digital device”. The entitlement should be monitored by OFSTED, who otherwise should be cautious about making prescriptions about how those devices should be used.
I still hold to my position, expressed in the first comment on Ben’s post, that:
- we are headed for 1:1 by 2017 in any case;
- access to hardware is not sufficient, as we need appropriate software to run on those devices.
The first point makes the declaration of an entitlement a little like Saint-Exupéry’s Le Petit Prince, in which the king commands the sun to set, but only when the sun is setting anyway. He is a pragmatic king who only issues orders “when the conditions are right”.
This may be what politicians do most of the time. But that doesn’t mean that it does not serve some purpose:
- it serves a purpose of equity, ensuring that even the most disadvantaged students have access to the same technology that the majority do;
- it makes a political point, much more newsworthy than any of the proposals that I have made about interoperability, and encourages both industry and the profession (whose inertia in these matters we all regard as a problem) to respond early to the existence in schools of 1:1 computer ratios;
- it encourages the development of the market for education-specific software in anticipation of the imposition of the entitlement.
I see this last point as being critical—and a key reason why any entitlement should not be introduced too quickly. One of the main problems with Curriculum Online (Charles Clarke’s 2002 digital ed-tech voucher scheme) was that too much money was thrown at the industry too quickly, creating waste and large quantities of shelf-ware. Don’t rush the foreplay: much of the benefit of the entitlement will occur before it is introduced.
It is also very important that the entitlement is not to “online learning”, BYOD or any specific pedagogy. It must be left up to teachers and suppliers to work out between them “what works”. But that beneficial, market-driven process may well be helped if the basic prerequisite—access to hardware—is publicly guaranteed.