I already know what my new year’s resolution will be. As well as losing a stone in weight (the same resolution every year), it will be to stop writing almost exclusively on why education technology has so far failed to transform education, and to focus more on arguing how education technology will transform education, when it is properly implemented. As the song has it:

You’ve got to accentuate the positive
Eliminate the negative
Latch on to the affirmative
Don’t mess with Mr In-between.
The predominantly negative copy of 2012 has been no more satisfying to write than I imagine it has been to read. But it has been necessary. It is impossible to make progress with a cogent argument for how education technology will transform education while most of the community accepts as self-evident half-baked notions of “independent learners” and “21st century skills”, believes that creativity is possible without knowledge, or that testing is a dirty word. Before making a start on constructing the new you need to demolish the old.
That will be my resolution on 1st January—but for the last few days of 2012, I will follow the prayer of St Augustine (“Lord make me chaste but not yet”) and take one last swing with the old ball and chain.
A bubble is a fashionable idea (or investment) which becomes ever more popular mainly because of its existing popularity. This self-inflating dynamic allows bubbles to expand at an exponential rate, floating away from any solid ground of reality, their substance becoming ever thinner until they eventually pop.
Even though bubbles are deceptive, their existence may nevertheless be significant. The first famous example was the South Sea Bubble of 1720—an investment scam based on a company to which the British government had granted a monopoly to trade with South America, even though war with Spain meant that trade was impossible. Although the company shares were sold on a false prospectus, the scheme was popular because the proposal that money was to be made in overseas trade was plausible. The subsequent history of British global commercial expansion in the eighteenth and nineteenth centuries was to prove that this perception was justified in the long-term. A similar point could be made about the dot-com boom of the late 1990s: although a lot of money was lost in the short-term frenzy, the perception that the internet was going to transform business was fundamentally correct.
In describing in some detail three prominent ed-tech bubbles, I shall cover not only:
- a bit of background;
- why the bubble will pop;

but also:
- the prerequisites required to achieve a more substantive development;
- what its long-term significance might be.
Massive Open Online Courses (MOOCs)
Of all the bubbles listed in this post, only MOOCs really deserve the label in terms of attracting real investment. After 160,000 students enrolled for Sebastian Thrun’s course on Artificial Intelligence in the autumn of 2011, three new MOOC-delivery platforms were established in 2012: Udacity was set up by Thrun with more than $15m venture capital; Coursera was set up by two Stanford professors with $16m of venture capital; and edX was set up by MIT and Harvard with $60m funding provided by the founders. In December 2012, a consortium of UK universities led by the Open University declared its intention to set up FutureLearn to rival the US initiatives.
The phenomenon has aroused excitement, not only in ed-tech circles but also in the mainstream press. The New York Times declared that 2012 was “The year of the MOOC” (2 November 2012), while the Economist concluded that “Online courses are transforming higher education, creating new opportunities for the best and huge problems for the rest” (22 December 2012).
Why the bubble will pop
The following is adapted from a comment I left on the Economist article.
MOOCs do not work, either commercially or pedagogically.
much of what’s being lauded as ‘revolutionary’ simply involves videotaping lectures and putting them online
As the Economist article acknowledges, MOOCs do not yet have a business model. The fact that they are free is nothing to get excited about – it is easy to give things away until the money runs out.
When it comes to pedagogy, the article is hopelessly optimistic. It skates over the drop-out rate (which is in the order of 90-93%), blaming this on the lack of a proper qualification (which students will discover at the end of the course, if they did not realise it before they started). It should blame the drop-out rate on the poor pedagogy, which is what they discover during the course.
The Economist article states that:

MOOCs are more than good university lectures available online. The real innovation comes from integrating academics talking with interactive coursework, such as automated tests, quizzes and even games.
But, like the Khan Academy before it, “lectures available online” is predominantly what they are. As Audrey Watters writes in a characteristically well-researched post, “much of what’s being lauded as ‘revolutionary’ and as ‘disrupting’ traditional teaching practices here simply involves videotaping lectures and putting them online”. Audrey attended 15 MOOCs in 2012, so she should know.
The lesson of the UK Open University and University for Industry is that while distance learning addresses the problems of the isolated learner (predominantly the adult in the workplace), it does not come cheaper than face-to-face delivery owing to the need for expensive one-to-one tutoring support.
In a speech at Online Educa Berlin in November, Gary Matkin, Dean of Continuing Education at the University of California, commented that many universities were signing contracts with the MOOC companies that were not compatible with their status as leading universities, in that they were losing control of the quality of the courses that they were agreeing to certify.
Robert Cummings, Assistant Professor of English at the University of Mississippi said that many universities were getting into these contracts due to the advocacy of individual faculty members, who were motivated by a combination of vanity and a desire to publicise their own books. He summarised, “Everyone wants to jump on-board and no-one is quite sure what they are jumping onto”.
Before jumping on-board, the MOOC-enthusiastic universities should consider the words of William Cory, Assistant Master at Eton, who wrote in 1861:

You go to school at the age of twelve or thirteen and for the next four or five years you are engaged not so much in acquiring knowledge as in making mental efforts under criticism.
If this is true of schoolboys in 1861, how much more true should it be of undergraduates in 2012. And yet those following MOOCs have no opportunity to make mental effort under the criticism of anyone except their peers—i.e. no-one who can be relied on to have domain knowledge superior to the learner’s.
an academic education is not equivalent to a trip to the public library, digital or otherwise
Diana Laurillard, Professor of Learning with Digital Technologies at the London Institute of Education, highlights the same misconception in a book published earlier this year:

There is a danger that technology could undermine formal education…Arguments against formal education are now current again but, uninformed by any understanding of the theory of teaching and learning, they plunge us back into traditional approaches. Technology opportunists who challenge formal education argue that, with wide access to information and ideas on the web, the learner can pick and choose their education – thereby demonstrating their faith in the transmission model of teaching. An academic education is not equivalent to a trip to the public library, digital or otherwise. The educationalist has to attack this kind of nonsense, but not by rejecting technology.
The key challenge is scalability. Peer mentoring (discussion groups and grading each other’s essays) might be a good ingredient in a broader mix, but without the long-stop of proper tuition, it doesn’t offer a sufficiently authoritative and informed conversation. When none of the peers is an expert, there is too much risk of misconceptions and bad habits becoming established within the cohort.
The staple that is required to handle large numbers of students successfully is machine-instruction, blended with expert tuition. Machine-instruction is what the Economist refers to in passing as “automated tests, quizzes and even games”. The critical aspect is not the “testing” itself but the analysis of the test results and the ability to offer appropriate feedback, guidance and adaptive instruction. In an environment which blends digital and human instruction, the learning analytics layer should produce recommendations for human intervention. You could have a seminar (maybe delivered online), with its topic, learning objectives, participants and tutor all automatically selected by a data-driven learning management system.
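To make the idea concrete, here is a minimal sketch of such a recommendation layer. Every name, threshold and mastery value below is invented for illustration; no real system or standard is being described.

```python
# Hypothetical sketch: an analytics layer recommending a human intervention.
# Names, thresholds and mastery values are illustrative, not a real API.

def recommend_seminar(students, topic, threshold=0.6):
    """Select the students whose estimated mastery of a topic falls below a
    threshold, so that a tutor-led seminar on that topic can be scheduled."""
    group = [name for name, scores in students.items()
             if scores.get(topic, 0.0) < threshold]
    return {"topic": topic, "participants": sorted(group)} if group else None

# Each student's record holds per-topic mastery estimates (0.0 to 1.0),
# as they might be produced by an underlying analytics layer.
students = {
    "Ana":  {"fractions": 0.45, "algebra": 0.80},
    "Ben":  {"fractions": 0.72, "algebra": 0.30},
    "Caro": {"fractions": 0.55, "algebra": 0.90},
}

print(recommend_seminar(students, "fractions"))
# -> {'topic': 'fractions', 'participants': ['Ana', 'Caro']}
```

The point of the sketch is that the machine does the selecting, but the intervention itself (the seminar) remains human.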
until now education technology has been regarded by the mainstream press as a backwater, inhabited by starry-eyed visionaries
These are technologies which have not yet been developed, largely because education (at least in Europe) is funded by governments giving money to teachers and academic educationalists, who are very good at spending it on their own research projects but who have been disconnected from, if not actively hostile to, the industry responsible for the supply of education-specific software and other technology.
The first (and not to be underestimated) achievement of MOOCs is to get the New York Times and the Economist interested in education technology. Until now, however crazy government policy was in this area, it has been regarded by the mainstream press as a backwater, inhabited by starry-eyed visionaries and not by serious investors. The venture capitalists who have backed Udacity and Coursera may well get their fingers burnt—but at least they or their successors will bring some feet-on-the-ground business analysis to the task of finding out what went wrong.
The failure of MOOCs will concentrate minds on the prerequisites for success. In the long term, we can hope that the movement will lead to proper R&D that is commercially funded and responds to market requirements.
Learning analytics

Learning analytics (which I take to be synonymous with Educational Data Mining) has been attracting significant interest over the last couple of years. The Society for Learning Analytics Research (SoLAR) has been holding annual conferences since 2010. In 2011, the New Media Consortium’s (NMC) annual Horizon Report for emerging technology in Higher Education predicted that learning analytics would be a key technology for education in 4–5 years, and in 2012 it reduced the timescale to 2–3 years. In October 2012, the US Department of Education published a significant briefing paper on Enhancing Teaching and Learning through Educational Data Mining and Learning Analytics. The US National Science Foundation and the European Commission are both investing in academic research into learning analytics; and in the commercial sphere, Pearson and McGraw-Hill are both investing in learning analytics and adaptive learning systems.
analytics is predicated on “big data” but in education, big data will not exist until we sort out the current failure of interoperability
Learning analytics aims to apply to learning the same techniques that online companies now use to target their marketing. By spotting patterns in the data produced by students’ online learning activity, learning analytics systems should be able to help:
- predict student progress;
- inform adaptive learning strategies (sequencing digital learning activities or recommending human interventions);
- profile a student’s current capabilities;
- automatically group students, depending on their learning needs;
- identify the most effective learning strategies in different situations;
- aggregate and present complex data in ways which help administrators, teachers and students manage instructional processes.
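As a toy illustration of the first and fourth of these ambitions, the following sketch derives a predicted risk score from a few engagement metrics and groups a cohort accordingly. The metrics, weights and student data are all invented; a real system would learn its weights from data rather than hard-code them.

```python
# Hypothetical sketch of a predictive "at risk" score and automatic grouping.
# Metrics and weights are invented for illustration only.

WEIGHTS = {"attendance": 0.5, "assignments_submitted": 0.3, "quiz_average": 0.2}

def risk_score(metrics):
    """Weighted average of normalised engagement metrics (each 0.0 to 1.0);
    lower engagement translates into a higher predicted risk."""
    engagement = sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)
    return round(1.0 - engagement, 2)

def group_by_need(cohort, cutoff=0.5):
    """Split a cohort into students predicted to need support and the rest."""
    at_risk = {s for s, m in cohort.items() if risk_score(m) >= cutoff}
    return at_risk, set(cohort) - at_risk

cohort = {
    "Juliana": {"attendance": 0.3, "assignments_submitted": 0.2, "quiz_average": 0.4},
    "Tom":     {"attendance": 0.9, "assignments_submitted": 1.0, "quiz_average": 0.8},
}
print(group_by_need(cohort))
# -> ({'Juliana'}, {'Tom'})
```

Even this toy makes the later argument visible: the prediction is only as good as the input metrics, which is where the interoperability problem bites.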
Why the bubble will pop
Because this is a “research bubble” and not an “investment bubble”, the failure of the current round of predictions is unlikely to be as obvious as I suggest it will be for MOOCs. It is more likely that the current round of projects will finish, everyone will pat each other on the back, bank their research grants, add another project to their CVs, and move on. If you doubt that the concept of the “bubble” is applicable to the academic world, consider carefully the appeal made by Professor Erik Duval to his academic audience at the recent ITK conference:

So hey, one reason to try and take notice is you may be able to apply for some funding and do some work on learning analytics, even though, you know, you may not be completely sure what it is.
The reason that learning analytics will not deliver on its promise is that analytics is predicated on “big data”. In education, big data does not yet exist and will not exist until we sort out the current failure of interoperability. This is an argument which I have made at some length in an earlier post, Learning analytics for better learning content.
The crippling effect of poor interoperability on learning analytics was illustrated by the first of the SoLAR online webinars, given by Chris Ballard, who presented the system developed by Tribal for use in Higher Education. This produces a single output “at risk” metric for HE students, based on five input metrics, shown in the screenshot below.
In the second screenshot, a fictional student, Juliana McWilliam, is categorised as being “at risk” in all of her five current modules.
Such an “at risk” register may be of some use in a high-volume education system in which students have little personal contact with lecturers—but I find it hard to imagine that in any halfway-decent university, the fact that Juliana was struggling would not already be well known to her supervisors. Even if this data were useful, the metric produced is simplistic and one-dimensional. There is no attempt to profile Juliana’s different abilities, to suggest what sort of interventions (other than a one-size-fits-all kick up the backside) would be appropriate, or to gauge the success of different instructional strategies. If this is learning analytics, then it is clear that we are at the stage of taking the first, faltering steps.
The problem in the case above does not lie in the modelling being done by the developers at Tribal but in the lack of data which they can access. The lack of data comes down to the lack of interoperability standards which would allow semantically meaningful data to be harvested from educationally significant activities.
The video of Professor Duval’s presentation of learning analytics illustrates exactly this point. At 16:13, he shows the metrics being tracked, which cover the extent to which a student is using a variety of blogging and discussion tools. This is a bit like marking an essay by its length. Learning analytics only works as a layer in a broader architecture in which learning activities are making educationally-significant judgements about student performance, and passing those judgements on in the form of semantically meaningful data.
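The contrast might be sketched as follows. Both record formats are invented purely to make the point, since no agreed interoperability standard yet defines such judgements:

```python
# A raw usage metric: easy to collect, but educationally almost meaningless.
raw_metric = {"student": "S042", "blog_posts": 17, "forum_messages": 42}

# A semantically meaningful judgement: the kind of record an interoperable
# learning activity would need to emit before analytics could say anything
# about learning. This format is hypothetical; no standard yet defines it.
judgement = {
    "student": "S042",
    "activity": "essay-draft-3",
    "competency": "constructing an argument from evidence",
    "outcome": "partially demonstrated",
    "assessor": "machine-marked, tutor-moderated",
}

# Analytics over the first record can only measure the volume of activity;
# analytics over the second can begin to reason about what was learnt.
print(judgement["competency"], "->", judgement["outcome"])
```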
Erik Duval himself recognises this problem. Having listed all the quantitative metrics that the current systems are tracking, he comments (21:31):

…and that’s sort-of interesting…but of course, what we really like to track is how well they learnt, and that’s much more difficult.
From the perspective of learning analytics, creating good input metrics is more than difficult—it is out of scope.
Learning analytics is critical to the business of education: to that extent, the learning analytics community has got it right. It is not that the advocates of learning analytics are wrong—it is that, like the advocates of MOOCs, they are premature.
If the organisations currently funding research on learning analytics are sufficiently self-critical, then they may even come to ask themselves why the projects they are currently setting up failed. If they do ask themselves that question, then we may start to see money being invested more wisely in five years’ time.
However, there is at least an equal chance that the funding organisations will draw what would appear to be the safer conclusion from a future failure of research: that analytics is not applicable to education. Such a conclusion could set back the progress of education technology another decade.
Open Education Resources
According to Wikipedia, the term “Open Education Resources” (OER) was first formally recognised by UNESCO’s 2002 Forum, which was held in response to the Massachusetts Institute of Technology Open Courseware project. This was initially funded by the William and Flora Hewlett Foundation, the Andrew W. Mellon Foundation, and MIT itself. Costing about $4 million a year, the MIT Open Courseware project is scheduled to run out of funds in 2014.
The OER movement has spawned many other initiatives. The Khan Academy was established in 2006, funded principally by the Bill & Melinda Gates Foundation and Google. In 2008, the Cape Town Open Education Declaration urged governments around the world to make education resources available free of charge. Many responded, not only aspiring to make education available to developing countries, but also perceiving that OER represented a cost-effective means of providing learning resources to their own education systems. In 2003, the UK government approved the abortive BBC Jam project, worth £150 million; [deleted text] the European Commission has been prominent in funding OER projects, announcing at a recent Ministerial conference in Norway its intention to launch a major new strategy, “focused on the use of ICT and open educational resources (OER) to enhance education and skills development”, by mid-2013.
in nature, weak animals die but in a government-funded OER ecosystem, useless resources that nobody wants survive and multiply
The “open” in OER is defined by Wikipedia as “freely accessible, openly formatted and openly licensed”. But there is no such thing as a free lunch. OER has absorbed significant amounts of funding in what is still a very immature market for digital content.
Why the bubble will pop
OER represents a bubble for the same reason as the MOOCs that grew out of the same movement: the quality of the resources themselves and of the pedagogies they represent is poor. A document released by the JISC in support of a UK HE OER project explains that resources “can take the form of text, images, audio and video, and may even be interactive”. The fact that interactivity (so essential to the process of learning) is claimed as a rare bonus reveals the dreary truth: that the vast majority of resources are expositive. The assumption made by such programmes falls into the naïve fallacy highlighted above by Diana Laurillard, that education is about the dissemination of information. It is the same fallacy that is promulgated by the original Cape Town declaration, when it declares that:

Educators worldwide are developing a vast pool of educational resources on the Internet…creating a world where each and every person on earth can access and contribute to the sum of all human knowledge.
The development of online information is the achievement of the World Wide Web: the job of educators is not to add yet more information but to manage the interactive process of learning how to use that information.
The failure of this type of initiative to achieve, in the words of Diana Laurillard, “any understanding of the theory of teaching and learning”, is concealed by the assumption of a certain moral superiority amongst the OER community. OER is supposed to save the honest practitioner from the piratical depredations of the commercial salesman, while, according to David Wiley, a leading advocate of open education, quoted in the JISC paper:

Openness is really the only means of doing education [because] if I’m not sharing what I know, if I’m not giving you feedback, if I’m not engaging in this give and take with you there is no education.
This piece of sophistry confuses a decision about how you want to fund a service with the need for good feedback within a learning process—a critical element of any instruction. This conflation of business models with pedagogy is a surprisingly popular piece of muddled thinking: many people talk about “learner generated content” as a subset of OER, when what they are really referring to is student product: artifacts which are created by a student as an output of an instructional process but which are neither intended nor useful as an input for new processes.
The lack of commercial investment in many OER resources is aggravated by the fact that, as no-one has to buy anything, there is no automatic quality filter. In nature, weak animals die; in an efficient market, companies producing poor quality products go out of business; in a government-funded OER ecosystem, useless resources that nobody wants survive and multiply. This problem is recognised implicitly by the briefing paper produced for the European Ministerial meeting in Oslo in December 2012 (written in typically opaque Eurospeak):

The very large heterogeneity of Open Educational Resources available and types of contributors to it, is leading to an increased difficulty in adopting quality standards and quality assurance tools. For the users identifying quality resources and/or sources may be a very difficult constraint to overpass. It is important establishing quality parameters for OER, as well as assessment and certification processes which can lead to the validation of skills acquired through OER.
This extract not only supports my assertion that OER is being undermined by poor quality. It also demonstrates why the OER bubble is such a tough one to pop—a consummation that in this case is devoutly to be wished. Instead of being funded by venture capitalists, who will at least learn quickly from their mistakes, OER is increasingly funded by governmental bodies, whose very existence depends on the continued funding of these programmes. Instead of recognising the error of funding supply-side initiatives, these bodies propose to introduce even more government-funded programmes, introducing “quality parameters” and “assessment and certification processes” that will validate skills, not for their own sake, but for the fact that they were “acquired through OER”.
The state funding of OER is nothing short of a racket. A majority of those who are sufficiently closely involved to know what is going on are themselves implicated—this includes most of the Higher Education community who, under the pretence of being consumers of useless OER are in fact being funded to produce useless OER (or conduct associated research). That is the reason that no-one (not even the academics who advise governments under the pretence of being ed-tech experts) is prepared to come clean and admit that these programmes have achieved next to nothing and should be abandoned.
the state funding of OER is nothing short of a racket
OER cannot be isolated from a much wider and more troublesome “bubble”: the tendency of all public sector projects that are not subject to rigorous political control to grow, regardless of whether or not they are productive. The OER bubble will only ever “pop” for the same reason that this wider bubble will: the need to implement radical austerity programmes in order to avert the real and present danger of sovereign bankruptcy.
What is pernicious about OER is its funding by government and misguided charitable foundations. There is nothing wrong with OER itself, so long as its production is not funded externally.
Unfunded voluntary production may be expected to occur naturally: OER may be produced by commercial organisations in order to achieve publicity; by enthusiasts for personal kudos; and by practitioners in order to save time through collaboration with peers.
In none of these cases will the production process attract substantial investment and for this reason, no-one should look to OER as a means of addressing significant deficiencies in current forms of education technology. The critical—and still largely missing—characteristic for effective learning content is interactivity (see my earlier post, What do we mean by “content”?). This will be provided by commercial software, developed in response to a healthy ed-tech market. One particular type of educational software that needs to be developed will be high-level authoring tools that embed complex interactivity and good pedagogical design. Once the heavy lifting has been done by commercially funded R&D, users of such software will be able to share the content that has been authored by such tools, confident that this content will at least be sound technically and pedagogically. Community networks will have no trouble recognising academic quality through the management of the reputation of the academic (and frequently volunteer) authors.
I have made the argument for high-level authoring tools in an earlier post, Learning analytics for better learning content. That post also highlights an associated and fundamental prerequisite for any content that is to be used productively in formal education: technical interoperability.
The significance of the government funding of OER programmes is entirely negative, discrediting education technology by flooding the market with poor quality materials and undermining the commercial incentives required for the emergence of the ed-tech market.
There are other intellectual bubbles that I could list, such as e-Portfolios, serious games, and adaptive learning. Most of these are similar to learning analytics: they contain good ideas which are still premature, principally because we lack genuine interoperability and a market that allows the development of robust, education-specific technologies.
e-Portfolios will not work without:
- creative tools that can interoperate with learning management systems to automate the assignment of creative activities and the handling of creative product through a complex cycle of review, redrafting, assessment, reflection and sharing;
- competency definitions against which student-created artifacts can be assessed.
Serious games will not work without:
- run-time protocols that allow games to be assigned and launched automatically, allocating students to particular groups and roles;
- run-time protocols that allow outcomes to be reported automatically to learning analytics systems.
Adaptive learning systems will not work without:
- protocols that clearly define and allow the automatic launch of all kinds of learning activity;
- learning analytics systems which are capable of tracking student learning through an instructional process, creating the predictive profiling information required to control selection of appropriate learning activities.
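Granted those two prerequisites, the core adaptive loop itself is easy to sketch. Everything below (activity names, skills, mastery values, and the simple “challenge zone” selection rule) is hypothetical, invented only to show the shape of the mechanism:

```python
# Hypothetical sketch of the adaptive-selection loop that such protocols
# would make possible. Activities, skills and mastery values are invented.

ACTIVITIES = [
    {"name": "fractions-drill", "teaches": "fractions", "difficulty": 0.3},
    {"name": "fractions-game",  "teaches": "fractions", "difficulty": 0.6},
    {"name": "algebra-intro",   "teaches": "algebra",   "difficulty": 0.4},
]

def next_activity(profile, activities=ACTIVITIES):
    """Pick an activity for the learner's weakest skill whose difficulty sits
    just above current mastery (a simple 'challenge zone' rule)."""
    weakest = min(profile, key=profile.get)
    candidates = [a for a in activities
                  if a["teaches"] == weakest and a["difficulty"] > profile[weakest]]
    return min(candidates, key=lambda a: a["difficulty"]) if candidates else None

# Mastery estimates as they might arrive from a learning analytics layer.
profile = {"fractions": 0.5, "algebra": 0.7}
print(next_activity(profile)["name"])  # -> fractions-game
```

The hard part is not this loop but its inputs and outputs: the launch protocols and the analytics-derived profile that the list above describes.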
In all of these cases, what really matters is:
- the emergence of an ed-tech market that will address in the correct order all the different pieces of this complex jigsaw;
- ending interference by government funding programmes in the development of technologies that governments do not understand and that in any case require risky, entrepreneurial investments;
- instead, establishing processes that challenge uncompetitive markets and encourage the industry to develop appropriate interoperability specifications.
The withdrawal of government interference is already happening in the UK, which puts the UK in a particularly strong position to move on to the next step.
better interoperability is the next affirmative step that we need to take
Interoperability is not only necessary to the emergence of a true ed-tech market—different types of interoperability, as listed above, are also essential prerequisites for many types of development, such as learning analytics, e-portfolios, serious games and adaptive learning.
It is therefore clear from this out-with-the-old tour of ed-tech bubbles that better interoperability is the next affirmative step that we need to take. It will therefore be to the theme of interoperability that I shall return (brimming with positivity) in the new year.
- The problem with “Technology Enhanced Learning” criticises this acronym and the academic-led initiatives that are commonly subsumed by the term. A shorter version of the analysis of MOOCs is included.
- What do we mean by “content”? analyses the use of this poorly-defined term, which often excuses low-grade, non-interactive, information-bearing resources. The post argues that there are many types of content, and that the sort we really need bears not information but activity.
- Learning analytics for better learning content explores the preconditions for enabling learning analytics to be used to improve the quality of learning content (one of which is commercial provision of content), concluding that interoperability is the most important.
- Home page, with a full listing of posts on this blog.
Quoted in Resources for Learning, by L C Taylor, Penguin Education, 1971, p. 22. L C Taylor takes the quotation from Anthony Chenevix-Trench, “The Public Schools”, in Peter Bander (ed.), Looking Forward to the Seventies, Smythe, 1968, p. 76.
 Teaching as a Design Science, Diana Laurillard, Routledge, 2012, p. 4.
[7—deleted text] “while the Higher Education Funding Council for England has funded an OER programme worth £50 million; “. I have been told by David Kernohan, who ran this programme, that my valuation of this programme was inaccurate, that “they didn’t even spend 15 million” and this was on distribution infrastructure and not content. See http://edtechnow.net/2012/12/29/moocs-and-other-ed-tech-bubbles/#comment-924.