A presentation given to an Ad Hoc group in ISO/IEC SC36, responsible for scoping future standards work for digital learning content
Learning content is a divisive concept. Over the last few years it has become increasingly fashionable to criticize “content-driven” systems as encouraging transmissive or instructionalist styles of teaching. Ian Usher from Buckingham County Council reported in 2008 that “the best work we’ve seen within our Moodles in Buckinghamshire hasn’t come from great swathes of pre-produced content but from interactions…between learners and other learners (with teachers in there as well)”. This echoes a 2006 article by Stephen Heppell stating that “Content isn’t king any more, but community might just be sovereign”.
There are two questionable assumptions that lie behind this now established orthodoxy:
- the assumption that content and community are opposed to one another;
- the assumption that we know what we mean by “content” in the first place.
The following presentation argues that the problem with the concept of learning content is not that it is pedagogically flawed, but that it is misunderstood.
SC36 is the committee of the International Organization for Standardization (ISO) which deals with technology in learning, education and training (LET). I gave this presentation to an ad hoc group in SC36 which is scoping future work on standards for digital content, currently the preserve of specifications like IMS Common Cartridge, IMS Question and Test Interoperability (QTI) and SCORM.
One of the most fundamental questions in any scoping exercise is surely the Socratic question: “what is x?”
There is not much doubt as to how the term is used. We use it to refer to PowerPoints, Word documents, PDFs, graphics, audio, video, animation, and layout aggregations like HTML and ePub.
Trying to generalize this type of usage into a single term, you might come up with something like “expositive media”. “Expositive”, if you haven’t come across it before, means “for the purpose of disseminating information”.
So my question in this presentation is not so much “how do we use the term ‘content’?” but “how should we use the term ‘content’?”
To begin at the beginning, you would say that “content” was something that was contained in a container—like the contents of a bag.
We use the term in the same way for what is supported by an infrastructure. Even though the metaphor is somewhat mixed (things belonging on and not in an infrastructure), the fact is that in the English language, “content” is the only collective noun we have for the clients of an infrastructure.
Whether we are talking about containment or infrastructure, the common point is that content is a relative term. There is no point in talking about “content” without having a good idea of the answer to the question “content of what?”
Something might be contained by several things. Both containers and infrastructures can be nested: the intermediate members of this series of Russian dolls both contain and are contained. Though the analogy of the Russian dolls is not perfect because each contains only a single thing…
…while generally the whole point of a container or infrastructure is that they contain or support many things.
In the digital environment, we are already very familiar with the importance of nested infrastructures. It could be said that computer hardware provides an infrastructure which supports operating system content (the content in this case normally being singular).
The operating system is itself an infrastructure which supports a wide range of software applications.
If you take a web browser as one of those applications, this too is an infrastructure which supports the display of a wide range of web content.
And an online app like Google Maps is also an infrastructure, supporting a wide variety of map content.
So when you look up a route to work on your smartphone, you are depending on a complex series of nested infrastructures—and at each level of that hierarchy, a different infrastructure defines a different type of client content.
In the same way, we can expect to find many different types of content in an ecosystem for education technology.
But although we recognize many different content formats, we currently recognize only one general type of learning content. Our ecosystem is flat.
It only contains expositive media.
And most current management systems—LMSs and VLEs—do little more than distribute and host that expositive media.
In an age of cloud computing, no-one really cares about the location of data any more. This guy seems to be making a phone call from up a mountain—but he might just as well be searching his contacts or checking Google Maps to find out how to get to work. In an internet-enabled world, it is the management of learning that matters, not the location of data.
And as any educationalist knows, learning does not proceed through the dissemination of information. Otherwise, all we would have needed to do is lock children up in a room with a good quality encyclopedia. That would have been just too easy. Learning proceeds through activity (we learn by doing), and it is the activities which reference (and require the manipulation of) information.
So not only have we ended up with a flat ecosystem, but having defined only a single type of stuff that is supposed to make up the learning process, we chose the wrong type of stuff to represent as the content of a learning process. We chose information, when it is activity that matters.
Activity can be handled traditionally. There are a lot of computers in this picture: people are looking up information and maybe writing notes—but the central, organizing activity is a traditional classroom discussion.
Or the activity itself can be digitized. This girl might also be participating in a discussion, this time with remote participants; or she might be playing a multi-player game (a sort of structured conversation), exploring a simulation or completing an assessment.
It is not that there is anything wrong with the traditional off-computer activity: in a blended learning environment, we need both traditional and digital activities. It is just that there are particular advantages in using digital activities, which we are failing to exploit.
Digital activities are delivered by digital services—mobile apps or web-services, for example. So the service—maybe a testing engine—is itself an infrastructure that can deliver a single type of activity and will deliver many different instances of that activity, each fitted to a different learning context. As before, each different activity will reference a different set of expositive media. We are now seeing the kind of useful hierarchy of nested infrastructures that I discussed earlier.
The red, horizontal line no longer represents a line of progression but shows that activities generally require contextual input data and produce outcome data, such as scores. What in the SCORM tradition was called runtime data requires central management by a different infrastructure (you might call it a Learning Management System, or what ADL’s Tin Can is now calling a Learning Record Store), which you might place at a higher level than the instructional service responsible for delivering the learning activity.
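To make the idea of outcome data concrete, here is a minimal sketch of what such a runtime-data record might look like, loosely modelled on the actor/verb/object shape used by ADL’s Tin Can (Experience) API. All the names, URIs and values here are illustrative assumptions, not part of any normative specification.

```python
import json

# A hypothetical runtime-data record: who did what, to which activity,
# with what result. Field names follow the actor/verb/object pattern of
# Tin Can-style statements; the specific values are invented.
statement = {
    "actor": {"name": "Jane Learner", "mbox": "mailto:jane@example.com"},
    "verb": {"id": "http://example.com/verbs/completed"},
    "object": {"id": "http://example.com/activities/fractions-quiz"},
    "result": {"score": {"scaled": 0.85}, "success": True},
}

# The instructional service would send this record to the central
# learning-records infrastructure; here we just serialise it.
print(json.dumps(statement, indent=2))
```

The point of the shape is that the quiz engine which produced the score and the mark-book which stores it need agree only on this record, not on each other’s internal formats.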
We are starting to develop nested infrastructures. Our ecosystem is no longer flat.
I remember an IMS conference in Texas, at which one delegate said that what we needed was the verbs of learning, not just the nouns. I think he was saying the same thing that I am saying here: we need the activities, not just the information.
One of the reasons why we don’t have so much activity-based content is that learning activity software is expensive to produce.
When we look to data standards, that means we should not just be talking about file formats or insisting that all data is open. The ability to use proprietary data formats to protect your rights in innovative types of software encourages people to invest in the difficult and risky process of developing those new types of instructional software. And we really need that investment.
We should not think that open standards are incompatible with proprietary technologies. Quite the contrary. We live in a digital world in which we are used to plugging proprietary technologies together. We plug our iPods into our amplifiers and headphones, just as we have plugged our toasters into the electricity supply for fifty years.
In 2009, I did some work for Becta, conducting a survey of 18 leading publishers supplying learning content to UK schools. I asked them to prioritize a range of requirements for interoperability—and the results are shown on this graph.
At the top of their priorities came a group of requirements which, in the SCORM tradition, would be referred to as run-time data: reporting scores to a common mark-book, saving state, managing authorization (i.e. single sign-on), and returning creative product to repositories, e-portfolios and show-cases. All these are types of interoperability which allow individual suppliers to develop innovative, activity-driven software, unconstrained by any common file format—yet at the same time to plug in to common services and exchange data.
At the bottom of their priorities came a group of requirements which could be characterized as common file formats: IMS QTI, ePub, structured content and Becta’s Common File Format for interactive whiteboards. All these common file formats either focus on expositive media or (in the case of QTI) encode activity in a way which limits further innovation.
Unlike the flat grass with which I pictured our current ecosystem, we need a rich, activity-driven ecosystem, with many nested and interlinked infrastructures, and many different types of content.
This is a complex ecosystem which is going to be created by innovators. For both reasons it cannot be described by standards. What the standards community ought to be doing is trying to find the abstract principles which will allow such an ecosystem to grow. Fractal theory has shown that much of the complexity of nature is based on relatively simple mathematical formulas. We should be looking for the same kind of simple, enabling principles.
In order to find those principles, we need to have some idea of the range of content that will be contained within the ecosystem. Yes, it will contain those types of expositive media (text, video, HTML, graphics, audio) which are what we generally think of when we talk of “learning content” at the moment.
But it will also contain the activities which will use those media in educationally productive ways (creative tools, assessment, communication, games).
It will contain the services which will deliver those activities (apps, widgets, web-services, tools, hardware drivers).
It will contain the learning objectives which define the purposes for which those activities will be used in a formal, educational context (abilities, competencies, aptitudes, curricula).
It will contain the metadata which will describe not just the bibliographic information (such as rights and coverage), but also the technically-precise behaviors, and the subjective opinions of different user communities about how good something is and how it is best used.
It will contain different types of aggregation which will orchestrate the services, sequence the activities, and lay out or organize the media resources.
It will contain the management data (what in SCORM was called runtime data), which will track activity and outcomes (grades, authentication, interactions data, reflections and comments, creative product).
And it will contain the information required to manage proprietary technology appropriately: file associations which link particular types of data with their corresponding players, viewers, plug-ins or editors.
These eight different types of content will all be defined by other people—not by the formal standards community. But I believe that there is a ninth type of content which does need to be defined by the standards community.
In order to explain what that ninth form of content should be, I want to run through some of the basic types of interaction which are going to occur in the ecosystem: an ecosystem which is populated by services (the verbs) and data objects (the nouns).
Both services and data objects will be described by metadata.
Services will need to discover each other (by reading each other’s authoritative metadata)…
…in order that they can establish communications and exchange data with each other.
Services will also need to discover the data objects…
…so that they can import them and manipulate them appropriately.
Data objects cannot actively discover each other, but they can be aware of each other (that awareness being given to them at the time of authoring)…
…so that they can reference each other. We live in a world of linked data.
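The interactions just described—services discovering each other and the data objects they can handle by reading metadata, and data objects referencing one another—can be sketched as follows. Every name, type and field here is an invented illustration of the pattern, not a real registry or protocol.

```python
# Hypothetical metadata registries. A real ecosystem would hold these
# in distributed, authoritative metadata, not in two Python dicts.
services = {
    "quiz-engine": {"provides": "assessment", "accepts": ["quiz-item"]},
    "markbook":    {"provides": "score-store", "accepts": ["score"]},
}

data_objects = {
    "fractions-quiz":  {"type": "quiz-item", "references": ["fractions-video"]},
    "fractions-video": {"type": "video",     "references": []},
}

def discover_services(provides):
    """A service reads other services' metadata to find a partner to talk to."""
    return [name for name, meta in services.items() if meta["provides"] == provides]

def discover_objects(service_name):
    """A service reads data objects' metadata to find the ones it can manipulate."""
    accepted = services[service_name]["accepts"]
    return [name for name, meta in data_objects.items() if meta["type"] in accepted]

# The quiz engine finds a mark-book to report scores to...
print(discover_services("score-store"))
# ...and the quiz items it knows how to deliver; the quiz in turn
# references the video it uses, linked-data style.
print(discover_objects("quiz-engine"))
print(data_objects["fractions-quiz"]["references"])
```

The design point is that discovery is driven entirely by metadata: neither service needs any built-in knowledge of the other.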
All of these interactions require that all participants can understand the metadata. In the past, we have achieved this understanding by creating standard data specifications. Anyone wanting to read the metadata just has to look up the meaning of particular terms in a standardized set of definitions.
Where one specification is not enough, specifications are grouped together into reference models, like SCORM, or families of specifications produced by a single organization.
The trouble with these standardized data models is that they are rigid and brittle. They are difficult to change and if you try, more often than not they break. As soon as a standard specification achieves traction in the market (which is what we are aiming for) then it becomes very difficult to introduce changes. What matters in a standard is not quality or fitness-for-purpose but currency. This is why it became very difficult to update SCORM 1.2 to SCORM 2004 or QTI 1.2 to QTI 2.0.
Another way of thinking about the standards impasse is in terms of the English aphorism about “putting a spanner in the wheel”. Wheels are meant to turn—this represents iterative innovation and development—but the spanner stops this happening.
If education technology is to be used to transform education, then it requires innovation.
And innovation requires interoperability because no-one can do it all; because innovation is often driven by small niche companies; and because at the same time, education is a continuous and interlinked process in which individual activities and services have to play their part in a single, integrated environment.
And interoperability requires some sort of common understanding, which has normally been encoded within a standard.
But as soon as you fix that understanding in a standard, further innovation is blocked.
The solution is not to standardize the data model but to create the standards (perhaps we could call them meta-standards) to allow for the flexible specification of data models. This will enable an iterative cycle of innovation, encompassing product development, implementation-led interoperability trials, and the flexible definition of temporary, extensible data models.
In terms of my earlier diagram, the star (representing the standard) and the blue books (representing the data definitions) are now separate. Metadata references the data definition and the data definition references the standard, which can be thought of as a type of schema language similar to XSD but more specialized.
These definitions are the ninth type of content which is required by our learning ecosystem.
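The three-layer arrangement described above—metadata referencing a data definition, which in turn references the meta-standard it is written in—might be sketched like this. The URIs, field names and the toy validator are all assumptions made for illustration; the real meta-standard would be a far richer schema language.

```python
# The meta-standard: the schema language in which definitions are written.
META_STANDARD = "http://example.org/meta-standard/v1"  # invented URI

# A community-authored data definition, conforming to the meta-standard.
data_definition = {
    "id": "http://example.org/defs/quiz-score/v2",
    "conforms_to": META_STANDARD,
    "fields": {"learner": "string", "scaled_score": "number"},
}

# A metadata record that references the definition rather than
# embedding a fixed, standardized data model.
metadata_record = {
    "definition": data_definition["id"],
    "learner": "jane",
    "scaled_score": 0.85,
}

def validate(record, definition):
    """Check a record against the fields its referenced definition declares."""
    expected = {"string": str, "number": (int, float)}
    return all(
        isinstance(record.get(field), expected[ftype])
        for field, ftype in definition["fields"].items()
    )

print(validate(metadata_record, data_definition))
```

Because the definition is itself just data, a community can publish a v3 that extends v2 without waiting for a standards body, while any consumer can still validate records it receives against whichever definition they cite.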
You might object that you won’t get any interoperability if everyone ends up writing their own data definitions…
…but these data definitions will not exist in isolation. They will reference each other, nesting, extending and mashing each other up.
As for the extent of interoperability that will be achieved, it must be left to an open and competitive market to discover exactly what degree of interoperability is appropriate.
The circles in this slide represent different communities of practice or proprietary interests, and the overlap represents the degree to which they interoperate. There may be good reasons why the different communities should not converge entirely: different communities of practice normally have different requirements and they should not be forced into the same straitjacket. Different organisations may wish to try out experimental approaches or protect their legitimate return on investment.
At the same time, these centrifugal forces are balanced by centripetal forces: the utility of software is increased if it can integrate as far as possible with other systems.
So long as the market is open, fair and competitive, it will determine where the equilibrium between the centrifugal and centripetal forces lies. It is not for the standards community to make this call: what we can do is to provide fluidity, to help the market find that equilibrium point itself.
Regulatory approaches are often ineffective. The first type of ineffective regulation is one that is ignored. King Canute was an Anglo-Danish King of the eleventh century who is said to have ordered the tide not to come in. Obviously, the tide ignored him.
Robby Robson, ex Chair of the IEEE Learning Technology Standards Committee jokes that standards must be a good thing because there are so many of them. Most of these so-called standards have not been implemented by anyone at all.
The French aviator, Antoine de St Exupéry, portrayed a king in his children’s story, The Little Prince. Every day, this king ordered the sun to rise and set. But he made sure to give these orders only “when the conditions were favourable”. In other words, he only gave orders for things to happen which were going to happen anyway.
Instead of a regulatory model which pretends to tell people what to do but rarely makes any actual difference, the standards community should be looking to play an enabling role, oiling the wheels that want to turn but can’t.
In summary, we do not need standardized data models and file formats, which specify functionality and restrict innovation—but the meta-standards of an extensible ecosystem, which are designed to support the innovation that education technology so badly needs.
You might ask where e-learning will be in ten years’ time. The traditional approach to standards development would come up with an answer, creating specifications to try and steer people to that destination.
But anyone who believes in the importance of innovation will say, to the contrary, that we hope in ten years’ time to have ended up somewhere really surprising, somewhere that no-one could have predicted. If the standards community has created the mechanisms which allow the innovators to take us to such a place, then it will have done something very valuable.
Very interesting article and point of view. Would like to know more about the research you did for BECTA. Can you contact me?
Hi Micha, Thanks for the comments. I have sent you a copy of the survey. You may also be interested in work that I am currently doing for ISO/IEC JTC1/SC36 on defining an e-Textbook. We shall shortly be reporting on this in a webinar – I will make sure you get an invitation. If anyone else is interested, do drop me a line at email@example.com. Crispin.