Learning content customisation and adaptation

[Image: Custom car]

A copy of a comment regarding the difference between customisation and adaptation, and the importance of the latter to learning content that encapsulates pedagogy.

It is a central argument of this blog that the attempt to apply technology to the improvement of education has been held back by the lack of education-specific software. Such software will generally encapsulate pedagogy. An objection to this approach was recently raised by Peter Twining in a useful discussion on his blog, EdFutures. It is a little difficult to link directly to the part of the conversation where this occurs – the best way is probably to follow the link to the discussion page and then to search for “Re Technology Enhanced Learning”, which is the title of the thread in which this discussion occurs.

To paraphrase the general objection to software that encapsulates pedagogy: such software might be seen as a way of scripting lessons that disempowers the teacher. At the top level, I would respond that many teachers have a pretty shaky understanding of pedagogy, so the ability to put pedagogically proven tools into their hands is a key way in which we will empower (not disempower) teachers (see my Education’s coming revolution). As for the nature of those tools, I certainly accept that the way in which software is used in the classroom needs to be flexible, allowing the teacher (the professional on the spot) to apply the software in the right way. This provides the background to my conversation with Peter Twining regarding the customisation or adaptation of education-specific software.

Peter’s argument is that, according to an OU project in the 1990s called SoURCE, in which he was involved, the pedagogy encapsulated in software often needed to be subverted by the teacher—and that this suggested that the encapsulation of pedagogy was something of a blind alley. I copy below my reply to Peter, followed by my conclusion.

Reply to Peter Twining

It is a little hard to respond in detail, (a) because of the scarcity of project outputs and (b) because of the amount of time it would take. However, working from the SoURCE overview at http://kn.open.ac.uk/public/getfile.cfm?documentfileid=2220, I would make the following comments.

  1. Customising content was the objective of the project—so it is not very surprising that it found that it could do what it set out to do. It does not sound as if the project made much attempt to assess the extent to which such customisation was found to be necessary or desirable by teachers and lecturers.
  2. The extent to which software needs to be customised will depend to some extent on the quality of the software, the validity of its original objective, and the availability of alternative software that targets the teacher’s preferred pedagogy. An assessment of the significance of the project outcomes will need to start with an assessment of the quality of the software being used and the validity of the pedagogic model that it encapsulates.
  3. That said, I think the principle of adaptability is a very important one. Let me propose a distinction between “customisation” and “adaptability”, on the basis that the first involves a subversion of the original pedagogical intention of the software, while the second does not, but rather represents the application of the software to a variety of different contexts. So your paper quoted above says:
Thus, for example, you can customise the Elicitation Engine by changing the artefacts that it is manipulating and/or by using it as a reflective tool for students, or as an assessment tool for staff to identify students’ misconceptions.

I haven’t worked out what the “Elicitation Engine” is yet—but it seems clear to me that the two ways in which you are changing the application of the software represent my definition of “adaptability” and do not represent a subversive “customisation”. First, “changing the artefacts that the software is manipulating” represents the application of the same encapsulated pedagogy to a different subject area—this strikes me as an essential feature of any pedagogy-encapsulating software. Second, the use of the Elicitation Engine as either a “reflective tool for students or as an assessment tool for staff” boils down to different ways of using the tool and its outcome data in a wider ecosystem. This too is an essential characteristic of a digital ecosystem built on open interoperability standards: the student and teacher can play Lego with their software components, modelling different pedagogical processes at the macro scale through different combinations of pedagogy delivered by different software applications at the micro scale. Not all pedagogy is encapsulated in the atomic learning activity—the higher-level pedagogy is achieved by the combination of those atomic elements.

In short, neither of these examples seems to me to represent the customisation of encapsulated pedagogy in a way that is subversive of the original intention of the software.

One further point. Customisation by tweaking program code or using software for a purpose that was not intended is likely to be difficult and to cause confusion amongst users—particularly when deployed in a class of 30 who are bound to find out any flaws in the software pretty fast. The example that I quote from your paper illustrates the two principles of *adaptability* which I think are essential:

  1. adaptability by parameterised launch (in this case, providing a different list of resources) with parameters being specified in user-friendly interfaces;
  2. adaptability by different selection and combination, with these “sequences” and other aggregations of content being created in easy to use, drag-and-drop authoring tools.
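
To make the first of these principles concrete, here is a minimal sketch of what a parameterised launch might look like, assuming a teacher-facing form writes the parameters and the Learning Activity Platform passes them to the activity at launch time. All of the field names are my own invention for illustration; they do not come from any published specification.

    // A hypothetical launch descriptor for a pedagogy-encapsulating activity
    // (for example, a timeline editor). The field names are invented for
    // illustration and do not belong to any existing specification.
    interface LaunchDescriptor {
      activityUrl: string;                 // where the activity software lives
      learnerGroup: string;                // the class or group being taught
      learningObjective: string;           // curriculum reference chosen by the teacher
      resources: string[];                 // artefacts the activity will manipulate
      mode: "reflection" | "assessment";   // how the outcome data will be used
    }

    // The same encapsulated pedagogy applied to a different subject, simply
    // by supplying different artefacts and a different mode.
    const historyLaunch: LaunchDescriptor = {
      activityUrl: "https://example.org/timeline-editor",
      learnerGroup: "Year 9, Set 2",
      learningObjective: "Causes of the First World War",
      resources: ["source-pack-1914.json"],
      mode: "reflection",
    };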

Neither of these principles undermines the value of software that encapsulates pedagogy; rather, both enhance it.

Conclusion

From its inception in the 1990s, the Sharable Content Object Reference Model (SCORM) attempted to promote the reusability of learning content (the foremost of a number of qualities that became known in the SCORM community as the “-ilities”). In this, SCORM failed.

In my view, the failure was due to two key problems:

  1. an inherent lack of robustness in the SCORM protocols (meaning that one person’s implementation of the specification did not always interoperate with someone else’s and content authored for use in one system could not be reused in another system);
  2. a lack of attention to another of the key “-ilities”: adaptability. This could be achieved, as outlined in my reply to Peter, both:
  • by parameterising content launch (remember, by “content”, I mean activity software, or more specifically, a Learning Activity Platform);
  • by allowing the combination of atomic learning activities within sequences—SCORM attempted to deliver this in its series of 2004 editions through an IMS specification called Simple Sequencing, which never worked very well.
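
To illustrate the second point, the sort of aggregation I have in mind could be as simple as a declarative list of atomic activities, each launched with its own parameters. The sketch below is purely illustrative and deliberately much simpler than IMS Simple Sequencing; none of the names come from an existing specification.

    // A hypothetical sequence of atomic learning activities, of the kind that
    // might be assembled in a drag-and-drop authoring tool. Illustrative only;
    // this is not IMS Simple Sequencing, which defined far more elaborate rules.
    interface LaunchDescriptor {               // as in the earlier sketch
      activityUrl: string;
      resources: string[];
      mode: "reflection" | "assessment";
    }

    interface SequenceItem {
      launch: LaunchDescriptor;                // one atomic activity
      completionRule: "attempted" | "passed";  // when to release the next item
    }

    interface LearningSequence {
      title: string;
      // The higher-level pedagogy is carried by the combination of items,
      // while each item keeps its own encapsulated pedagogy intact.
      items: SequenceItem[];
    }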

Seen against this history, Peter’s point could be articulated as follows: pedagogy-encapsulating software is hard to deploy unless these technical issues are sorted out in a way that allows flexible deployment. A key step in sorting out the pedagogical argument lies in the creation of robust interoperability standards that address the issues that SCORM failed to solve.

Advanced Distributed Learning (ADL, the stewards of SCORM) are promoting the Experience API (xAPI, also known as Tin Can) as the replacement for SCORM. I think the specification shows a lot of promise—but to be really effective, it will need to be embedded in a wider ecosystem of standards. xAPI addresses the reporting of learning activity outcomes but does not address either sequencing or parameterised launch, what I see as the two key aspects of content adaptability that Peter and I agree is needed.
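
For readers unfamiliar with xAPI, its core is the actor–verb–object statement that an activity reports to a Learning Record Store (LRS). The minimal sketch below (with illustrative identifiers) shows why I say it covers the reporting of outcomes but says nothing about how the activity was launched or where it sits in a sequence.

    // A minimal xAPI-style statement, as an activity might send it to an LRS.
    // The actor, activity and score values here are illustrative only.
    const statement = {
      actor: { mbox: "mailto:learner@example.org", name: "A Learner" },
      verb: {
        id: "http://adlnet.gov/expapi/verbs/completed",
        display: { "en-US": "completed" },
      },
      object: { id: "https://example.org/activities/timeline-exercise" },
      result: { success: true, score: { scaled: 0.85 } },
      timestamp: new Date().toISOString(),
    };
    // Nothing in the statement says how the activity was launched, what
    // parameters it was given, or where it sits in a wider sequence.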

Finally, the prospect of adaptable software that encapsulates pedagogy illustrates another distinction that came up in my extensive discussion with Peter Twining at EdFutures. Peter maintained that one reason why technology needed to be embedded across the curriculum was that technology was changing the essential nature of all subject disciplines. I disagreed. My argument has been that the essential nature of most subject disciplines (including Computing itself) is not changing, but the ways in which those disciplines are applied is changing. What therefore needs to change in the teaching of these different subjects is the contextualisation of age-old principles and not the age-old principles themselves. If you are going to teach vectors, do so in the context of the way astronauts float around in the International Space Station, rather than the way that sailing ships move through a tidal flow. The ability to vary the contextualisation of learning objectives is an important aspect of pedagogy (I am currently rewriting my earlier post on pedagogy to cover this point more thoroughly). In the context of the current debate about the Computing Curriculum, this point underlines the importance of making the distinction between:

  • learning objectives (the end of education that is encoded in the curriculum);
  • pedagogical principle (the means of education, at an abstract level);
  • teaching practice (the application of pedagogical principle to a particular teaching environment).

5 thoughts on “Learning content customisation and adaptation”

  1. I follow these discussions with interest. However, they can be rather abstract. For a debate about practice, some examples are often useful. You talk here about software that encapsulates pedagogy. Can you give some (good) examples to illustrate what you mean? Thanks

    • Hi David,

      Thanks for the very fair comment. A friend in America sent me a similar challenge after I wrote “Managing progression in LAPland” and I did some work on a long and moderately complex use case for the blog, until life got in the way. I will try and finish that piece off over the next couple of weeks.

      In the meantime, you might look at page 19 and the accompanying notes on page 20 of the paper at http://www.saltis.org/papers/silt.pdf that I produced several years ago when setting up my SALTIS industry group. This is a use case focused on the use of a timeline editor in History.

      Part of the problem with providing use cases is that I am talking about an approach to education technology that does not yet exist. I hope that the scenario I describe is plausible – but it is inevitably somewhat speculative until it is actually put into practice.

      Let me know what you think.
      Crispin.

    • Another example of a digital pedagogy that could usefully be encapsulated within an education technology product is “peer instruction”, described very persuasively by Professor Mazur at http://www.youtube.com/watch?feature=player_detailpage&v=y5qRyf34v3Q#t=1380s. This not only requires clicker technology and associated polling software; for the busy school teacher who may be less of a subject expert than Professor Mazur, it also requires a bank of suitable questions and perhaps some AI software that will advise on appropriate questions to ask particular classes in the context of particular learning objectives. This AI component would be able to profile the class’s current conceptions and misconceptions, and also track the success of different questions when previously asked of a range of similar classes.
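
      By way of a purely hypothetical sketch (every name below is invented; this is not a description of any existing product), such a component might score each question in the bank against the class’s current misconception profile and the question’s track record with similar classes:

        // Hypothetical sketch of the question-advising component described
        // above. All names are invented for illustration.
        interface QuestionRecord {
          id: string;
          objective: string;                  // learning objective it targets
          misconceptionsProbed: string[];     // misconceptions it tends to expose
          successWithSimilarClasses: number;  // 0..1, from historical data
        }

        function adviseQuestion(
          bank: QuestionRecord[],
          objective: string,
          classMisconceptions: Set<string>,
        ): QuestionRecord | undefined {
          return bank
            .filter((q) => q.objective === objective)
            .map((q) => ({
              q,
              // Favour questions that probe the misconceptions this class
              // actually holds, and that have worked with similar classes.
              score:
                q.misconceptionsProbed.filter((m) => classMisconceptions.has(m))
                  .length + q.successWithSimilarClasses,
            }))
            .sort((a, b) => b.score - a.score)[0]?.q;
        }

      Crispin.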

  2. Very intriguing, thanks for sharing. You mention that the “xAPI addresses the reporting of learning activity outcomes but does not address either sequencing or parameterised launch, what I see as the two key aspects of content adaptability.” Do you think that content adaptability/sequencing should be described/defined by a standard (like the xAPI)?

    The reason I ask is that the xAPI provides the means for “learning systems” and content to poll or query an LRS for historical data (this data is portable & contextual). This allows content to be delivered intelligently based on whatever rules or processes the content abides by in the learning & delivery system.

    It is interesting, though, that future work on the TLA (Training & Learning Architecture) by the ADL may include something related: “Intelligent content brokering to meet the needs for individualized learning content and systems” – http://www.adlnet.gov/introducing-the-training-and-learning-architecture-tla

    Just some thoughts, interested to hear your take. Thank you.

    • Hi Ali,

      Thanks very much for the comment.

      Yes, I think there are places for a couple of standards here covering content launch (which would cover parameterisation) and sequencing. The latter is a topic we discussed in a group called LETSI (to which Mike Rustici of TinCan also belonged), and which I summarised in a video in 2009 at http://www.saltis.org/videos/letsi_sequencing.htm.

      But I think there is a prior step to each of these requirements – which is what in LETSI we came to call “Learning Activity Definition”. It is hard to sequence stuff unless you can launch it, and it is hard to launch stuff unless you have a clear handle on what it is you are launching. A URL is not enough, because you also need a bunch of metadata to describe the behaviour of whatever it is you are launching. If you build assumptions about the behaviour of what you are launching into the standard (as was done, for example, with SCORM), you end up with a very inflexible system.

      Nor is it enough just to define a single spec for that metadata because as people develop innovative new forms of learning activity software, they will develop new behaviours – and new metadata to describe those behaviours.

      So what you need is an abstract and extensible shell. In LETSI, we came to call the shell the “learning activity definition” and the extension mechanism the “Data Model Definition” – though that name could perhaps be improved. What was missing from all these discussions was a standards process that encouraged participation by real implementers. So prior to all the good conceptual and technical stuff are strong standards processes, linked to real business models.
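
      To illustrate the sort of shell I mean (a hypothetical sketch with invented names, not the LETSI draft itself), the definition would carry a small, stable core plus an open-ended set of extensions, each identified by a URI, describing the behaviour of the thing being launched:

        // Hypothetical sketch of an extensible "learning activity definition":
        // a small, stable core plus named extensions ("data model definitions")
        // describing behaviour. The names are invented for illustration.
        interface LearningActivityDefinition {
          id: string;         // stable identifier for the activity
          launchUrl: string;  // a URL alone is not enough...
          title: string;
          // ...so behaviour is described by extensions keyed by URI, allowing
          // new kinds of activity to define new data models without changing
          // the core specification.
          dataModels: { [dataModelUri: string]: unknown };
        }

        const timelineExercise: LearningActivityDefinition = {
          id: "urn:example:timeline-exercise",
          launchUrl: "https://example.org/timeline-editor",
          title: "Causes of the First World War",
          dataModels: {
            "https://example.org/ext/parameterised-launch": {
              parameters: ["learnerGroup", "resources", "mode"],
            },
            "https://example.org/ext/outcome-reporting": { format: "xAPI" },
          },
        };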

      Learning Activity Definition is also important for learning analytics. I follow the progress of xAPI/TinCan with interest. At the moment, it seems to me that it is at the stage of storing records in the LRS. So far so good. But the real challenge will be to do useful things with that data and for that, you need to understand what the data mean and what their significance is. I suspect that it will be difficult to do that unless you know where the data came from (i.e. what learning activity sent them).

      The word “sequencing” suggests a somewhat fixed scripted approach to content aggregation. The idea of “content brokering” suggests something rather more on the fly, driven perhaps by AI and analytics. I suspect that there is a place for both, with short scripts as well as atomic activities being selected by the brokering systems. An analogy of asking “how large should a sequencing script be?” might be to ask “how big should a molecule be?” Almost any size you like, I think the answer to that one is.

      Hope this helps. You might like to look at my previous posts:
      * https://edtechnow.net/2013/03/27/lapland/ on learning activity platforms;
      * https://edtechnow.net/2012/11/30/six-technical-standards-to-build-an-ed-tech-market/ on what I think are the six key priorities for the standards community (I see TinCan as providing a solution to item 2);
      * https://edtechnow.net/2012/04/03/what-do-we-mean-by-content/ on why “content” is often an unhelpful term.

      Best, Crispin.
