A white paper proposing key ed-tech priorities for the world wide web
I co-wrote the following paper with Pierre Danet from Hachette Livre for the task force being run by W3C, the consortium responsible for the world wide web. The paper outlines what we see as the key priorities for the world wide web in the face of an emerging market for digital ed-tech. The basic premises of the paper were accepted in a call last Friday and over the next two weeks we will be working on a paper to describe in greater detail the specifics of the next steps that we believe need to be taken. This will be intended to form the prospectus for a W3C Community Group, which anyone who is interested in taking this work forwards is invited to join. Please email me at crispin.weston@saltis.org and I will forward your details to Pierre, who is leading the current scoping exercise. The original paper is currently on the W3C wiki page for this group.
The current state of the ed-tech market
There has been considerable recent interest in the potential of digital technology to transform education. While the OECD’s PISA reports have focused attention on the underperformance of many Western education systems, Massive Open Online Course (MOOC) platforms in the US have held out the prospect of scaling what have previously been regarded as elite courses, to make them accessible to motivated learners across the world.
There are now good reasons to believe that genuinely useful ed-tech is about to emerge. Mobile technology has recently made 1:1 device ratios attainable in the classroom, the cloud has made it easy to acquire and manage new applications, and touch screens have enabled the development of software that is intuitive to use. Recent improvements in HTML allow for the development of richly interactive online learning experiences. The power of modern technology to process big data, already being demonstrated in business environments, has the potential to help manage complex educational processes as well.
Against this background, W3C has convened a working group to consider how it can support these widely anticipated developments. We say “anticipated” because there is no reliable evidence that useful ed-tech has yet emerged. A 2012 government-funded meta-analysis by the UK’s Durham University concludes that “the correlational and experimental evidence does not offer a convincing case for the general impact of digital technology on learning outcomes” (page 3).
We suggest that there are three key barriers to the emergence of a dynamic ed-tech market:
- The lack of interoperability of systems and content that would allow different sorts of instructional software to work together in an integrated environment.
- The lack of discoverability of innovative new products.
- The institutional conservatism of many state-run education systems, which are often resistant to innovation and uncomfortable with the use of data as a management tool.
We believe that W3C is ideally placed to help to remove the first and second of these barriers. It has both the standing and the technical expertise required to create authoritative standards. As the steward of the world wide web, it is closely identified with the benefits of connectivity. The sense of moral purpose often associated with the world wide web gives it the stature required to lead an important initiative for the improvement of global education. At the same time, it will need to work with governments that are prepared to address the institutional conservatism to be found in their own formal education services.
The requirement for ed-tech
Education technology is not just about publishing. It is a common aphorism amongst teachers that “we learn by doing”. From this perspective, the role of the teacher is to design instructional activities that are appropriate to a given set of educational objectives, to motivate students to undertake those activities, to provide feedback on their performance, and to manage their progression from one activity to the next.
Digital technology has proved well adapted to managing complex processes and transactions in other businesses. Analytics software has shown that it can make sense of complex datasets. Digital games routinely support the sorts of rich interactivity that education requires at the instructional level. Yet none of these paradigms have been widely implemented in education, which still depends on the uncertain craft of the individual teacher, in a profession which in many subjects faces a chronic undersupply of well-qualified staff.
Most education technology to date has concentrated on the dissemination of information and on assuring the memorisation of such information through simple multiple choice tests. There has been recent interest in the potential of social networking technology, though it is not clear that this will make a significant contribution in schools (K-12), where (in contrast to professional development) students have limited expertise in the subject being studied.
Little has been done to develop education-specific software which:
- at the instructional level supports purposeful interactivity (both on and off the computer) in a manner analogous to digital games;
- at the learning management level, controls assignment, the sequencing of activities, reporting, analytics and accreditation.
Good data interoperability, especially between the learning management systems and the instructional content, is an essential prerequisite for the successful development of either class of software.
Such interoperability standards do not currently exist. If they are not developed soon, it is likely that control of an emerging ed-tech market will be captured by proprietary platforms.
SCORM: a lesson from the past
The Shareable Content Object Reference Model (SCORM) was a collection of informal standards published by Advanced Distributed Learning (ADL), a unit of the US Department of Defense. Two significant versions were produced: SCORM 1.2 in 2001 and SCORM 2004. The standard included the following components.
- Content packaging, a specification developed by IMS Global Learning Consortium (IMS GLC), comprising an aggregation of zipped content. The compressed folder included an XML manifest file, which provided a “substrate” for other types of metadata (such as LOM and Simple Sequencing).
- Learning Object Metadata (LOM). A metadata format standardised as IEEE 1484.12, containing classification information deemed relevant to education.
- Computer Managed Instruction (CMI): a data model inherited from the Aviation Industry Computer Based Training Committee (AICC), specifying the runtime data that a Shareable Content Object (SCO) might want to read and write to and from the LMS by which it had been launched. This included contextual information read from the LMS at launch, and learning outcome data reported back to the LMS at termination.
- The SCORM API, a JavaScript API providing the means by which runtime data was passed (a minimal sketch follows this list).
- Technical metadata, which in the case of SCORM included a single field in the content packaging manifest called ScormType. This could equal either “sco” (meaning the webpage used the CMI runtime) or “asset” (meaning it did not).
- Simple Sequencing, an IMS GLC specification which allowed the SCOs contained in the package to be adaptively sequenced by the LMS.
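To make the runtime concrete, here is a minimal sketch of a typical SCORM 1.2 exchange between a SCO and its LMS. The API discovery pattern and the call and data-model names are those defined by SCORM 1.2; the particular values written are invented for illustration.

```javascript
// Minimal SCORM 1.2 runtime exchange (illustrative values only).
// The SCO locates the API object that the LMS exposes in a parent window.
function findAPI(win) {
  while (win && !win.API && win.parent && win.parent !== win) {
    win = win.parent;
  }
  return win ? win.API : null;
}

var api = findAPI(window) || (window.opener ? findAPI(window.opener) : null);

if (api) {
  api.LMSInitialize("");                                   // open the session
  var learner = api.LMSGetValue("cmi.core.student_name");  // contextual data read from the LMS
  api.LMSSetValue("cmi.core.score.raw", "85");             // learning outcome data reported back
  api.LMSSetValue("cmi.core.lesson_status", "passed");
  api.LMSCommit("");                                       // ask the LMS to persist the data
  api.LMSFinish("");                                       // close the session
}
```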
SCORM achieved significant global traction in the education world between 2001 and about 2006. After this, interest waned for the following reasons.
- Legal disputes between ADL and IMS GLC over intellectual property rights restricted the development of the Content Packaging specification.
- Developments in browser security rendered the JavaScript API obsolete.
- Simple Sequencing failed to provide robust, plug-and-play interoperability.
- The CMI runtime provided no support for multi-player interactions.
- The CMI runtime data model was not extensible and did not, for example, support the transfer of creative product (such as essays or solutions to problems).
- There was no attempt to represent learning objectives (important both for adaptive sequencing and the reporting of learning outcomes) in a manner that was comprehensible outside the context of a particular course.
- SCORM was frequently specified in government procurements in circumstances which encouraged suppliers to produce the simplest possible implementations, generally avoiding use of the runtime. This undermined the reputation of the SCORM brand.
Recognising that it was not well placed to steward a de facto global ed-tech standard, the US DoD attempted in 2008 to transfer ownership of SCORM to a new non-profit organisation called LETSI. A conference was held to which over 100 white papers were submitted with proposals for improvements to the specifications. However, the transfer of ownership was prevented by legal challenges by IMS GLC, which owned IP rights in Content Packaging and Simple Sequencing.
The only substantive development to emerge from LETSI’s work was TinCan, also known as the Experience API (xAPI). Based on a LETSI prototype, this project was subsequently funded by ADL. TinCan updated the SCORM API to provide a modern transport protocol based on Web Services. It includes only a rudimentary data model and does not cover launch, metadata description or the sequencing of content.
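For comparison with the SCORM runtime sketched above, a minimal sketch of the xAPI approach: the activity POSTs a JSON statement to a Learning Record Store over HTTP. The actor/verb/object/result structure and the version header are part of the xAPI specification; the endpoint, credentials and identifiers below are invented for illustration.

```javascript
// A single xAPI statement sent to a Learning Record Store (LRS).
// Endpoint, credentials and activity identifiers are placeholders.
const statement = {
  actor:  { mbox: "mailto:learner@example.org", name: "Example Learner" },
  verb:   { id: "http://adlnet.gov/expapi/verbs/passed",
            display: { "en-US": "passed" } },
  object: { id: "http://example.org/activities/fractions-quiz",
            definition: { name: { "en-US": "Fractions quiz" } } },
  result: { score: { scaled: 0.85 }, success: true, completion: true }
};

fetch("https://lrs.example.org/xapi/statements", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-Experience-API-Version": "1.0.2",
    "Authorization": "Basic " + btoa("key:secret")
  },
  body: JSON.stringify(statement)
});
```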
Recommendations for action by W3C
W3C is well placed to create the “next-generation SCORM” which the LETSI initiative failed to provide in 2008. Such an initiative would address education’s requirement for purposeful activity integrated with analytics and process management.
Such an initiative would fall within the scope of the world wide web, so long as this is understood as a vehicle for providing data connectivity and not merely a medium for the distribution of content. Such a service for connectivity must not seek to restrict the type of applications it supports. It must be accessible to desktop and mobile apps as well as to web apps. This should not be seen as problematic because the key to preventing the capture of an emergent ed-tech market by proprietary interests is open runtime data, not necessarily open file formats.
We recommend that such a W3C initiative should adopt the transport mechanism provided by TinCan, adding the following components to complete a viable standards platform for education.
- A simple but consistent method of publishing metadata for learning content, supporting both discoverability and the declaration of runtime functionality.
- A data model description language which will enable supplier communities to specify new data structures in a consistent and extensible manner, allowing for the development of new metadata and runtime data models in a timescale that mirrors product innovation (a purely illustrative sketch follows this list).
- A new specification for the adaptive sequencing of learning content.
- A specification for the machine-readable description of learning objectives and curricula.
- A machine-readable data handling description language, allowing for the specification of procedures for data protection and privacy. Such a specification will allow governments and other institutions to produce consent forms, regulatory standards and legislative instruments in ways that software, sold on an international market, can easily understand and support. The precise definition of such procedures would also support an informed public debate on these issues. Without greater clarity in this area, it is likely that the sharing of educational data will run into serious political obstacles.
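As a purely illustrative sketch of the first two components, the metadata published for a piece of learning content might pair descriptive and classificatory information with a declaration of the runtime data models the content consumes and produces, each defined elsewhere in a machine-readable way. None of these property names, URLs or vocabularies exist yet; they are placeholders intended only to make the proposal concrete.

```javascript
// Hypothetical metadata "stub" for a piece of learning content.
// All names, URLs and vocabularies are invented placeholders.
const activityMetadata = {
  id: "http://example.org/activities/fractions-quiz",
  title: "Fractions quiz",
  description: "Ten adaptive questions on equivalent fractions",

  // Discoverability: classification against externally defined,
  // machine-readable learning objectives and curricula.
  objectives: ["http://example.org/curricula/maths/primary/fractions#equivalence"],

  // Declaration of runtime functionality: the data models this activity
  // reads from and writes to the learning management system, each specified
  // in the proposed data model description language.
  runtime: {
    consumes: ["http://example.org/datamodels/learner-preferences/v1"],
    produces: ["http://example.org/datamodels/quiz-outcomes/v1"]
  },

  // Data handling: a pointer to machine-readable data protection procedures.
  dataHandling: "http://example.org/policies/k12-minimal-retention/v1"
};
```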
Taken together, these specifications would form a coherent and viable platform to ensure the interoperability of educational data. Subsequent work could deliver convergence with open content formats, such as EDUPUB or IMS GLC’s Question and Test Interoperability (QTI), which can be supported on a case-by-case basis.
When ed-tech achieves significant impact on learning outcomes, the need to ensure equitable access to those technologies will become increasingly urgent. Viable technical solutions to the problem of accessibility will depend on the ability of management systems to pass student preference data to instructional software. Such a solution will be enabled by the educational data interoperability platform outlined above.
Alongside such technical work, W3C will need to seek partnerships with interested Departments of Education, which will have an important role in encouraging demand within their various jurisdictions for innovative and effective education technology that supports open data standards.
Crispin Weston & Pierre Danet
17 February 2015
An interesting document but your broadly negative view of the practice of education does show through even this somewhat technical context. It is not necessary to explain and justify the development of standards for interoperability (a sensible project) by sniping at an entire profession.
So, to improve the tone of your paper you could delete the phrase “… education, which still depends on the uncertain craft of the individual teacher …” because (i) that’s just a prejudice (based on positional arguments which can’t end) and (ii) you may be subtly implying that it is possible to provide a technical/software fix that does not suffer from either uncertain craft or uncertain pedagogy, which of course is an unfounded position.
Why should your proposal help fix the problem of ‘uncertain’ pedagogy?
You could delete this phrase without weakening the proposal overall but at the same time not alienate some of your audience through the repetition of this sort of Ken Robinson-like negativity about teachers and teaching.
Second, the statement that “The power of modern technology to process big data, already being demonstrated in business environments, has the potential to help manage complex educational processes as well” is just plain false and possibly complacent. Evidence (among other): the financial crash of 2008 which was caused in major part by foolish, stupid, and ignorant manipulations of business ‘big data’.
I don’t think education has much to learn from the mistakes made in the processing of big data by business except perhaps what it should avoid, both ethically and legally. There is yet much work to be done on this and so far we have no clear models of what data should be/could be amassed through online learning (particularly at school level), how it can be effectively processed, or how it is owned, managed and curated by appropriate authorities. I appreciate this paper is a step towards that although there is no clear strategy set forth here about how that process will be put into action, only a wish-list.
However, we need to be careful to avoid the assumption that merely defining standards cures the perceived ills of pedagogy that you discuss elsewhere in your blog. Like the statement about the craft of teaching this reference to big data could also be removed without weakening your paper, and this too might also avoid alienating some of your audience. Education as a social project needs neither the false promises of technocratic certainty nor the woolly, opportunistic data-processing principles of ‘business’.
Hello David, Thank you for the comment.
On your first point – there are two uncertainties: one of pedagogy and one of the skill of the teacher. Referring to the uncertain skill of the teacher is not to snipe at a whole profession: on the contrary, it is to recognise both the very great skill of some teachers and the very great importance of their skill. See my argument against MOOCs, eg at Online Educa Berlin: MOOCs will not work because scaling to massive assumes that you get rid of the human teacher – and that, I argue, will never work in formal education (http://www.online-educa.com/OEB_Newsportal/the-oeb-debate-2013/).
The argument I make is different: it is about the difficulty of staffing a service that is immensely labour-intensive and knowledge-intensive and needs to be scaled to the whole population. Do you not think that there is a problem with the introduction of the Computing curriculum when there are not enough teachers with experience of coding to teach it? And as knowledge becomes ever more specialised and technical and scientific innovation moves ever faster, this sort of skill shortage is only going to get worse. And don’t you think there is a problem that Dylan Wiliam reports research which shows that the amount of learning achieved by students depends on the quality of the teacher, which typically varies by a factor of 4? Would you find it acceptable if the death rates of one NHS surgeon were 4 times higher than another’s? That is the sort of uncertainty that I think needs to be recognised and addressed.
On your second point, regarding big data, the idea that the banking crash was due to data processing is a new one on me. I have heard many explanations, from short-termist cultures fueled by excessive bonuses, through the sub-prime mortgages fueled by interest rates kept artificially low by governments, to lax regulation, to “too big to fail” banks, through to out-and-out corruption. But now you say it was caused by data processing – really? I have, it is true, heard that there might be problems caused by automatic trading systems causing extra volatility in the markets – but that is a problem of feedback loops in high-velocity, automated systems – nothing to do with drawing conclusions from the analysis of big data, which is what fuels medical research and what allows Google Translate to work and what powers most businesses’ marketing efforts – however worthy the end goal, the methods seem to work pretty well.
I agree that education is a social project but I don’t think that means that it should not be efficiently run.
As you know, I am very critical of Sir Ken Robinson: see my post at https://edtechnow.net/2012/01/20/sir-ken-robinson/ and my comments about the importance of teachers, above.
I think we need to distinguish between regulatory or best practice standards (which do not concern this initiative) and interoperability standards – which will not of themselves cure the ills of poor pedagogy. They will, however, help stimulate the sort of ed-tech that will provide a medium for expressing pedagogies, in a way that will allow the good and the bad to be sorted out – see my “Unlocking pedagogical innovation” at https://edtechnow.net/2014/01/23/gutenberg/.
The value of software-based pedagogy lies in (a) the fact that software can automatically collect learning outcome data to provide empirical evidence of what works and what doesn’t, (b) that it is more easily replicable than personal practice so, whether it is good or bad, it is at least more consistent (which also helps the sorting process), and (c) the fact that digital resources can be produced centrally and so more time and resources can be spent on making them really good. None of which suggests that you do not also have to add the human leadership of a good teacher into the mix as well.
As for the pragmatic argument, in which you advise me against alienating my audience – I am not, thankfully, a politician and do not need to worry about that sort of thing. I just speak the truth as I see it. I don’t believe I do alienate teachers (see two recent appreciative commenters on my piece on Tim Oates at https://edtechnow.net/2014/11/12/oates/). But the kinds of people that the W3C work really needs to attract are not teachers but software developers and publishers, who are the people that are going to implement these standards.
I think we disagree on a fair amount David – but I am glad that we agree at least on the importance of interoperability (and, along the way, that we agree on Sir Ken) – and I am always grateful for your comments as an opportunity to explain my position, I hope more clearly.
Crispin.
OK fair enough. I’m not really arguing, but we have a long way to go to really sort out a, b, and c in your third from end paragraph. There is a lot there that troubles me but I don’t have time at the moment to address those troubles; one for example is the very great ambiguity of the concept of a “digital resource”.
Therein lies the joust!
I realise though that none of this affects directly the discussion about the definition of interoperability standards and that project needs to proceed, although I would be concerned about the extent to which those standards might lock in certain choices about what format or structure a digital resource might take.
Some other time I’m afraid!
Incidentally we can certainly argue about whether or not the financial crash was in some degree caused by the growth of large, complex and automated software systems, but again some other time! If financial markets aren’t run by judgements based on data analysis then how are they run?
Cheers.
Thank you David,
I am all for a bit of jousting and wish people would do it more. Though it might be relevant to your interpretation of my response, above, that I was told that I was in a bad mood last night – something I hotly deny, of course.
Having considered your comments further, I might clarify the text about “the uncertain craft of the teacher” to insert something like, “often working in isolated classrooms and dependent on their own skills and resources”. The point was, to my mind, well made by L C Taylor, Director of Resources at the Nuffield Institute in 1971, who wrote: “Good or bad, each teacher copes as best he can. It is hard to think of any other trade in which such isolation persists”. (see my “Education’s coming revolution” at https://edtechnow.net/2013/01/26/industrial-revolution/).
I completely agree with you that we need to be very careful to avoid a reductive definition of a learning resource. I wrote on this very point in my “What do we mean by content?” at https://edtechnow.net/2012/04/03/what-do-we-mean-by-content/. We have tended to define learning resources reductively to apply to expositive text, graphics and video, rather than more interactive software. Apologies, by the way, for all these self-references – I don’t expect you to follow them all but they do try and tie together my argument that I have spread over three years of blogging.
Wait and see what we produce for the more precise scoping document for the W3C work, which I will post in a fortnight. What I will propose is a metadata stub paired with a data model extension language. In short, a digital resource can be anything – just say what it is and how it behaves, and declare the terms in which you are going to describe it in a consistent, machine-readable and as far as possible convergent manner. This is intended to square the circle between innovation and interoperability.
If you like what you see, you might like to join the group.
Hope this helps. Crispin.
I completely agree )))
ouf !
Frenchie Troller !!
Oui, chef !
Interesting article. It’s great that you’re recommending building on Tin Can with specifications covering complementary areas. Worth noting though that many of these areas already have small or large amounts of work completed or underway. It’d be worth reviewing these and considering joining those initiatives before starting anything new.
CMI5 (https://github.com/AICC/CMI-5_Spec_Current) in particular is worth looking at.
Hi Andrew, Thanks for your comment – it’s good to hear from you. I completely agree with the emphasis you lay on trying not to duplicate work.
I was involved in the early stages of CMI-5, though I note that the level of engagement in that project was limited and AICC has now closed. Which doesn’t mean that the specification was not worthwhile of course, and we will certainly look at it again. Thank you for the current link.
I am also in touch with other US communities – including those at ADL (in particular Jason Haag, who has updated me on their recent work) and the IEEE LTSC, where there is the possibility of a new initiative around competencies (what I call “learning objectives and curricula” in my paper above). I have also been closely involved in the LETSI initiative since 2008, chairing a couple of working groups there and tracking Mike Rustici’s original work on Runtime Web Services, on which the later TinCan work was based. I hope that many from these communities will join any new work in W3C, ensuring good levels of co-ordination.
It may be that work in W3C will have greater resonance in Europe than the US institutions have had, and if we can get complementary work streams going, it might well also be welcomed by ADL. Let us see how the pieces fit together.
Perhaps you’d like to come along yourself?
Yes, possibly 🙂
CMI5 has survived the demise of AICC and is now being looked after by ADL (at least nominally, I understand the same actual people, including ADL staff, former AICC members and others, continue to work on the specification). It can’t be denied that this has progressed slower than I’d hoped, but after two years rumour has it that a CMI5 beta release is coming “soon”.
I’ll look at it carefully before writing my next, more detailed scoping paper. Thanks.
How would you summarize CMI5? Tnx
Pierre
Hi Pierre,
I’m a big believer in not re-inventing the wheel, so rather than tell you how *I* would summarize CMI5, here’s how Art Werkenthin would: http://risc-inc.com/blog/the-next-generation-scorm-cmi-5/
Very good post on cmi 5, tnx for this link!
Pierre, Here’s my take on CMI5.
The CMI system, run by the Aviation Industry CBT Committee (AICC), is the grand-daddy of them all. It was born some time in the early 1990s, when it used text files for communication. Then in the late 90s, it got upgraded to use HTTP and was renamed HACP, while the CMI data model was adopted by ADL who put it into SCORM. SCORM provided quite a lot of extra functionality (Simple Sequencing etc), much of which only half-worked, and so people who used HACP say that it was simpler and more reliable: a bit like one of those trendy bikes with no gears. But all the money was on SCORM.
There was perhaps a little resentment inside AICC that their data model had been “borrowed” and maybe hi-jacked a little. Then LETSI came along in 2008 and pointed out that there was quite a lot of dissatisfaction with the CMI data model and everyone started compiling their wish-lists of what to do with SCORM.
So in 2011/12 (?), AICC decided to retake the initiative and bring out a new version of their system. The idea was to strip it down and make it really simple, leaving space for people to extend the data model if they wanted more complexity. It would also use their own course-structure metadata (i.e. what I would refer to as LAD + sequencing). Although I have never used it, I think it has the advantage of being simple and not depending on IMS Content Packaging. It uses the “Assignable Unit” rather than the “Shareable Content Object”.
I attended some of the early calls – but it struck me that there were a few problems with the CMI5 approach.
1. There wasn’t an extension mechanism. It is very easy to say “you can extend the data model if you want to” but not very useful if any extension is then not understandable by anyone else and interoperability is lost. Art Werkenthin’s article makes exactly this criticism of xAPI (but then, later in the article, lists it as an advantage). This is the problem that my proposal (to be described in my second paper – out soon!) is designed to solve.
2. I am not sure that people want a simpler data model. I think they want to support multi-player interactions and the return of creative product (e.g. essays etc) and that’s just for starters. I think the KISS acronym (Keep it Simple Stupid) is often misapplied. My mobile phone isn’t simple and that seems to be pretty good.
The third problem is that there wasn’t a lot of engagement with the CMI-5 group. That has been a problem for everyone, not just AICC. But AICC recently ended up closing and, although the CMI-5 work continues, the spec is still in draft and I have not read it yet.
Because of the success of xAPI, CMI-5 changed to using xAPI as its transport mechanism, rather than the AICC’s old HTTP methods. So my guess is that CMI-5 will turn out to be a profile of xAPI that is useful for running legacy CMI content with xAPI, using the old AICC Assignable Unit metadata. I know some people who have been involved in the work who say it has some good stuff in it and I look forward to reading it when it is published. But I’d be surprised if it is the way to go for people getting involved in this sort of technology for the first time.
If I got any of that wrong, I am sure someone will come in and correct me. Crispin.
Crispin,
I believe you have mischaracterized my criticism of xAPI flexibility. My point was that without CMI-5, there is no way to give “credit” in an LMS based on xAPI statements. What verb means “credit”? Completed? Experienced? Passed? CMI-5 defines specific verbs and extensions that must be used for interoperability of xAPI and an LMS. On the other hand, we don’t want to eliminate all flexibility, so other verbs and extensions may be used, they will simply be ignored for CMI-5 compliance. In other words, when you make a CMI-5 statement it must comply with the CMI-5 spec, but this does not prevent you from making other xAPI statements. I believe we have gotten the balance about right.
In addition, there is no reason another, more restrictive specification could not be developed on top of CMI-5, in the same way CMI-5 has been developed on top of xAPI.
Hi Art,
Thanks for the comment. I certainly did not mean to cause any offense, though I am a bit perplexed by your suggestion that I was misrepresenting you, because the “credit” issue is not mentioned in your article at all, and the passages that I refer to, such as the section under “xAPI is flexible…too flexible?”, are clearly a general discussion of the advantages and drawbacks of extensibility as a principle. And because I am basically agreeing with you that extensibility is a double-edged sword.
As I understand it, you are arguing that extensibility is good if it is managed by a community of practice that defines the extensions in a way which everybody can understand, but not if you just invent your own extensions that no-one else knows anything about. I take your point. But the fact that AICC, a well respected organisation to which I give the credit (above) for being “the grand-daddy of them all”, has nevertheless worked for 3 or 4 years to produce a profile and has had to close its doors before that process is finished just goes to illustrate that this “community of practice” way of doing things is not great. What I think is even more problematic is that the community of practice will always represent the way that most of its members already do things and so it will tend to take a traditional, and not an innovative, line.
On the particular issue of “credit”, I would say that the ADL-recommended verb “passed” was the equivalent of “been given credit”. You may well want to say something slightly different or give more information, like whether the student passed “with merit” or “with distinction”, at which point this becomes a sort of grade and raises questions about the difference between “credit” and “score”. But I have a couple of reservations about the requirement.
First, as I argue in my earlier comment, I think the way that xAPI has been sold as the standard that “Keeps It Simple Stupid” by reducing everything to easy-to-understand verbs is a red herring. If big data is going to work in education, then learning analytics software needs LOTS of data. When we look under the hood, I want it complicated, not simple. So I don’t get excited about TinCan verbs. I would stick with “did” as my verb and then put *all* my substantive information into the [result] object.
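To illustrate that preference (not a recommendation drawn from any published specification), a statement along those lines might use a deliberately generic verb and carry the substantive detail in the result object and its extensions; every verb, activity and extension IRI below is an invented placeholder.

```javascript
// Illustrative only: a generic verb, with the detail that analytics software
// actually needs carried in result extensions. All IRIs are invented placeholders.
const detailedStatement = {
  actor:  { mbox: "mailto:learner@example.org" },
  verb:   { id: "http://example.org/verbs/did", display: { "en-US": "did" } },
  object: { id: "http://example.org/activities/fractions-quiz" },
  result: {
    score: { raw: 7, min: 0, max: 10 },
    duration: "PT4M30S",
    extensions: {
      "http://example.org/datamodels/quiz-outcomes/v1/items-attempted": 10,
      "http://example.org/datamodels/quiz-outcomes/v1/hints-used": 2,
      "http://example.org/datamodels/quiz-outcomes/v1/misconceptions": ["adding-denominators"]
    }
  }
};
```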
Second, I don’t think the learning activity should be making the call on whether the student passed / got credit. One of the failings of SCORM (and I suspect AICC CMI) from the point of view of K-12 was the failure to achieve reusability of learning activities. In my view, a key reason for this was that it did not adequately support adaptability: no adaptability = no reusability. And the notion that the learning activity knows whether 7/10 is sufficient to pass (or have credit awarded) is an example of this. Credit against what? Against a course. But what if I want to use the same learning activity in a different course, or set a different pass mark, or look at a different aspect of the student’s performance in that same activity? The problem, from my perspective, is that SCORM and CMI both assume that the learning activity already knows about the context / course in which it is going to be deployed.
So does xAPI, because it has the learning activity reporting to the LMS about the context in which the activity was performed when, in a formal education environment, that is really information that the LMS should be passing to the learning activity. It ought to be the LMS or sequencing manager that is managing the course and its learning objectives and setting the parameters for the activity, not the Activity Provider.
What I am trying to argue here is *not* that the way you want to do this kind of thing is wrong (reusability may not be an issue for you) but just that it isn’t going to suit everyone. And that is why, I suggest, we need an extensions mechanism that allows for people to define the extensions they are going to use in a way that supports robust interoperability but which does not depend on a 4 year project to build consensus across a whole community.
I will be publishing the second paper in this series tomorrow – and it will be giving more details about how I propose that the W3C effort should do just this. It will be followed by a W3C call at 10 a.m. Eastern time on Friday – email me on crispin.weston@saltis.org if you would like to join. If you have particular requirements for the data models that you do not think are met by the current standards, then I hope that you will be interested in what I am proposing – and may be able to help by giving us some use cases on the basis of which my proposal for an XDMDL spec (eXtensible Data Modelling Declaration Language) could be prototyped.
And thanks again for coming on the blog to comment. Crispin.
Crispin, this is indeed an interesting post.
I’m very much in favor, of course, of the work you’d like to do on top of xAPI. There is an unbelievable amount of adoption happening below the surface at the moment. Where last year I was evangelizing in the hopes that adoption would hit… I’m not evangelizing much now… we’re past innovators and into early adopters, which is fun times, indeed.
You may not be aware of what’s happening with the spec, but we are forming a consortium to work on standardizing the spec and to promote the activities around the spec that support interoperability in multiple verticals. I’m not intending that we compete with you — rather, we should talk about how we can help you in W3C.
It’s early days yet, but we’re starting here: http://connectionsforum.com/
Also worth noting, we’re having a first xAPI event in Orlando at University of Central Florida on March 24, in case you happen to be in the States at that time. We’d be happy to have you.
Hi Aaron, many thanks for your support and it’s very good to hear about the traction you are achieving with xAPI in the US. I am very keen to develop awareness in Europe and the UK about the value of what has been done within the SCORM tradition. Especially within formal education, I would say there is very little awareness at the moment. If we can develop some momentum here (and I think W3C might offer a more familiar platform than ADL or IEEE on which to engage) then I certainly hope that we can contribute more than we have in the past to the development of common standards. As you say, it is not a matter of competition and I would be keen to cooperate very closely with our US friends.
Thank you for your invitation to your conference in Orlando. I would love to come but I think I need to get a slightly better feel for what sort of buy-in we are generating here before I could justify the cost. I will stay in touch over that one. In the meantime, thanks again for your support.
No worries about coming. I think you’d do well to talk with the folks at HT2, LEO, and Brightwave as well as City & Guilds about what adoption in the EU is looking like. At the January Learning Technologies conference, a few of the vendors held a fringe event with about 50 people showing up to talk xAPI. As an off-the-main-path event, getting 50 people in the UK talking xAPI in the same room is a pretty big deal.
As I wrote before, let me know how we can help. Don’t be afraid to ask, nor to be specific in such a request. 🙂
Thank you Aaron. Will do. Great to hear that you had such a good showing at Learning Technologies.
Aaron,
Can you say more about this connectionsforum? What are your objectives? Which verticals are you aiming at?
Tnx
Pierre
Certainly, Pierre!
xAPI is one data format that works across multiple systems, software and devices, across vendors, organizations and industries. It’s not completely ubiquitous yet, so Connections Forum aims to help people and organizations across different industries work with xAPI.
Some goals are core across every vertical. All organizations adopting xAPI want solutions that “just work.” Individuals using xAPI want best practices. Organizations need talented designers, developers and scientists who know how to work with xAPI.
So, one goal for Connections Forum is to illustrate what’s possible with xAPI. Education, certification of practices and products, and evolving standards and technologies are examples of such outputs.
To accomplish that feat, those of us in the xAPI community realize we must tap into something much bigger than anything we can do on our own.
Even though Adult Learning & Development is where xAPI has its strongest adoption, Connections Forum will gather different industries and practices. Each has its vendors, publishers, scientists, developers, designers and end-users. Each must focus on the ways to express, secure, exchange, and use data. We will collaborate on the evolution of xAPI, harnessing its utility to help organizations and individuals make better decisions today. Connections Forum will be the “go-to” resource linking xAPI to market needs around the world.
So far we are seeing two verticals adopting xAPI beyond the Adult Learning & Development space: Education (which can be broken down into P20 and Higher Education) and Medical Sciences (which can be broken down into Medical Education and Healthcare). Medbiquitous is an organization that’s beginning to codify the use of xAPI in Medical Education. There is an IEEE standards group focusing on Personal Medical Devices with interest in xAPI as an interoperable data format across devices.
One may argue that Medical Education, P-20 and Higher Education aren’t so different from Adult Learning in that they’re still about learning, but they are different enough that I’d see us having close relationships with those organizations who want to use xAPI, advise on what others are doing and promote interoperability and in-kind support so that not only do we have standards — we have a common spirit towards making sure people and organizations really can “plug and play” while fueling (and scaling) innovation across industries.
Does this help?
Many tnx for your effort. It’s still quite conceptual for me! Who are xAPI’s competitors? What type of technology language is it?
Pierre
Reblogged this on The Echo Chamber.
Great article, totally agree with the suggestion of “adopting the transport mechanism provided by TinCan”, and the other suggestions are very thoughtful. There are already some standard initiatives addressing education content metadata => LRMI, learning objective/competency => Achievement Standards Network, Open Badges, “Granular Identifiers and Metadata for the Common Core State Standards” or GIM-CCSS, and Linked Education (Linked Data), LinkedUp Project…, also Open Learning Analytics and Open Learner Model.
Jessie
Pingback: A response to a W3C priorities blog | Christian Bokhove
Good proposal, my comment(s) too long for here so blogged at http://bokhove.net/2015/02/28/a-response-to-a-w3c-priorities-blog/
Many thanks Christian – I have replied on your blog at http://bokhove.net/2015/02/28/a-response-to-a-w3c-priorities-blog/#comment-4759.