In the beginning was the conversation

The most fundamental of all pedagogical patterns is the conversation—and it is this paradigm that needs to inform the implementation of education technology.

Grab a cup of coffee and get comfortable! At 12,000 words this is the longest of my posts so far. But it also seems to me to be my most important, so I think it will be worth the read.

Through 2012, I have addressed what I see as deficiencies in many of the current ed-tech theories and processes. Last month, in Education’s coming revolution, I made the general argument that education technology provides the only plausible, long-term solution to endemic problems in our schools, introducing a systematic approach to education that contrasts with the model of teacher-as-craftsman.

This post describes what I think those systems will look like. They will be grounded in reputable educational theory, and in particular in the essential design paradigm for all learning: the conversation.

Conversation as the embodiment of reason

Having been brought up in the Church of England, I was for a long time both moved and confused by the ringing opening of St John’s Gospel:

In the beginning was the Word, and the Word was with God, and the Word was God.

“Word” often conjures up the idea of “command”. We may indicate our willingness to receive commands by inviting someone else just to “say the word”. This interpretation makes God the Creator sound rather like the Centurion who in Matthew Chapter 8 tells Jesus:

I myself am a man … with soldiers under me. I tell this one, ‘Go,’ and he goes; and that one, ‘Come,’ and he comes. I say to my servant, ‘Do this,’ and he does it.

This vision of an authoritarian God doubtless suited the desire of the Tudor church to impose state authority on an unruly people. But it never struck me as an appealing view of our supposed relationship with God—nor do I now believe that it was the intention of St John to suggest that it was. It was only later when I read Plato that I understood the significance of the use of “word”. It is a translation of the Greek “logos”, which is more frequently explained as meaning “a rational account of something”. John was not calling us to obey some great Centurion in the sky: he was saying that reason was absolute. You might illustrate what St John was claiming by a thought experiment. At the point of the Big Bang, when the whole of the cosmos was a singularity and there was not in existence more than one of anything, would it have still been true to say that “2 + 2 = 4”? St John says yes. He claims that existence is not haphazard and that neither humans nor anyone else makes up the rules as we go along. Show me the modern Physics Department that would not agree.

Education is also about reason—and is subject to the same questions as were raised by St John. Is reason absolute and, if so, how and with what degree of confidence can a sense of reason be established and transmitted?

Desiderius Erasmus, the sixteenth-century Dutch humanist theologian, played a part in the early stages of the European Reformation by challenging the authoritarianism of the traditional church. When in 1516 he came to harmonise the Greek and Latin translations of the New Testament, he rendered the first line of St John’s Gospel to read not “In the beginning was the word” but “In the beginning was the conversation”. Reason may be absolute but our perception of it is not. Authority is fallible and the only way in which we can attain reason is through dialogue, debate and the interpretation of evidence. And the notion of conversation is crucial not only to Enlightenment ideas about reason but also to learning.

Conversation as a means of understanding truth

John Stuart Mill was an English utilitarian philosopher, early feminist and inheritor of the liberal tradition that ran through the Reformation, the Enlightenment and English nineteenth century thought. In On Liberty[1], Mill argued that the transmission of factual knowledge as certain truth failed to develop the student’s ability to think for himself.

He who knows only his own side of the case, knows little of that. His reasons may be good, and no one may have been able to refute them. But if he is equally unable to refute the reasons on the opposite side; if he does not so much as know what they are, he has no ground for preferring either opinion. The rational position for him would be suspension of judgment, and unless he contents himself with that, he is either led by authority, or adopts, like the generality of the world, the side to which he feels most inclination. Nor is it enough that he should hear the arguments of adversaries from his own teachers, presented as they state them, and accompanied by what they offer as refutations. This is not the way to do justice to the arguments, or bring them into real contact with his own mind. He must be able to hear them from persons who actually believe them; who defend them in earnest, and do their very utmost for them.

Mill goes on to admit that there may be logistical difficulties in providing a succession of authentic sparring partners against whom the student can argue a variety of differing views. But the lack of debate in schools was due to more than logistics: it was also due to a mind-set among teachers that tended to represent beliefs as certain truths, which robbed the students of the habit of thinking for themselves. The solution, according to Mill, was that the teacher should constantly look for opportunities to act the role of devil’s advocate.

The loss of so important an aid to the intelligent and living apprehension of a truth, as is afforded by the necessity of explaining it to, or defending it against, opponents, though not sufficient to outweigh, is no trifling drawback from, the benefit of its universal recognition. Where this advantage can no longer be had, I confess I should like to see the teachers of mankind endeavouring to provide a substitute for it; some contrivance for making the difficulties of the question as present to the learner’s consciousness, as if they were pressed upon him by a dissentient champion, eager for his conversion.

It is not just that challenge and debate are nice things to have if you want your children to grow up with liberal values. Challenge and debate are essential to learning of any sort. It was precisely this attempt to bring to life the controversy that is inseparable from knowledge, and to induce students to process in their own minds the evidence for any particular position, that lay behind the Nuffield and SMP teaching programmes, discussed in my earlier post, Learning’s coming revolution.

R. D. Laing, the psychiatrist, makes the point more generally about the most fundamental form of learning, the development of consciousness itself:

All “identities” require an other: some other in and through a relationship with whom self-identity is actualised[2].

The nature of the “other” and the nature of the interactions may both differ: practical, emotional or intellectual—but the principle remains the same: that learning does not occur by absorption, but through interaction with either animate or inanimate “others”.

Jean Piaget’s model of the development of the child depended on the interaction between the child’s cognitive model and the outside world. Learning occurred only when the child’s model did not fit with the feedback from these interactions, leading to alterations in the model that might either be evolutionary (as when it is possible to assimilate a new perception that is basically consistent) or revolutionary (as when it is necessary to accommodate a new perception that is inconsistent with the existing model).

Seymour Papert

Seymour Papert worked with Piaget in Geneva in the early 1960s and then went on to become an advocate of the use of computer microworlds to provide learning environments that encourage children to take an exploratory approach to learning. The most famous of these was his LOGO programming language for floor-turtles. In his book Mindstorms[3], Papert observed that there were two aspects to the Piagetian model. The first is the importance of the learner engaging in constant interaction with his or her environment. The second is the idea that children are programmed to achieve certain types of learning at certain stages of their development. This second part of the theory has attracted extensive criticism, and Papert himself viewed it as having essentially negative connotations.

The Piaget of the stage theory is essentially conservative, almost reactionary, in emphasizing what children cannot do[4].

The conversational aspect of Piaget’s work was used by Papert to develop his theory of “constructionism”. This is not to be confused with constructivism, an essentially relativist theory which suggests that we construct our own knowledge. Papert was interested in discovering knowledge through the external construction of shareable digital artefacts, which occurred in parallel to the internal construction of a student’s own “intellectual structures”[5] (valid or invalid). The student worked in microworlds: artificial simulations or creative environments that illustrated the reality being studied, just as the actions of a programmable floor turtle can illustrate some of the basic rules of geometry. Instead of being told what works and what doesn’t work, the environment would allow children to find these things out for themselves. The act of discovery developed understanding in a way that the passive reception of knowledge cannot achieve. If, for example, you wanted to learn about Newtonian physics, the best way would be:

to work with the Newtonian laws of motion, to use them in a personal and playful fashion. But this is not so simple. One cannot do anything with Newton’s laws unless one has some way to grab hold of them and some familiar material to which they can be applied…[6]
When a car skids on an icy road it becomes a Newtonian object: it will, only too well, continue in its state of motion without outside help. But the driver is not in a state of mind to benefit from the learning experience. In the absence of direct and physical experiences of Newtonian motion, the schools are forced to give the student indirect and highly mathematical experiences of Newtonian objects[7].

Papert’s theory of microworlds urged the creation of simulated environments in which children could conduct the same sort of interactive, trial-and-error investigations that had enabled them, as infants, to learn so efficiently about their physical environments. In the case of Newtonian motion, a good place to start might be the 1970s Atari computer games, Asteroids, Gravitar and Lunar Lander, adapted perhaps to support complex objectives and an experimental environment.
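As a concrete illustration of the turtle microworld (my own sketch, not Papert’s code, with invented names, and graphics left aside), a LOGO-style floor turtle can be modelled in a few lines of Python. A child experimenting in such a world can discover for herself the rule that a regular polygon closes only when the turtle’s turns add up to a full 360-degree trip:

```python
import math

class Turtle:
    """A minimal LOGO-style floor turtle: a position plus a heading."""
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0                    # degrees; 0 points "east"

    def forward(self, distance):
        rad = math.radians(self.heading)
        self.x += distance * math.cos(rad)
        self.y += distance * math.sin(rad)

    def right(self, angle):
        self.heading = (self.heading - angle) % 360

def polygon(turtle, sides, length):
    # The rule a child can discover by experiment: the figure closes
    # only if the turns total 360 degrees ("Total Turtle Trip").
    for _ in range(sides):
        turtle.forward(length)
        turtle.right(360 / sides)

t = Turtle()
polygon(t, sides=5, length=100)
# The turtle has returned (within floating-point error) to its start.
print(abs(t.x) < 1e-9 and abs(t.y) < 1e-9)    # prints: True
```

Changing `sides` or the turn angle and watching whether the figure closes is precisely the kind of trial-and-error conversation with an environment that Papert had in mind.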

A basic conversational learning cycle

I am giving a broad interpretation to the idea of a “conversation”: one that includes not just verbal communication around a café table in fin de siècle Paris but any kind of interaction, including those with non-human environments. The essential characteristic of “conversation” understood in this broad sense is an iterative feedback loop. The conversationalist says something (or the actor performs an action) and R. D. Laing’s “other” responds. The conversation only becomes interesting from a learning point of view when the response is in some way unexpected, suggesting that some modification in behaviour or world-view might be required. Depending on the context, the mismatch between action and response might be labelled “disequilibrium”, “variation”, “disagreement” or “failure”.

Figure 1: the basic interaction with the physical environment

It is the mismatch between expected and actual response that provides the traction required to drive learning forwards. The student takes account of the response, modifies his behaviour and acts again, initiating another iteration of the feedback loop.

Figure 2: response to variation

We have all been in discussions where at least one party (generally the other guy) is either not listening or not modifying their actions to accommodate what they hear. In this case, the iterative feedback loop just spins round in the same conceptual space. The conversation “goes round in circles” with no progress being made. But if at least one of the conversationalists is listening and modifying their actions depending on the feedback received from the other, then each iteration of the cycle moves forward into new territory, and there is the possibility of progress and learning.

Figure 3: progress through iterative conversation

Simplified though it is, the spiral above illustrates two different processes:

  • variation (between expected and actual reactions);
  • progress towards a learning objective.

If the efficiency of the learning process were 100%, then these two processes might be seen as equivalent: the experience of falling off a bicycle would immediately show the learner how not to fall off a bicycle. As the learning process is rarely as efficient as this, it cannot be expected that the student will always achieve progress equivalent to the variation between the expected and actual feedback experienced. Nor will the extent of the variation always correspond to a lack of proficiency: a discussion might lead to a disagreement with a peer whose understanding is inferior to the main actor’s. Variation (or failure or disagreement) is a necessary but not sufficient precondition for learning to occur:

  • the variation must be of the right, helpful kind;
  • the significance of the variation has to be interpreted through a process of reflection on the part of the student.
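The feedback loop described above can be sketched in code. The following is a deliberately simplified illustration (the scenario, names and numbers are all invented): a learner repeatedly acts, observes the environment’s response, and uses the variation between expected and actual outcome to adjust an internal model:

```python
def learning_conversation(environment, target, steps=20):
    """Iterative feedback loop: act, observe the response, reflect, adjust."""
    correction = 0.0                      # the learner's internal model starts naive
    for _ in range(steps):
        action = target + correction      # act on the current model
        actual = environment(action)      # the "other" responds
        variation = target - actual       # mismatch between expected and actual
        correction += 0.5 * variation     # reflection: a partial, imperfect adjustment
    return correction

# A hypothetical inanimate "other": a crosswind that deflects every
# throw three units short of where it was aimed.
wind = lambda aimed: aimed - 3.0

learned = learning_conversation(wind, target=10.0)
print(round(learned, 4))                  # converges towards 3.0, the wind's true effect
```

The halving factor stands in for the imperfect efficiency discussed above: each turn of the loop closes only part of the gap, so learning takes many iterations rather than one.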

Figure 4: interaction moderated by reflection

It is through this internal cycle of reflection that cognitive development occurs. Again, I use the term “reflection” in a broad sense that does not necessarily imply a conscious process. Even though the training of “muscle memory” or simple rote learning may in some circumstances require such a conscious effort, in other circumstances this learning may be acquired almost automatically through repeated practice. Even advanced conceptual realignments seem often to occur through a delayed, sub-conscious reaction, perhaps during sleep or over the long summer break, in a process that has more in common with the slow maturation of a good claret than with a response to an immediate conscious imperative. Nevertheless, all these processes—the unconscious micro-adjustments of muscle memory, the conscious introspective pondering of a problem on the way home in the school bus, or the slow but fundamental realignment of a person’s world-view—can all come under the classification of internal reflection. And in all three cases, they need first to be initiated by the interactive conversation with R. D. Laing’s external “other”.

This distinction between a lesson taught and a lesson learnt adds a complexity that breaks the neat conversational spiral: the student’s reaction to adverse feedback is unpredictable. Even though our actions may not achieve the desired outcome, we will frequently proceed to perform the same action in the same way—or perform a new action that is even less successful than the first.

Figure 5: distinction between variation and progress

Failure may not only be difficult to interpret intellectually: it also faces us with an emotional challenge. It may provoke us to a loss of confidence, to aggression or even to despair. An intolerance of failure or an excessively combative approach to one’s interlocutors, in which each party seeks to defeat their opponent with clever debating points and refuses to admit the flaws in their own argument, are serious impediments to learning. This point is often made when people are advised to make failure their friend. Budding entrepreneurs are advised to:

Fail early, fail often, fail fast.

“Fail fast” means that you should recognise failure quickly, minimising any damage that you might suffer in the process; “fail often” means that you should not be discouraged, but should try again, fail again and learn again. The speed of iteration is an important characteristic of the learning conversation.

The debt you owe to those who may take the trouble to prove you wrong in argument is also pointed out by those who value debate. In the opening section of The Republic, Socrates pleads with his hot-headed interlocutor, Thrasymachus, to do him the favour of proving him wrong:

Prithee, friend, do not keep your knowledge to yourself; we are a large party; and any benefit which you confer upon us will be amply rewarded. For my own part I openly declare that I am not convinced…[but] perhaps we may be wrong; if so, you in your wisdom should convince us that we are mistaken[8].

Richard Dawkins makes a similar point about the need to embrace the opportunity to be proved wrong with an anecdote from his time as an undergraduate:

My belief in evolution is not fundamentalism and it is not faith, because I know what it would take to change my mind, and I would gladly do so if the necessary evidence were forthcoming.
It does happen. I have previously told the story of a respected elder statesman of the Zoology Department at Oxford when I was an undergraduate. For years he had passionately believed, and taught, that the Golgi Apparatus (a microscopic feature of the interior of cells) was not real: an artefact, an illusion. Every Monday afternoon it was the custom of the whole department to listen to a research talk by a visiting lecturer. One Monday, the visitor was an American cell biologist who presented completely convincing evidence that the Golgi Apparatus was real. At the end of the lecture, the old man strode to the front of the hall, shook the American by the hand and said—with passion—“My dear fellow, I wish to thank you. I have been wrong these fifteen years”. We clapped our hands red…[and] the memory of the incident … still brings a lump to my throat[9].

Neither Socrates nor Dawkins is a shrinking violet, and neither advocates excessive humility. It is just as bad to be gullible—to believe anything that you are told out of misplaced deference or a lack of confidence in your own ability to think for yourself—as it is to be arrogant and to fail to recognise the possibility that you might be wrong. As the learning occurs in the variation between action and response, it is just as important that the conversationalist fights his corner as it is that he listens and responds to the “other”. The virtuous middle way is a difficult one to achieve—and as far as I can see, very few people manage it, particularly in their interactions with other people. It is something similar to the virtue of a good sportsman who plays hard and plays to win, not out of a desire to glorify himself but out of a desire to glorify the game—and who congratulates his opponent warmly on his contribution to the same end. If these sound like old-fashioned values, then that perception might explain why so many political and academic debates today appear so sterile: plagued by self-regarding evangelists making unsubstantiated claims to expertise; distorted by the need to generate good PR and attract funding; constrained by the totems and taboos of fashionable orthodoxy.

When conversing with an environment rather than a human interlocutor, the variation between action and feedback is manifested not as disagreement but as failure. Papert argued that a key benefit of microworlds was in fostering a constructive attitude to failure:

Many children are held back in their learning because they have a model of learning in which you have either “got it” or “got it wrong”. But when you learn to program a computer you almost never get it right the first time. Learning to be a master programmer is learning to become highly skilled at isolating and correcting “bugs”, the parts that keep the program from working. The question to ask about the program is not whether it is right or wrong, but if it is fixable[10].

The enduring need for teachers

Despite the seductive appeal of Mindstorms, Papert’s microworlds are not a panacea. While the lessons of Piaget’s concrete operations are best learnt by interactions with a concrete environment (a child learns to avoid hot things by burning himself), the lessons of Piaget’s “formal operations” stage—i.e. abstract academic learning—are much more difficult to learn through such interactions. Papert’s theory of microworlds proposes that instructional designers can create artificial environments, specifically designed to provide concrete representations that enable the exploration of abstract principles. This is not always easy and progress on creating such microworlds has been slow. Professor Laurillard laments the fact that the opportunity to develop interactive learning environments has been “tragically underexploited”[11]. We may reasonably hope that the liberalisation of an education technology market, previously dominated by the prescriptive procurements of Becta[12], will help foster such innovation in the future. Nevertheless, it may not always be possible to create an experimental microworld to match every advanced conceptual domain. The more the student progresses towards abstract learning, the more it is likely that the conversation will occur with human teachers.

Figure 6: conventional conversation or teacher tutorial

To be fair to Papert, he does not see microworlds as replacing but rather as facilitating more conventional conversations with human teachers. Learning is limited not only by the lack of access to the right experimental environment, but also by the lack of the right sort of language with which to articulate and discuss abstract ideas.

I take from Jean Piaget a model of children as builders of their own intellectual structures…
All builders need materials to build with. Where I am at variance with Piaget is in the role I attribute to the surrounding cultures as a source of these materials…
Where Piaget would explain the slower development of a particular concept by its greater complexity or formality, I see the critical factor as the relative poverty of the culture in those materials that would make the concept simple and concrete[13].

One of the most important forms of material required to construct new intellectual structures is language. Microworlds create ways of visualising abstract problems; out of these visualisations spring concepts, concepts beget language, and language enables further conversations (this time between humans) about how to define and solve problems. Papert illustrates this point with a story.

The instructor and a child were on the floor watching a Turtle drawing what was meant to be a letter R, but the sloping stroke was misplaced. Where was the bug? As they puzzled together the child had a revelation: “Do you mean”, he said, “that you really don’t know how to fix it?”…
New situations that neither teacher nor learner has seen before come up frequently and so the teacher does not have to pretend not to know. Sharing the problem and the experience of solving it allows a child to learn from an adult not “by doing what teacher says” but “by doing what teacher does”[14].

This model of the conversation between an individual student and an individual teacher represents an ideal of the Socratic dialectic. The nineteenth-century US President James Garfield described his vision of an ideal learning environment as:

a log hut, with only a simple bench, Mark Hopkins on one end and I on the other[15].

Initiating conversational tutorials

President Garfield’s vision merely transfers the ideal of the Socratic dialogue out of the agora of ancient Athens and places it in a context that would have been more familiar to his electorate. But however attractive this ideal (at least for abstract learning), it is not a practical model for delivering universal education. The problem of scalability has to be addressed. Although the doctrine of the “flipped classroom” (as discussed in my previous post Education’s coming revolution) provides a model for blending teacher-led tuition with digital learning, it does not have much to say about the quality of the learning resources themselves, which are assumed to consist mainly of expositive video. In this scenario, the main learning still occurs in the conversation with the teacher and all the limitations of teacher conversations still apply.

Figure 7: model of flipped classroom based on expositive resources

What is needed are pedagogical designs which show ways of combining teacher-led conversations with other forms of conversation, involving digital or concrete environments, in ways that leverage the benefits of both, while increasing the productivity of the scarce expert teacher.

This is why Papert’s model could be regarded as “flipped classroom plus”. The activity generated by the microworld not only keeps the student productively engaged (the “babysitting” role that increases the scalability of the model)—it also throws up problems that allow the student to initiate conversational interactions with the teacher. In a flipped classroom that depends on the use during non-contact time of expositive resources rather than interactive environments, the student will more often be forced to wait passively for the teacher to introduce activities and initiate conversational interactions, resulting in a less dynamic relationship.

Professor Laurillard characterised the distinction between conversations with human and environmental “others” as conversations in which the actor receives either “intrinsic” or “extrinsic” feedback. Falling off a bicycle represents intrinsic feedback because the actor is coming up directly against the reality of what it takes to ride a bicycle. When the teacher grades an essay, then the student is coming up against the extrinsic feedback represented by the teacher’s judgement.

When the conversation occurs in the context of a digital activity, this distinction may become blurred. A multiple choice quiz will encode the judgements of the author of the quiz in a way that seems to parallel the extrinsic judgement of the graded essay. A flight simulator will appear to offer intrinsic feedback in the same way that the student experiences when flying a real airplane—but in reality the simulation encodes the judgements and expertise of the author in just the same way as occurs in the quiz. The main limitation to the exploitation of Papert’s theory of microworlds has not lain in any intrinsic shortcomings of the theory, but in the failure of software developers, acting as proxy teachers, to create the right microworld “content”[16].

While we normally use the term “simulation” to refer to digital environments that provide an illusion of 3D worlds or that mimic the multiple variables of a complex situation, all digital activities could be viewed in some respect as simulations. A multiple choice quiz simulates a Q&A session with a live questioner.
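To make the point concrete, here is a minimal sketch (the question, options and feedback are all invented for illustration) of how a multiple choice quiz encodes the author’s judgements in advance, so that what looks like a live conversational response is in fact pre-recorded expertise:

```python
# The author's judgements are encoded up front, just as a simulation's
# physics encodes its designer's expertise.
QUIZ = {
    "Which law says a body continues in uniform motion unless acted on?": {
        "options": {"a": "Newton's first law",
                    "b": "Newton's second law",
                    "c": "Hooke's law"},
        "answer": "a",
        "feedback": {
            "b": "The second law relates force to acceleration, F = ma.",
            "c": "Hooke's law concerns springs, not free motion.",
        },
    },
}

def respond(question, choice):
    """The quiz's side of the conversation: a pre-recorded judgement."""
    item = QUIZ[question]
    if choice == item["answer"]:
        return True, "Correct."
    # Distractor-specific feedback is the author's expertise, frozen in data.
    return False, item["feedback"].get(choice, "Not quite; try again.")

question = next(iter(QUIZ))
print(respond(question, "b"))
```

However rich the surface interaction, everything the student can “hear back” was placed there by the author, which is exactly the sense in which the feedback is extrinsic in disguise.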

B. F. Skinner and programmed learning

Papert’s brow would doubtless pucker at the thought of a turtle graphics programming environment being equated to a multiple choice quiz: from his perspective, these lie at opposite ends of the pedagogical spectrum. B. F. Skinner, the behaviourist psychologist, was a leading proponent of teaching machines. His research with animals (notably rats in mazes) associated series of positive and negative stimuli with learning objectives; by focusing exclusively on the observation of external behaviour and avoiding any speculation about the internal cognitive state of the subject (be it rat or human), Skinner developed an approach to teaching that has been regarded by many as mechanical. This five-minute 1954 film on teaching machines, available on YouTube, is worth watching.

In many ways, Papert and Skinner stand at opposite poles of the pedagogical debate. As Papert draws the distinction:

The phrase “computer-aided instruction” means making the computer teach the child. One might say the computer is being used to program the child. In my vision, the child programs the computer…[17]

There are obvious limitations to the primitive teaching machines proposed by Skinner. They are based on relatively low-level, fill-in-the-blank drills that would be unlikely to help the student flex his creative muscles or develop forms of higher-level understanding and skills. It is still worth making a few points in favour of Skinner’s teaching machines:

  • even repetitive drills might be appropriate when addressing simple knowledge (such as language vocabulary or times tables) which often constitutes important prerequisites for more advanced understanding and skills;
  • the very simple and monotonous activity delivered by Skinner’s 1950s machines could be made incomparably more sophisticated and entertaining by modern computers;
  • some features of the approach still appear to be important (the careful sequencing of prerequisites, the ability to progress at an individual pace, the freedom from public humiliation when you get something wrong).

The most important point, however, is that although Skinner’s teaching machines seem to take a very different view of learning from Papert’s, they conform to a very similar kind of conversational cycle. Unlike the Computer Based Training systems of the 1990s, they do not use multiple choice to test the student’s ability to memorise large quantities of information, but present that information through activity that follows the same cycle of performance-feedback-reflection as does a learner operating in an exploratory environment.

I am not seeking to hold the ring between Skinner’s instructionism and Papert’s constructionism: I suspect that the ideal course is likely to include some combination of the two. My purpose is to point out that whether we are talking about a repetitive drill, a creative exercise in a microworld, a peer-to-peer discussion or a teacher-led tutorial, the child is involved in an iterative conversation, continually adjusting his performance to accommodate different types of feedback.

The difference between “formative” assessment, designed principally to teach, and “diagnostic” or “summative” assessment, designed principally to assess, lies in whether or not the assessment provides feedback and an opportunity to try again. Both are characteristics of the conversation—the fundamental pedagogical design pattern.
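The distinction can be made concrete in a small sketch (the item and answer formats are invented): a summative routine delivers a verdict once, while a formative routine returns feedback and invites another attempt, keeping the conversational loop open:

```python
def summative(items, answers):
    """Summative: judge once, report a score, end the conversation."""
    return sum(1 for item, answer in zip(items, answers)
               if answer == item["key"])

def formative(item, attempts):
    """Formative: each wrong attempt earns feedback and a fresh try,
    keeping the performance-feedback-reflection loop open."""
    for attempt_number, answer in enumerate(attempts, start=1):
        if answer == item["key"]:
            return {"mastered": True, "attempts": attempt_number}
    return {"mastered": False, "attempts": len(attempts)}

item = {"prompt": "7 x 8?", "key": "56"}      # an invented example item
print(summative([item], ["54"]))              # prints: 0 -- a verdict, nothing more
print(formative(item, ["54", "63", "56"]))    # mastery on the third attempt
```

The same item bank serves both purposes; what differs is whether the response closes the conversation or continues it.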

The inscrutability of learning environments

While the digital simulation is likely to be able to offer a richer set of kinaesthetic paradigms and visual stimuli than the human teacher, it will not generally be as adaptable as the human teacher in responding to the actions of the student or understanding the student’s objectives.

A major part of Joseph Conrad’s fascination with the sea lay in its indifference. In his short story Youth, the narrator tells of his do-or-die effort to sail a leaky ship from London to Bangkok. The whole adventure occurs between a sea and a sky that vary between serene beauty and ferocious violence, but which at all times are supremely indifferent to the desperate struggles of the human protagonists. In the second half of the story, the ship develops a slow fire in the cargo hold:

The sky was a miracle of purity, a miracle of azure. The sea was polished, was blue, was pellucid, was sparkling like a precious stone, extending on all sides, all round to the horizon—as if the whole terrestrial globe had been one jewel, one colossal sapphire, a single gem fashioned into a planet. And on the lustre of the great calm waters the Judea glided imperceptibly, enveloped in languid and unclean vapours, in a lazy cloud that drifted to leeward, light and slow: a pestiferous cloud defiling the splendour of sea and sky[18].

Environments (both concrete and digital) may inspire, may challenge, may provide ideal playgrounds to enable learning—but when things go wrong or when difficulties become insurmountable, the environment does not lift a finger to help.

The inscrutability of learning environments may in some circumstances be seen as an advantage. Kurt Hahn was a German-Jewish schoolteacher who in 1933 fled to Britain, where he founded and led Gordonstoun School (later attended by Charles, Prince of Wales), the Outward Bound movement and the Duke of Edinburgh scheme. Hahn believed in the importance of mountain expeditions as a means of self-discovery. The conversation with the mountain is not complicated by personal likes and dislikes, rivalry or shame: failure is easier to handle when no-one is watching. But the main benefit of the mountain was that it was unforgiving, forcing students to dig deep within their own resources and discover their true potential.

If students are to be given an opportunity to exercise their intellectual muscles, the teacher must not help out too soon. In a 2007 talk to the Association for Learning Technology, Professor Dylan Wiliam of the Institute of Education suggested that this is what happens too often[19]:

Somebody once joked that schools are places where kids go to watch teachers work. And certainly with the intensification of test results, I see teachers working very hard—if the teachers are going home more tired than the kids at the end of the day, the wrong people are doing the work[20].

Professor Laurillard also points to the danger of a “coach” intervening too often in practice environments and giving students too much help:

There are two problems with this. One is pedagogical: the coach must be careful to “fade” their support and ensure the student does the work of interpreting and analysing how to improve their performance, e.g. by asking them to look at the difference between their output and a model answer.

Just as the role of a fitness trainer is to ensure that it is their client that does the jogging and lifts the weights, so the teacher undermines the benefits of the learning environment if they fail to allow the student to do the intellectual heavy-lifting themselves. At the same time, the indifference of the inanimate environment (physical or digital) may sometimes overwhelm the student, crushing and not stimulating aspiration—and the teacher should certainly have intervened before that point is reached.

The second of Professor Laurillard’s two problems is logistical:

in the context of a mass education system, few students can hope to receive the detailed and timely feedback they need while working on tasks[21].

This is the argument that I addressed in my first post. While inanimate environments may be provided that are always there, teachers aren’t. The greater their expertise, the scarcer human teachers are and the more difficult they may be to access. It would be interesting to track a student through a school day and record the time that he or she spends in substantive, two-way conversation about their learning with teachers. I would guess that for most students, the average time engaged in such conversations would be measured in no more than a few seconds per day.

It is true that in theory, conversations with teachers can be formalised and conducted through the submission and marking of written work—but in this case, the quality of the marking that is performed as a repetitive chore may be unreliable, the speed at which teacher feedback is delivered is generally too slow to be of much use, and it is rare that the exchange continues beyond a single iteration, missing the opportunity or incentive for the student to consider the teacher’s response, moderate his or her performance, and try again.

Papert’s sense that a bug was just something to be fixed can be applied just as much to an essay as it can to a computer program—but without iterative feedback, opportunities to fix problems are lost. The trouble occurs when the pressure of the curriculum meets the scarcity of expert human teachers. As already discussed, this represents the key, endemic problem with universal education in the modern world.

Composite conversations

Instead of formalising systems that slow down the conversational feedback loop, we need systems that speed it up, supplementing the scarce resource represented by the teacher with appropriate intermediaries. The diagrams below suggest three sorts of intermediate interlocutor:

Figure 8: composite conversation with a physical intermediary

  • a concrete practice environment (represented by the bicycle);

Figure 9: composite conversation with a peer as intermediary

  • another student, who can work with the first student in a collaborative or competitive group or as a peer mentor;

Figure 10: composite conversation with a digital activity as intermediary

  • a digital learning activity (such as a game, simulation, or creative tool).

These are three examples of a kind of “composite conversation” in which the teacher retains ultimate control and direction but in which the student is engaged in an intermediate dialogue, in which new ideas and new types of behaviour can be tried out. The intermediate interlocutor provides responses that are rapid, iterative, non-judgmental, always available and cheap, while the second interlocutor (the teacher) makes less frequent interventions that provide motivation and remediation; prompt more efficient reflection; validate progress; respond to student actions that are outside the capacity of the “first responder” to field; help interpret practical successes, articulating them as abstract principles; and (as described by Papert’s anecdote about the problem with the buggy “R”) provide examples of action and attitude for the student to imitate.

The combination of the two interlocutors involved in a composite activity is not just about achieving logistical efficiencies: it is also about achieving productive synergies between the two. The intermediate interlocutor (peer, environmental or digital) allows the student to practice his or her skills and test new approaches to a problem with a degree of freedom from teacher supervision, while the teacher can still intervene to provide direction where it is required.

The need for data

The vital link between the two interlocutors in these types of “composite conversation” lies in the ability of the teacher to monitor what is happening in the intermediate interactions. This monitoring could occur in real time: the teacher could watch a group of students practicing how to ride their bicycles, offering encouragement, criticism, or individual tuition when he or she spots someone with a problem. Similarly, the teacher could circulate around a classroom in which pairs of students were practicing conversation in a foreign language, making similar interventions. In the first case, a single teacher can monitor a large number of students visually, while in the second case (where the teacher has to pay attention to the words being spoken) only one or two conversations can be monitored at once. As with Papert’s microworlds, as soon as the subject matter moves from concrete to formal/abstract operations, teaching becomes more dependent on direct teacher involvement and is less easy to scale.

This is why the use of digital media to report digital outcomes becomes critical to the ability of human teachers to monitor performance.

Figure 11: using aggregated data to monitor the intermediary conversation

In this diagram, the student interacts iteratively with an intermediate interlocutor, represented by a shooting target. Between each interaction, the student reflects privately on how to improve. The teacher, unable to monitor the student at all times, looks to some sort of aggregated data to enable effective monitoring. This provides the basis for teacher feedback on the student’s general performance in the activity, which in turn seeds a further, more formal cycle of reflection by the student about how the learning outcomes relate to the ultimate learning objective, a process of reflection in which the teacher (or other actors such as peers and parents) may participate. The benefits of this type of deeper, more formal reflection, have been well rehearsed by the proponents of e-portfolios as a pedagogical tool.

I should stress that I use the term “data” in its broadest sense to include, not only performance metrics (such as scores) but also digital artefacts that the student has created and qualitative comments. It is likely that aggregated data records will include all of these types of data, allowing the monitoring teacher to analyse, corroborate and drill down as required.
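To make this broad notion of “data” concrete, an aggregated outcome record might look something like the following Python sketch, combining performance metrics, references to digital artefacts and qualitative comments in one structure. All field names here are illustrative assumptions, not a proposed standard:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class OutcomeRecord:
    """One aggregated record of a student's performance in an activity.

    Combines the three kinds of "data" discussed above: quantitative
    metrics (such as scores), digital artefacts created by the student,
    and qualitative comments.
    """
    student_id: str
    activity_id: str
    objective_id: str
    metrics: Dict[str, float] = field(default_factory=dict)   # e.g. scores
    artefacts: List[str] = field(default_factory=list)        # URIs of student work
    comments: List[str] = field(default_factory=list)         # qualitative notes

# Hypothetical example: a history timeline activity.
record = OutcomeRecord(
    student_id="s042",
    activity_id="timeline-editor",
    objective_id="chronology",
    metrics={"accuracy": 0.85, "attempts": 3},
    artefacts=["https://example.org/artefacts/s042/timeline.json"],
    comments=["Confuses decade and century boundaries."],
)
```

A record of this shape would let the monitoring teacher corroborate a score against the artefact and the comments, and drill down as required.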

The intelligent LMS

The keystone of this process lies in the ability of different software to:

  • aggregate data produced by individual interactions (represented in the diagram by individual shots on the target) into a form that provides useful insight into the general outcomes of the student’s activity;
  • sequence different types of learning activity.

The first of these functions belongs to the software responsible for managing the learning activity (I shall call this software type the Learning Activity Platform, or LAP). The second function belongs to what might be called either a Learning Management System (LMS) or an Intelligent Tutoring System (ITS). Objections can be raised to both acronyms.

My objections to “LMS” are that:

  • it is an acronym that has been widely used in the past as a synonym for Virtual Learning Environment (VLE): software that typically does very much less than what I shall describe in this article;
  • like VLEs, LMSs have tended to develop into unwieldy one-stop-learning-shops, when in practice the management functions that they provide are more likely to be provided by a collection of specialist, interconnected components.

My objections to “ITS” are that:

  • to my mind the acronym implies a complete automation of learning processes, when what I am proposing should be seen as a system that provides assistance to the human tutor (as power steering provides assistance to a driver);
  • the function of “tutor” encapsulates both the management of learning processes and the delivery of specific learning activities—rather than a set of components that focus exclusively on learning management.

Both acronyms tend to suggest monolithic applications, rather than open ecosystems that encourage the technical innovation that education so desperately needs. On the whole, I feel that the problems with the term ITS are intrinsic, while the problems with LMS have arisen by association. On these grounds, I shall continue to use the term “LMS”, albeit with the caveat that the LMS as I am describing is:

  • likely to offer a different set of functionalities to those offered by current LMSs;
  • likely to comprise multiple applications, working in an integrated environment.

I hesitate to start inventing new terms in an area which is already over-burdened with overlapping and poorly defined acronyms—but if it is desired to draw the distinction between the LMS as described in this post, and the type of LMS that is commonly found in the market, one might borrow from “ITS” and talk of the “intelligent LMS”, to distinguish it from the “legacy LMS”.

The key point about the intelligent LMS is that it provides a generic infrastructure to many different kinds of learning activity. The LMS is not a learning delivery platform: as soon as it starts to deliver learning activities directly, the customer is getting locked into what is almost certainly going to be an inferior one-stop-shop. You should no more buy an LMS that encapsulates different learning activities than you would buy a hi-fi system that comes with its own integral music collection. These are infrastructures, not libraries.

As many different pieces of software will be required to deliver specialised types of digital activity, so it is these Learning Activity Platforms (LAPs) that will be responsible for aggregating intra-activity data: one LAP might be a timeline editor for use in history and this will produce digital timelines; another LAP will be a Newtonian physics simulator and will produce data that reports the achievements and infers the likely stage of cognitive development of a student who has used it to solve a particular problem; a third may follow a multi-student conversation, tracking the contributions made by different members of the group to the final conclusion. All of the data formats used to summarise these different types of learning outcome will be specific to the particular kind of activity delivered by each of the different LAPs.
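A minimal sketch of this arrangement: two hypothetical LAPs, each aggregating its own raw events into an activity-specific summary, behind a single reporting interface that an LMS could consume. The class names, event types and summary fields are all invented for illustration:

```python
from abc import ABC, abstractmethod

class LearningActivityPlatform(ABC):
    """Common reporting interface: each LAP aggregates its own raw events."""

    @abstractmethod
    def summarise(self, raw_events: list) -> dict:
        """Aggregate intra-activity events into an outcome summary."""

class TimelineEditorLAP(LearningActivityPlatform):
    def summarise(self, raw_events):
        # Summary fields are specific to timeline-building activities.
        placed = [e for e in raw_events if e["type"] == "place_event"]
        correct = [e for e in placed if e["correct"]]
        return {"activity": "timeline",
                "events_placed": len(placed),
                "accuracy": len(correct) / len(placed) if placed else 0.0}

class PhysicsSimulatorLAP(LearningActivityPlatform):
    def summarise(self, raw_events):
        # A simulator reports quite different outcomes from a timeline editor.
        solved = any(e["type"] == "problem_solved" for e in raw_events)
        hints = sum(1 for e in raw_events if e["type"] == "hint")
        return {"activity": "physics-sim", "solved": solved, "hints_used": hints}

timeline_summary = TimelineEditorLAP().summarise([
    {"type": "place_event", "correct": True},
    {"type": "place_event", "correct": False},
])
physics_summary = PhysicsSimulatorLAP().summarise([
    {"type": "hint"},
    {"type": "problem_solved"},
])
```

The point of the shared interface is that the LMS need not understand the internals of each activity: it receives whatever summary the LAP deems meaningful.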

Three-dimensional data

The role of the LMS is to handle aggregation at a more generic, inter-activity level. It will need to do this in multiple dimensions, as illustrated by the following succession of “Rubric’s Cubes”.

The basic cube shows a three-dimensional organisational space, defined by students, activities, and learning objectives. I shall discuss learning objectives in more detail in a future post: for the present, I shall only say that a learning objective is generally a complex, composite thing. You might consider “Performing long division” to be a learning objective; but this could be broken down into subordinate competencies, such as “Performing addition”, “Performing subtraction”, “Working with base 10 numbers”, and so on. The competency “Performing long division” is also likely to be sub-divided into different levels of proficiency, depending on the speed of the student’s working, its accuracy, or the complexity of the numbers being operated on. So on my Rubric’s Cube, each learning objective might be understood as the organisational principle behind a particular course or unit of instruction—but not as a single, atomic competency.
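The prerequisite structure of such a composite objective can be sketched as a dependency graph, from which a teachable ordering falls out mechanically. This sketch assumes the graph is acyclic, and the objective names are illustrative:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical prerequisite map: each objective lists the objectives
# that must be mastered before it can be attempted.
prerequisites = {
    "long-division": {"subtraction", "base-10-numbers"},
    "subtraction": {"addition"},
    "addition": set(),
    "base-10-numbers": set(),
}

# static_order() yields objectives with all prerequisites appearing first,
# giving one valid teaching sequence for the composite objective.
teaching_order = list(TopologicalSorter(prerequisites).static_order())
```

Any real curriculum structure would be far richer than a bare dependency graph (it would need proficiency levels, for one), but the mechanical recoverability of a valid sequence is the point.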

The following figures propose some provisional terms to describe various facets of this three-dimensional space.

Figure 13: the learning experience

A single cell in the cube, at the intersection between a single student, a single activity and a single learning objective, I propose to call a “learning experience”.

Figure 14: the learning episode

I distinguish “learning experience” from “learning episode” (a term used in the London Knowledge Lab’s recent report, Decoding Learning) on the basis that a “learning episode” can involve a number of students working in a collaborative or competitive group, while a “learning experience” involves a single learner.

Figure 15: the learning pathway

A row of cells in which a single student targets a single set of learning objectives by tackling a range of different activities I propose to call a learning pathway—this being the equivalent of the spiral diagram used earlier. It could also be taken to represent the data that is shown in a single row in a traditional markbook.

Figure 16: the task transcript

A column of cells in which a single activity targeted on a single objective is tackled by a number of different students could be called a task transcript—representing the data that would be shown in a single column in a traditional markbook. While the definition of a single learning activity might prove useful in relation to many different curriculum objectives, I use the term “task” for a learning activity that is assigned in the context of a particular learning objective.

Figure 17: the course transcript

A slice of the cube representing the intersection of multiple activities with multiple students with a single set of learning objectives could be seen as a course transcript—representing the information that would be found on a single page of a traditional markbook.

Figure 18: the student transcript

A horizontal slice of the cube representing the intersection of multiple activities and multiple sets of learning objectives with a single student could be called a student transcript, in which is recorded all the learning records of that student.

Figure 19: content paradata

A vertical slice representing all the usage data about a particular activity, which may have been performed by multiple students in pursuit of multiple different sets of objectives, might be called content paradata. Paradata is a term that has become popular over the last couple of years, particularly in the US, to describe metadata that has been generated in the course of use, as opposed to metadata that was created at the same time that the content object was authored. Content paradata may be of use to the author in monitoring usage, allowing optimisation of the Learning Activity Platform (LAP) or Learning Activity Definition (LAD) (terms which I shall explore in more detail in a future post).

The purpose of exploring the three dimensions of this Rubric’s Cube is to make the point that the data represented by each learning experience, which already represents aggregated data derived from numerous student interactions, can itself be aggregated in each of the cube’s three dimensions.

Figure 20: the aggregation of data in three dimensions

Data can be aggregated:

  • against individual students in order to profile proficiency and preferences;
  • against individual activities to profile the efficacy of learning content;
  • against individual sets of learning objectives to profile the efficacy of course pedagogy.
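A toy sketch of this three-way aggregation, treating each learning experience as a (student, activity, objective, score) tuple and averaging a single metric along each dimension of the cube. Real aggregation would of course operate on far richer records than a single score:

```python
from collections import defaultdict

def aggregate(records, dimension):
    """Average the score along one dimension of the cube.

    `records` are (student, activity, objective, score) tuples;
    `dimension` is 0, 1 or 2 for student, activity or objective.
    """
    sums = defaultdict(lambda: [0.0, 0])
    for rec in records:
        key = rec[dimension]
        sums[key][0] += rec[3]
        sums[key][1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

# Invented learning-experience records for illustration.
records = [
    ("ann", "quiz", "fractions", 0.6),
    ("ann", "game", "fractions", 0.8),
    ("ben", "quiz", "fractions", 0.4),
]

by_student = aggregate(records, 0)    # profiles student proficiency
by_activity = aggregate(records, 1)   # profiles content efficacy
by_objective = aggregate(records, 2)  # profiles course pedagogy
```

The same underlying records feed all three profiles: that is what makes the cube a useful way of picturing the data.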

These three different dimensions of data aggregation, all of which occur above the level of the learning activity, all have a role to play in informing a student’s progression from one learning activity to another. So far, this article has focused on the conversation within the context of a single learning activity. But the iterative cycle of action and reaction that constitutes a learner’s pathway towards a learning objective, illustrated previously by a spiral of actions and reactions, is in practice likely to continue through a succession of different learning activities.

Figure 21: progressing through multiple activities

Managing progression

A key function of the learning manager (teacher and/or LMS) is to control the progression of the student from activity to activity. Indeed, the sequencing of learning activities is one of the most fundamental functions of the teacher using experiential learning pedagogies.

There are three fundamental considerations to be taken into account when deciding “what to do next”, corresponding to the three dimensions of my Rubric’s Cube:

  • the internal structure of the knowledge to be attained;
  • the prior cognitive “state” of the student;
  • the effectiveness of different types of learning activity.

Managing progression by analysing the learning objective

I stated in my introduction to the Rubric’s Cube diagrams that learning objectives tended to come in complex families. In order to teach long division, the Maths teacher would want to make sure that the student had previously mastered addition and subtraction (in that order).

It has often been said that the best way to make sure you understand something is to teach it. The act of teaching or explaining forces you to analyse carefully the sequence of prerequisites that need to be mastered before a particular concept can be understood. This process helps clarify in your own mind the structure of the knowledge being explained.

A priori analysis of the structure of a learning objective may enhance the teacher’s understanding but is not necessarily reliable. It represents only a supposition about the best way to present a series of interrelated concepts. It is a thesis that cannot be regarded as secure until it has been shown by empirical evidence to work in practice.

The ability to aggregate data in the yellow dimension on my Rubric’s Cube (above) will provide the evidence to show how different sequences of learning activities compare, on average, in improving the proficiency of a cohort of students.

Managing progression by analysing the prior proficiency of the student

The second criterion for managing progression references the student’s conceptual framework, preferences, and proficiencies. With regard to proficiency, the teacher’s objective should be to choose the right learning activity to ensure that the student is operating within his “zone of proximal development”. This is a bit like ensuring that the carrot is being dangled in exactly the right place, 12 inches in front of the donkey’s nose. Too far forwards and the student will not be able to figure out how to master the challenge they are being given; too far back and they will not be presented with any challenge and no learning can be expected to occur.
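A crude sketch of what “dangling the carrot in the right place” might mean computationally: prefer the easiest activity that still sits a little above the student’s current proficiency. The stretch parameter and the 0-to-1 difficulty scale are invented for illustration:

```python
def next_activity(proficiency, activities, stretch=0.1):
    """Pick the activity whose difficulty sits just above current proficiency.

    A deliberately naive stand-in for the "zone of proximal development":
    among activities at least `stretch` above the student's level, choose
    the easiest; if none qualifies, fall back to the hardest available.
    """
    candidates = [a for a in activities if a["difficulty"] >= proficiency + stretch]
    if not candidates:
        return max(activities, key=lambda a: a["difficulty"])
    return min(candidates, key=lambda a: a["difficulty"])

# Hypothetical activity pool.
activities = [
    {"name": "drill-easy", "difficulty": 0.3},
    {"name": "drill-medium", "difficulty": 0.55},
    {"name": "open-problem", "difficulty": 0.9},
]

chosen = next_activity(0.4, activities)  # the easy drill offers no challenge
```

Too far forwards (open-problem) and the student cannot master the challenge; too far back (drill-easy) and no learning can be expected: the function lands on the medium drill.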

If the instructional processes are 100% efficient and the instructional designer’s analysis of how the learning objectives are structured is a good one, then it might be expected that a well-designed sequence of activities will always work. Each hoop that the student is required to jump through will be at the right height because it is just a little higher than the last. This was the theory behind B. F. Skinner’s teaching machines:

Each student follows a carefully constructed programme leading from the initial stage where he is wholly unfamiliar with the subject to a final stage where he is competent. He does this by taking a large number of very small steps, arranged in a coherent order. Each step is so small that he is almost certain to take it correctly.

While such carefully devised teaching programmes may minimise the opportunity for failure, in practice, we may still reserve some scepticism about the ability to construct such efficient programmes, particularly when the nature of the learning objective requires larger or more skills-based leaps (such as riding a bicycle). In practice, learning processes are generally messy and students cannot be expected to learn exactly what the instructional designer means them to learn. In a presentation given to the Association for Learning Technology, Professor Dylan Wiliam makes the point on the basis of research into learning rates in current schools:

We tested some kids over a five year period, asking them basic mental arithmetic tasks—they could actually make some notes—what is 860 plus 570? At age six and a half, fifteen percent of the kids can do it. At age eleven and a half, about ninety percent of the kids can do it. And I think most people would be surprised by how flat that line is. That every year, only about fifteen percent of the kids are getting this—fifteen percent! So in a class of thirty, six kids are getting it this year—one every two months…[22]

The low “hit rate” that one should assume is likely to accompany any teaching process emphasises the importance of what Dylan Wiliam calls “the pedagogy of contingency”:

Why pedagogies of contingency? Well as I said earlier, it’s because learning is unpredictable…And that’s why formative assessment is so important. It’s because we can’t predict the learning, therefore we have to monitor the whole quality of the learning constantly while it’s taking place. Now that’s [not?] just my opinion, but the research says it’s actually the most effective improvement you could make to teaching…[23]
…I think what’s interesting from the point of view of learning technology, is that most of the learning technologies have actually got stuck in the feedback only move. And that’s why the effect is disappointing. You get something like twice the effect when you actually find ways to do activities that close the gap[24].

Professor Wiliam is making two points.

  • First, the teacher needs constantly to be monitoring what students are actually learning rather than merely assuming that they are learning what they are being taught. This could be summarised in teacher-speak as “formative assessment”, in business-speak as “quality control” or, in ed-tech-speak, as “harvesting outcome data”.
  • Second, the best way of responding to evidence of a student’s misconception is probably not to persist in giving too many corrections, hints or tips—all of which disregard Professor Laurillard’s advice that coaches should “fade” the level of support to the student—but to move the student forwards so that they can realise the mistake that they have made through an alternative but more appropriately targeted activity.

Just as I have distinguished between intra-activity and inter-activity data aggregation, so the learning conversation occurs both within a particular learning activity and between different learning activities. The first case has already been described. In the second case (which according to Professor Wiliam, represents the more effective strategy), the “feedback” comprises not a correction but a progression pathway.

Professor Wiliam’s model is not so far from Professor Skinner’s model as might at first sight be supposed. Although Wiliam would argue that each learning step may not be as reliable as Skinner expects, and although a greater degree of “contingency” must therefore be allowed to cope with the unexpected, both believe that the intelligent sequencing of instructional material is fundamental. The realisation that the ideal sequence is unlikely to be pre-scripted, but in most circumstances will need to be altered on the fly, is reflected in the current interest within the ed-tech world in adaptive systems. McGraw Hill, for example, has recently launched its Smartbook, described as:

the first and only adaptive reading experience for the higher education market.

The analogy of the hi-fi system with its own embedded music collection needs to be made again in relation to this sort of technology. It is not enough to have an adaptive learning activity—the process of adaptive sequencing needs to be applied to all learning activities; and for this reason, the intelligent LMS or progression manager must be decoupled from the learning content or learning activity platform.

The discussion of progression management may seem to be based on the assumption that learning can be seen as an essentially linear process: that it is like getting a dog to jump through a succession of hoops or a donkey to walk to its water trough. In some situations, these linear analogies may work; but in more creative or expressive situations, learning might be more usefully compared to the exploration of a multi-dimensional space and the development of the ability to operate independently in such a space. What is more, everyone is likely to come to the learning process from different starting points.

In such circumstances, arguments about the efficiency with which progress can be made between two predetermined points need to be supplemented by mechanisms that respond to student preference and choice.

This is not to say that some management of activity will not generally be required. At the extreme, progression that is dictated by student preference is easy to manage: you just hand all the decisions over to the student and there is no need for either teacher or LMS to manage progression. This approach corresponds to an ultra-liberal doctrine, pioneered by educationalists like A. S. Neill at Summerhill School, and in the education technology sphere, by advocates of informal, playful learning like Professor Stephen Heppell. So far, these methods have not appeared to produce any very impressive results, as measured by traditional criteria or when compared to the more traditional approaches being applied in Asian countries. In most formal learning situations, student preferences will need to be considered in combination with the learning objectives specified by the curriculum, the student’s proficiency, and pedagogies whose effectiveness have been proven by empirical evidence.

Managing progression on the basis of the effectiveness of different learning activities

The third criterion for progression management should be fairly obvious. Some learning activities will be more effective than others and just as you would expect a careful doctor to prescribe a drug that had been shown in clinical trials to be more effective over a drug that had been shown to be less effective, so you would expect a learning manager (teacher or LMS) to recommend an activity that had been shown to be more effective at inducing learning than one that had been shown to be less effective.

An equally obvious caveat is that the effectiveness of different learning activities will depend on the student to whom they are assigned and the purpose for which they are being used. Because the third criterion only makes sense relative to the first two, it acts as a kind of bridging criterion. It shows how the three dimensions of the Rubric’s Cube all come together in informing the progression decisions that will be made by an intelligent LMS.
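The point can be sketched with an invented effectiveness table keyed not by activity alone but by activity, objective and prior proficiency band. The figures are made up purely to illustrate that the same objective can yield different recommendations for different students:

```python
# Hypothetical table: (activity, objective, proficiency_band) -> mean
# observed learning gain. In a real system these figures would come from
# empirical trials, not be hard-coded.
effectiveness = {
    ("simulation", "forces", "novice"): 0.12,
    ("simulation", "forces", "advanced"): 0.31,
    ("worked-examples", "forces", "novice"): 0.27,
    ("worked-examples", "forces", "advanced"): 0.09,
}

def recommend(objective, band):
    """Return the activity with the highest observed gain in this context."""
    scoped = {activity: gain
              for (activity, obj, b), gain in effectiveness.items()
              if obj == objective and b == band}
    return max(scoped, key=scoped.get)

novice_choice = recommend("forces", "novice")
advanced_choice = recommend("forces", "advanced")
```

On this invented data, novices are steered towards worked examples while advanced students get the simulation: effectiveness only exists relative to the student and the objective.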

The startling fact is that the kind of quantitative evidence of effectiveness that is required to make this kind of decision, and on which medical practice is so fundamentally based, is almost entirely missing from education. At the end of Education’s coming revolution, I quoted Chris Wormald, the Permanent Secretary at the DfE, telling a House of Commons Select Committee that:

The evidence base in education—not just in the department but more generally across the UK—is not as good as it should be. It is not as good as the evidence that is available to my colleagues in the Department for Health, for example[25].

For all the billions of pounds, dollars and euros that have been spent, in comparison with other sectors, the application of serious technology to education has barely even started.

Not only are large quantities of empirical data necessary to validate learning pathways based on the a priori analysis of the structure of a learning objective; they may even provide a sufficient basis on which to propose an optimal pathway through a succession of different activities, even without an a priori analytical framework. It is increasingly common that doctors prescribe medication, not on the basis that they understand how it works, but on the basis that the empirical evidence shows that there is a statistical likelihood that it will work. On this same basis, intelligent LMSs using modern approaches to artificial intelligence and having access to large amounts of semantically meaningful data about previous student performance should be able to make secure recommendations about preferred learning pathways. And remember that, according to Professor Wiliam’s review of the research, in the scales of pedagogical effectiveness, good decisions about progression are twice as weighty as direct teacher attempts to correct student performance.

Even if data-driven systems are able to make sound recommendations regarding student progression between activities, the human teacher remains critical to the teaching process. The fact that intelligent LMSs need to blend machine learning with human tutoring provides the third main function of the LMS in this architecture. As well as aggregating data and providing a first-responder recommendation for student progression, the LMS should provide a human-readable dashboard from which teachers, their line managers, students and their parents can all monitor and understand the progress the student is making towards their learning objectives. The more data that future learning management systems have to handle, the more important will be the visualisation of this data in easy-to-understand forms that provide the basis for human decisions.

A technical architecture for technology-assisted, conversational learning

We can now complete the succession of diagrams showing the nature of the conversational feedback loop in the instructional process. At the centre of the final diagram is the intelligent LMS, as described above.

Figure 22: an architecture for technology-assisted, conversational learning

The student engages in a primary “conversation” in the context of a digital, physical or verbal activity. This involves a series of interactions involving a cycle of performance, feedback and reflection.

In the case of a digital (or digitally moderated) activity, the Learning Activity Platform (LAP) reports summary outcome data to a Learning Management System (LMS), which responds to such reports with three primary actions:

  • it aggregates the data in order to create durable profile information about the student, the activity, and the pedagogy being used to meet the current learning objective;
  • it presents the data to the teaching team in a form that can be readily understood and acted upon, allowing the team easily to monitor the student’s performance and learning;
  • it manages the progression of the student by assigning or recommending further activities, or introducing new stages of the current activity, assigning the student as required to new ad-hoc groups of fellow students.
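These three responses might be sketched, in highly simplified form, as follows; every class, method and field name here is hypothetical, invented for illustration rather than drawn from any real LMS:

```python
class IntelligentLMS:
    """Toy sketch of the three primary responses to a LAP outcome report."""

    def __init__(self):
        self.profiles = {}  # durable, aggregated data, keyed by student

    def receive_report(self, report):
        self.aggregate(report)
        summary = self.present(report)
        next_step = self.progress(report)
        return summary, next_step

    def aggregate(self, report):
        # 1. Fold the outcome into the student's durable profile.
        scores = self.profiles.setdefault(report["student"], [])
        scores.append(report["score"])

    def present(self, report):
        # 2. Produce a human-readable line for the teaching dashboard.
        scores = self.profiles[report["student"]]
        mean = sum(scores) / len(scores)
        return f"{report['student']}: latest {report['score']}, mean {mean:.1f}"

    def progress(self, report):
        # 3. First-responder recommendation; the teaching team may override.
        return "remedial_activity" if report["score"] < 50 else "next_activity"

lms = IntelligentLMS()
print(lms.receive_report({"student": "alice", "score": 42}))
```

The `progress` rule here is a deliberately crude threshold; in the architecture proposed, this is exactly the point at which the empirical pathway data discussed above would be consulted.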

The teaching team can respond to the information being provided by the LMS:

  • by controlling the LMS management functionality (for example, by manually assigning work, overriding LMS default behaviour, or inputting assessment data based on personal observation);
  • by interacting with the student directly, initiating a conventional tutorial conversation or adjusting its approach to classroom teaching.

In addition to what I have described above as its three primary functions, the intelligent LMS should take two secondary actions.

  • The reliability of statistical analysis will depend on its ability to predict future student performance with statistically significant degrees of accuracy. This process is shown in the diagram by an elliptical predict-validate loop, by which the LMS will constantly check the predictive reliability of its profiles, the reliability of the LAPs and other sources of raw performance data, and the reliability of the computer models by which the raw data are aggregated.
  • The intelligent LMS, having aggregated and validated data that is used in the day-to-day management of teaching transactions, will also be responsible for vertical reporting to government, to the developers of software, and to academia. These vertical data flows will provide the basis on which to improve the general administration of education, the production of improved software, and the understanding of pedagogy. The circulation of data outside the original learning institution obviously has important implications for the privacy of teachers and students—and these are issues that I shall address in future posts.
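The predict-validate loop can itself be sketched as a rolling comparison of predicted against observed performance. The error metric and tolerance below are illustrative assumptions, not a proposal for how reliability should actually be scored:

```python
def validate_predictions(predicted, observed, tolerance=10.0):
    """Compare the LMS's predicted scores against actual outcomes and
    report whether the underlying profile model remains within tolerance."""
    errors = [abs(p - o) for p, o in zip(predicted, observed)]
    mean_error = sum(errors) / len(errors)
    # If predictive reliability has drifted, the profile, the upstream
    # LAP data, or the aggregation model should be flagged for review.
    return {"mean_error": mean_error, "reliable": mean_error <= tolerance}

result = validate_predictions(predicted=[70, 55, 80], observed=[65, 60, 78])
print(result)  # {'mean_error': 4.0, 'reliable': True}
```

Run continuously, a check of this kind gives the LMS a measure of its own trustworthiness before its recommendations are acted upon.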

Some might recoil in horror from a picture of a teaching process mediated in this way by machines. To them, I would repeat the caveat, made earlier, that the management of progression will not always be linear. It is always possible to give the student more freedom to choose their own pathways, but such freedom will normally need to be constrained to some extent. The management of progression (and of the freedom given to a student operating within any particular activity) is about determining the boundaries within which the student is to be constrained, and the extent to which the learning objectives by which the student is to be animated are specified in advance. These are the aspects of formal education that require management. Experience will establish the correct balance, according to circumstance, between constraint and freedom; then we will know how broad our roads and how large our enclosures should be. But only the most radical of deschoolers would argue for an instructional landscape with no paths and no fences at all. The argument made by such people (who appear to exist in surprisingly large numbers on the social networks) is not, in all honesty, against the use of systematic, machine-mediated pedagogies: it is against having any type of school at all.

It is also worth repeating that in comparison to traditional classroom teaching, machine-managed instruction is likely to lead to a greater degree of personalisation and a greater amount of one-on-one interaction with teachers.

Others might question the capacity of computers to make useful decisions based on data, with which teachers often seem to have an uncomfortable relationship. The point is hard to prove empirically so long as our current education system is so poorly furnished with data that monitors learning processes. Until such empirical proof is available, I would make the following points.

  • Learning is inherently susceptible to measurement and, whether the opponents of data-driven systems like it or not, we already spend a lot of time measuring its outcomes. The problem has not lain in our ability to discriminate between processes in which the expectations of participants are achieved and those in which they are not; but in what we can do to improve the processes that fail.
  • The model does not propose that the LMS should take over control of the student’s learning—but rather that it should assist the monitoring of student performance, making a “first responder” attempt to guide student activity along the most productive pathways, and launching (in response to either automatic or manual assignment) appropriate digital learning activities.
  • In these circumstances, even if the machine gets it wrong (as it undoubtedly will from time to time), the fact that any attempt at all is being made at personalisation is likely to represent a significant improvement on the status quo. Even in the case of error, the fact that the LMS might, for example, suggest the need for remedial intervention can only help focus the minds of the teaching team on addressing the problem and arriving at a better diagnosis. It is, moreover, a commonplace of the modern operating theatre or airplane flight deck that human control is mediated by machines. Human operators monitor information presented to them by machines and may use machines to automate routine functions—but at all times it is the humans that retain ultimate control.
  • If machines might be fallible, so are human teachers. The ability of the LMS to monitor the effectiveness of human teachers, their assessments and their interventions will also allow for more effective management processes across the institution.
  • As argued in my previous post, Education’s coming revolution, the creation of learning activities, courses and monitoring systems that are available across the institution will open the way for team (rather than individual) teaching, ending the harmful isolation of the single classroom teacher.
  • The process of automatically collecting and aggregating data will in itself provide the evidence (so signally lacking at the moment) for which combinations of pedagogy work best, allowing these to be duplicated across the education system. It will also provide evidence of the effectiveness of the general approach that I am advocating.
  • While the general model assumes that the teacher is in overall control of students’ learning, it does not prescribe the nature of individual learning activities. There is no reason why outcome data should not be harvested from activities grounded in a wide variety of pedagogies: creative, social and exploratory as well as more formal types of assessment.

Conclusion

In my previous post, Education’s coming revolution, I argued that the fundamental problem with education was the endemic shortage of suitably qualified teachers and that the only plausible answer to this long-term problem lay in a more systematic approach to the management of schools, particularly from the point of view of the pedagogical processes that represent their core business.

In this blog, I have outlined what I believe to be the fundamental patterns that ought to characterise that more systematic management process. The most fundamental design pattern of all, the feedback loop, I have characterised as the “learning conversation”.

In reaching this conclusion, I am treading in the footprints of a long succession of educationalists before me, from Socrates, through John Stuart Mill, to Skinner, Piaget, Papert and contemporary academics like Wiliam and Laurillard. I agree with the conclusion reached by Professor Laurillard in her recent Teaching as a Design Science, a book that I have referenced frequently in my recent posts, in which she discusses the hitherto unrealised potential of education technology.

The promise of learning technologies is that they appear to provide what the theorists are calling for. Because they are interactive, communicative, user-controlled technologies, they fit well with the requirement for social-constructivist, active learning.

Yet in spite of this potential:

the empirical work on what is actually happening in education now that technology is widespread has shown that the reality falls far short of the promise.

The lack of evidence of effectiveness has often been excused by those whom Professor Laurillard calls “technology opportunists”[26], who use technology as a platform from which to attack the whole notion of formal learning. They see no problem in the lack of formal evidence of effectiveness because they do not believe that education should be set the predetermined goals against which effectiveness can be measured. In taking this line, they turn their backs on any approach that tries to determine which pedagogies work and in what circumstances.

Professor Laurillard seeks to reassert a technocratic rather than ideological approach to education technology by focusing on what she calls the “conversational framework”, a communication cycle that represents:

the teacher’s role in aligning goals, monitoring conception, and fostering conceptual knowledge.

While Professor Laurillard’s book investigates the nature of that cycle at a pedagogical level, my purpose in this post has been different.

On the basis of Professor Laurillard’s parallel proposal that teaching is a kind of engineering, capable of producing design patterns that can be abstracted and generalised, I have argued that these pedagogical principles can be encapsulated in formal software systems capable of being replicated across the education system. At the architectural level, these systems will prescribe neither pedagogy nor curriculum. They can be adapted to the needs of particular students, particular courses, and to the emergence of innovative new software components. While Professor Laurillard explores some of the pedagogical principles that might underlie such future innovation, my purpose has been to outline the nature of the software infrastructure that will be required to accommodate it.

Conversation is required, not only at the pedagogical level but also at the technical level. The intermediation of machines in the conversational framework means that many different types of software must work together. In my discussion of a new generation of intelligent LMSs, I have predicted that these management systems are likely to comprise many different components, such as learning record stores, student record systems, progression management systems, and learning analytics systems. Even if all of these functions were provided by a single piece of software, it would still be necessary to connect that system to many different types of learning activity platform—the software driving the many different kinds of instructional activity that will be required to support learning across the curriculum.
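As an illustration of the kind of record such components might exchange, here is a minimal statement in the style of the Experience API (xAPI, also known as “Tin Can”), one emerging specification for reporting learning activity between platforms and record stores. The student and activity identifiers are invented for the example:

```python
import json

# An xAPI-style "actor / verb / object / result" statement: one atom
# of the outcome data a learning activity platform might report to a
# learning record store.
statement = {
    "actor": {"mbox": "mailto:student@example.org"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed"},
    "object": {"id": "http://example.org/activities/fractions-drill-1"},
    "result": {"score": {"scaled": 0.85}, "success": True},
}

print(json.dumps(statement, indent=2))
```

The significance of such a format is not its syntax but its neutrality: any compliant activity platform can report to any compliant record store without bespoke integration.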

At a systems level, the life-blood of the educational conversation is data. Just like blood, if data is to nourish a dynamic, living system, then it must circulate. The key barrier to the realisation of the integrated systems and processes described in this post is the persistent lack of data interoperability—and this is the subject to which I shall turn in future posts.


Previous related posts

The forging of the rings in Industrial Britain, from the opening ceremony London Olympics, July 26, 2012 - Source: Quinn Rooney/Getty Images Europe
Education’s coming revolution makes the case for education technology, arguing that it offers the only plausible solution to the endemic shortage of teachers in universal education systems. The piece makes extensive reference to Kim Taylor’s 1970 book, Resources for Learning.
What do we mean by “content”? analyses the use of this poorly-defined term, which often excuses low-grade, non-interactive, information-bearing resources. The post makes the argument that there are many types of content and the sort we really need bears not information but activity.
Home page, with a full listing of posts on this blog.

Notes

[1] Mill, J.S., On Liberty, 1860, http://www.constitution.org/jsm/liberty.htm.

[2] Laing, R.D., Self and Others, 1976, Tavistock Publications.

[3] Papert, S., Mindstorms, First published 1980, New York. Second edition, Basic Books, 1993.

[4] Ibid., p.157.

[5] Ibid., p.7.

[6] Ibid., p.121.

[7] Papert, S., op. cit., pp.123-4.

[8] Plato, The Republic, Book 1.

[10] Papert, S., op. cit., p.23.

[11] Laurillard, D., op. cit., p.176.

[12] See my earlier post, Stop the IMLS Framework

[13] Papert, S., op. cit., p.5.

[14] Ibid., p.115.

[16] If you are puzzled by the use of the term “content” in this context, see my previous post, What do we mean by “content”?

[17] Papert, S., op. cit., p.5.

[18] Conrad, J.,

[19] Professor Dylan Wiliam, Assessment, learning and technology: prospects at the periphery of control, keynote speech by Dylan Wiliam, Deputy Director of the Institute of Education, at the 2007 Association for Learning Technology Conference at http://www.alt.ac.uk/docs/altc2007_dylan_wiliam_keynote_transcript.pdf.

[20] Wiliam, D., op. cit., p.5.

[21] Laurillard, D., op. cit., p.73.

[22] Wiliam, D., op. cit., p.4.

[23] Ibid., p.6.

[24] Ibid., p.6.

[25] Evidence given 23 January 2013 by Chris Wormald, Permanent Secretary at the DfE, to the Parliamentary Select Committee for Education.

[26] Laurillard, D., op. cit., p.4.
