The renewed interest in the curriculum is welcome – but our public discourse is still confused
Everyone is talking about the curriculum again. Tim Oates has been arguing the importance of the curriculum for some time and, having chaired the Expert Panel on the Curriculum in 2011, helped provide the justification for the review of the National Curriculum in 2014. But now it is being said that the 2014 Curriculum Review did not finish the job. Although I agree, I think the current discourse is still horribly confused. My main argument is contained in my Why Curriculum Matters. In this three-part follow-up series, I shall look at the positions taken by Amanda Spielman, the Chief Inspector of Schools, and John Blake, Head of Education at Policy Exchange, before sketching out my policy recommendations for a new government focus on curriculum.
Amanda Spielman’s argument
Amanda Spielman, the Chief Inspector of Schools, made a series of high-profile statements about the curriculum in the autumn of last year, including, for example, her speech at the ARK academy trust conference. But her main statement was her HMCI commentary.
Spielman argues (along with much of the teaching profession) that the curriculum has been narrowed as teachers have focused on teaching to the test, driven by measures of school accountability that do not serve the interests of children. She continues that the problem is exacerbated by “a lack of clarity around the language of the curriculum”, which has led to “competing notions of what curriculum means across the sector”. The end result has been “a weak theoretical understanding of curriculum” in schools.
Spielman argues that teachers should be given the freedom to place more emphasis on the curriculum, which (if you understand the term properly) encapsulates “the substance of education”. They should not be forced to teach to the test, which “can only ever sample the knowledge that has been gained” and does not test “the whole domain that is of matter to the pupil”. They should stop being pressured into “mistaking ‘badges and stickers’ for learning and substance”. She ends by calling on teachers to “help by building curriculum expertise within your school and MATs” – reversing the previous decline.
My critique of Spielman’s position
Spielman’s definition of “curriculum”
I agree with much that Spielman has said. I agree that the lack of a common definition of “curriculum” is crippling. When it comes to her specific argument about definitions, I also agree with her that “the timetable…is not the curriculum”. I make the same point at slide 38 of Why curriculum matters.
When it comes to a more explicit definition of curriculum, Spielman is not quite so clear – but it appears that her working definition is the same as mine: curriculum is about what we choose to teach: “at the very heart of education sits the vast accumulated wealth of human knowledge and what we choose to impart to the next generation: the curriculum”. Throughout her article, Spielman uses the term consistently with this definition.
This itself is highly significant, given that Spielman’s understanding of “curriculum” is consistent neither with the definitions given by the 2011 Expert Panel on the curriculum (i.e. Tim Oates and Dylan Wiliam), nor with the definition given by Sean Harford, her own deputy, whose “working definition” of “curriculum” is given at slide 10 of this deck, presented at the Festival of Education at Wellington College in 2017.
Figure 1: Sean Harford’s “working definition” of “curriculum” (slide 10 of his Festival of Education deck).
It is worth noting that Harford refers to this as a “working definition”. This phrase acknowledges the lack of a generally recognised definition and suggests that this definition has something temporary about it. Harford’s temporary definition is, broadly speaking, the same as the Expert Panel’s: it defines the curriculum as a framework that subsumes many different things: intent (what we want to teach), implementation (how we choose to teach it), and evaluation (our assessment of how successful we were). As argued in my Why Curriculum Matters, such a broad and multi-faceted definition does not help with our task of systems design, where different elements of the system need to be clearly differentiated.
So, although Spielman uses the term “curriculum” consistently in her HMCI Commentary, it is still worth saying that she has not explicitly recognised that she is defining “curriculum” in a different way to the Expert Panel and the rest of Ofsted. And indeed, her own use of the term is not entirely consistent in other places: in her speech to the ARK conference, given a month after she wrote her HMCI Commentary, Spielman adopts Sean Harford’s terminology and seems to imply that curriculum is about implementation as well as intention: “in academies that use their freedom over the curriculum, we have been looking at how the curriculum intention – what a school wants children to learn – is being translated into practice”. So, despite what I regard as the helpful way in which Spielman talks about the curriculum in general, I think that she still has some loose ends to tidy up.
I have two much more important criticisms of Spielman’s position.
The seat of curriculum expertise
First, the whole thrust of Spielman’s call to action revolves around the need to develop more curriculum expertise in schools, and for schools to develop more clarity about their curriculum intentions. I see five objections to this position.
- It is not clear to me why thousands of different schools should devise different curriculum intentions. Even if there are arguments for some variation in our curriculum across the system (for children of different abilities, different aspirations, or different cultural backgrounds), I cannot think of any good reason why this variation would be so great, or so consistent within a single geographical location, that it makes sense to address it at school level. Children in Truro do not need to learn a different sort of Maths from children in Hull.
- The divergence of objectives between schools, even if there were some justification for it, would make it more difficult to create pedagogical resources centrally, leading to a perpetuation of our current, highly decentralized, highly inefficient model of resource creation. It would also create increased logistical difficulties as children and staff moved between schools.
- It is not clear to me that schools have the resources or expertise to tackle this very challenging task, or that delegating this very significant responsibility to them can do anything but enormously exacerbate the current workload crisis.
- Alongside the question of resources and expertise is the question of legitimacy: although the providers of education may have a role in proposing educational objectives, particularly at a low level, ultimately (as argued at greater length in Why Curriculum Matters) it is not for the providers of a service to define the purpose of that service. If the definition of educational objectives is to be delegated to schools, there is much that then needs to be said about how schools should be held accountable for those decisions to the rest of the community, which is the ultimate source of legitimacy for decisions regarding educational purpose.
- I think that it is wrong to imply that the lack of curriculum expertise is a problem that is confined to schools. As I have argued in my Why Curriculum Matters, the problem is not that teachers have neglected their study of what the experts are saying; the problem is that the experts themselves are confused. If the experts cannot sort out what curriculum means, there is not a cat-in-hell’s chance that thousands of isolated schools will be able to succeed.
The relationship between curriculum and assessment
My second significant disagreement with Amanda Spielman is with her assertion (which I admit that she shares with pretty much the whole teaching profession) that testing is part of the problem and not part of the solution. Spielman’s analysis repeats the widespread observation that “teaching to the test” has narrowed the curriculum, that too many schools have “prioritised testing over the curriculum”, and that there has been an undesirable “merging of the concepts of testing and the curriculum”. This seems to be the main respect in which Spielman thinks that curriculum expertise needs to be improved.
Part of Spielman’s argument here is drawn from the work of Daisy Christodoulou, which is in turn based on Daniel Koretz’s Measuring Up. This draws a distinction between the domain (i.e. everything that the curriculum intends should be taught) and the sample that is drawn from the domain, i.e. those specific matters that come up in the test. “We must not forget”, Spielman writes in her HMCI commentary, “that any test can only ever sample the knowledge that has been gained. It is the whole domain that is of matter to the pupil”.
There are two fundamental problems with this argument.
The relationship between sample and domain
First, it is not helpful to say that the sample is different from the domain without at the same time acknowledging that the whole point of a sample is that it should represent the domain that it is sampling. There are numerous measures taken by academic researchers, political pollsters and market researchers to ensure that the samples they use are representative. They do not always succeed, of course (as recent election polls show), but the reaction to such failures is to ask what went wrong, not to say “oh well, we never expected our poll to be accurate because the sample and the domain are different things”.
The measures that are taken to ensure that the sample is a fair representation of the domain are, broadly speaking, threefold:
- the sample must be sufficiently large to minimize the distorting effect of outliers;
- the selection of the sample must to some extent be randomized and unpredictable…
- …but within constraints that ensure that the sample nevertheless represents a relatively even coverage of the entire domain.
In the case of political polling, this third measure might involve checks to ensure a reasonably even geographical distribution and an even distribution of socio-economic background. In the case of the selection of items for an examination, it might mean that you can be sure that most topic areas in the curriculum will be represented in the exam in some form, along with a variety of different item types, while not being quite sure of the exact questions that would be asked.
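To make the parallel with exam construction concrete, the sketch below applies those three measures to the selection of items for a test. Everything in it (the item bank, the topic names, the draw_exam function) is hypothetical and purely illustrative; it is not drawn from Spielman’s commentary, from Koretz, or from any real exam board’s practice.

```python
import random

# Hypothetical item bank: the "domain" is everything the curriculum says
# should be assessable, here tagged by topic area. All names and numbers
# are illustrative, not taken from any real exam.
ITEM_BANK = {
    "fractions":  [f"fractions-Q{i}" for i in range(1, 41)],
    "geometry":   [f"geometry-Q{i}" for i in range(1, 41)],
    "statistics": [f"statistics-Q{i}" for i in range(1, 41)],
    "algebra":    [f"algebra-Q{i}" for i in range(1, 41)],
}

def draw_exam(item_bank, items_per_topic, seed=None):
    """Draw a stratified random sample of items from the domain:
    - size: items_per_topic * number of topics, large enough to dilute
      the effect of any one unusual item;
    - randomisation: items within each topic are chosen unpredictably;
    - even coverage: every topic contributes the same number of items,
      so no part of the domain can safely be ignored.
    """
    rng = random.Random(seed)
    exam = []
    for items in item_bank.values():
        exam.extend(rng.sample(items, items_per_topic))
    rng.shuffle(exam)
    return exam

print(draw_exam(ITEM_BANK, items_per_topic=3))
```

The stratification is the third measure in action: a teacher preparing students for an exam drawn this way cannot safely skip any topic, because every topic is guaranteed to appear, while the particular questions remain unpredictable.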
In the case of an academic curriculum, the concept of a domain refers not only to the range of information that the student is required to memorize but also to the sorts of task that the student is required to complete and the sorts of question that they are able to answer. The need to randomize question forms has not necessarily been satisfied in recent years. Randomising the form of questions makes the exam more unpredictable, more difficult and, some people might argue, less fair. As soon as an exam starts to vary the form of the questions being asked, it becomes more likely that some students will not understand what they are required to do, possibly with disastrous consequences for them individually. Many people might argue that the student’s inability to respond to unfamiliar forms of question represents a failure of clarity on the part of the examiner. One certain result of such an unpredictable format is a wider spread of results: where an unfamiliar question form is used, some students might get lucky by having come across such a question form before, while others, less lucky in their preparation, might be completely flummoxed. This creates more “noise” – more unexpected outcomes – in the results data; and where the success or failure of a student depends heavily on the outcome of a single examination, such “noise” results in unfair grades.
The attempt to build reliability into summative assessment by standardising those assessments has created assessments that are predictable and, by virtue of their predictability, poor representations of the overall domain. The reason why students can be coached in the taking of the test, rather than taught the content of the whole domain, is that the format of the exam is predictable.
There are two reasons why we have tried to increase the reliability of summative assessments by increasing their predictability:
- until recently, we have not had the analytical tools to assess the significance of data after its collection by testing the extent to which it is corroborated by other data;
- even with the benefit of data analytics (and the ability to deploy them in education), our summative assessments are so short (i.e. the sample of items is so small) that it is not possible to deploy those tools, or to increase reliability in the way that researchers and pollsters normally do, which is by increasing the sample size.
The problem is not testing per se, nor “teaching to the test” per se – the problem is our use of predictable forms of standardised testing, in which we create formulaic tests to compensate for our use of inherently unreliable, single-shot assessments, with very small sample sizes.
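To illustrate the sample-size half of this argument, here is a toy simulation. It assumes a deliberately crude model (a student answers each randomly drawn item correctly with a fixed probability representing their mastery of the domain) and models no real examination, but it shows how noisy a short, single-shot test is compared with a much larger sample of evidence.

```python
import random
import statistics

def simulate_scores(true_ability, n_items, n_sittings, seed=0):
    """Simulate repeated sittings of an n_items-question test for one student.

    Deliberately crude model: the student answers each randomly sampled
    item correctly with probability equal to their 'true' mastery of the
    domain. The spread of percentage scores across sittings is the noise
    that a single short exam cannot escape.
    """
    rng = random.Random(seed)
    scores = []
    for _ in range(n_sittings):
        correct = sum(rng.random() < true_ability for _ in range(n_items))
        scores.append(100 * correct / n_items)
    return scores

# The same student (65% mastery of the domain), tests of increasing length.
for n_items in (10, 40, 200, 1000):
    scores = simulate_scores(true_ability=0.65, n_items=n_items, n_sittings=2000)
    print(f"{n_items:4d} items: mean {statistics.mean(scores):5.1f}%, "
          f"spread (sd) {statistics.stdev(scores):4.1f} percentage points")
```

In this simulation the spread (standard deviation) of the same student’s score is roughly fifteen percentage points on a ten-item test, but under two percentage points once a thousand items of evidence are available, which is the sense in which increasing the sample size, rather than increasing predictability, is the normal route to reliability.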
This is why I have argued in Why curriculum matters that we need to take a new approach to assessment, using digital technology to track almost everything that students do, merging formative and summative assessment, increasing our sample size, coverage and the unpredictability of our assessments. Once we have done that, we will find that our sample of assessed questions will correspond closely to the whole curriculum domain and “teaching to the test” will no longer be a problem.
Defining the domain
The second problem with Spielman’s argument that our understanding of curriculum is being undermined by “teaching to the test” is that, without understanding the close relationship between curriculum and testing, we cannot understand what the domain comprises. Where we cannot clearly define the content of the domain, any injunction to pay more attention to the domain is meaningless.
In both my Why curriculum matters and my earlier Rise and fall of criterion referencing, I have covered the criticisms that have been made by Dylan Wiliam and Daisy Christodoulou of the attempt to define precisely our learning objectives (or learning intentions), which Amanda Spielman and I both appear to believe to be the stuff of the curriculum. I have argued that Wiliam and Christodoulou are mistaken to suggest – or at least to encourage teachers to conclude – that the attempt to define our learning objectives was mistaken. The fault lies not in the attempt to define our educational objectives, but in the ineffective methods that were selected to achieve this task. Instead of relying on gnomic rubrics, we need to define our learning objectives by exemplification. When we talk of exemplars, we must ask “exemplars of what?” – and the answer to this question is “exemplars of performance”. That is why nearly all learning objectives, however we might talk of knowledge, understanding or skill, can be reduced to capability: the capability to produce certain sorts of performance – and why attacks on “performative measures” or “performative mindsets”, based on the writings of Robert Bjork and Carol Dweck, have not been helpful.
When Spielman echoes the majority of the profession by saying that “schools should view the tests as existing in service to the curriculum”, she oversimplifies the relationship between curriculum and assessment. She suggests that curriculum is prior to assessment. In truth, the curriculum can only be clearly defined by reference to assessment. This relationship is circular – or at least iterative – it is not hierarchical.
Summary of my critique of Amanda Spielman’s statements on the curriculum
To summarise my critique of Spielman’s comments on the curriculum, I welcome the renewed emphasis on the curriculum that she has provoked, and I welcome her use of the term to refer to our objectives or intentions, rather than to the pedagogical means by which we hope to achieve those objectives.
In an ideal world, I would hope that she could be clearer about her definition of the term and about the difference between her use of the term and the way that it is used by the rest of Ofsted, and the way that the Expert Panel used it in 2011.
More substantively, I disagree with Spielman’s assertion that curriculum development is a function of schools, rather than being a task that ought to be tackled more centrally and with reference to the requirements of the rest of our community and the expertise of non-teachers; and I disagree with her argument that it is possible to realize a more satisfactory approach to the curriculum that is independent from (and perhaps an alternative to) a focus on assessment.
Comment from Gareth:
I have three observations:
The curriculum shouldn’t especially differ by objectives but by contexts. I teach in Yorkshire so we can study the Victorians in great variety. Romans are harder than if we lived near Hadrian’s Wall. We should teach both but our local context and history feeds the curriculum.
Curriculum objectives can be informed by context. As a school that teaches children at risk of CSE, we should teach children about risk and understanding of self. Every school should, but we know our role.
Finally, a clear example of how assessment is divorced from curriculum. The government presented a maths curriculum for every year group in 2014. However, they then wrote a narrower interim assessment framework to inform the SATS tests. This would lead any year 2 or 6 teacher to miss out the whole year curriculum and teach to the test.
Thanks for your comment, Gareth.
Before I answer your three points, bear in mind that I am defining “curriculum” to refer to our learning objectives. I use “pedagogy” to refer to the means of instruction and terms like “programme of study” or “scheme of work” to refer to the materials chosen to deliver those curriculum objectives according to appropriate pedagogical principles.
1. Given these definitions, I see “context” as being about pedagogy, not curriculum. The 1987 TGAT report is good on this, e.g. para 31: “any particular unit of [assessment] information will depend heavily on the context in which the information is gathered”; para 61: in assessment “the effects of context…are…likely to be confused with more fundamental differences in pupil performance”.
Context is normally required in order to render concrete what would otherwise be very abstract concepts and skills. It is by the variation of context, both in teaching and assessment, that the abstract skill is grasped. So I would see context as a sort of pedagogical substrate or medium through which teaching occurs, without itself being a learning objective.
The 1988 National Curriculum took this to an extreme in the case of history by separating skills from factual knowledge, the message being that you could teach any skill through any factual material. This was not really true, and it drove the dividing line between learning objective and context very high up the scale of abstraction.
So I accept that, in the case of History, “context” is quite a meaty thing. Studying the Victorians as opposed to the Romans is more significant than choosing to practise your division in the context of counting change as opposed to sharing sweets. And this has long been recognised in History exams, which traditionally give a wide degree of choice, depending on what period you have chosen to study.
So: a) I accept that local history is to some degree an exception that proves the rule.
b) Nevertheless, this does not amount to an argument for curriculum development by schools. There are many schools in Yorkshire; many more in areas in which an in-depth study of Victorians makes more sense than an in-depth study of Romans. It makes no sense at all that all those schools, wanting to do the same sorts of thing, should each reinvent the wheel. The fact that schools should get to choose which curricula they offer does not mean that they need to develop those curricula themselves.
c) Even if you take a particular local resource, for example in York, it would make much more sense for pedagogical materials to be developed by a local museum, for use by all local schools, than for those resources to be developed separately by each of the local schools.
2. I give the same reply to your second point. CSE (like Victoriana) occurs in many areas of the country. Some of the educational responses to CSE, like developing a sense of self-worth and an understanding of risk, are required everywhere. It therefore makes sense to articulate a curriculum in this area that can be offered by schools across the country, even if it will often be schools that decide whether to offer that particular curriculum.
3. I would suggest that you are muddling “is” and “ought”. The fact that assessment and curriculum are, as I argue, closely bound together means that the narrow scope of the interim assessment framework represents, de facto, a narrowing of the curriculum. Conversely, curriculum rubrics that are not backed up by assessments are, I argue, meaningless. It is not just that teachers choose not to teach to those rubrics because they are incentivised by school league tables; the problem is often that, without the exemplification of performance that assessment provides, they do not understand what those curriculum rubrics even mean. So what I am saying is that we need to build an assessment framework that exemplifies the whole curriculum and not just a small part of it.
In summary, I think you think I am advocating central control of the curriculum and of programmes of study. This is true to some extent of the curriculum, with respect to the core of educational objectives defined in the National Curriculum; with respect to the rest of the school curriculum, I am advocating centralised development by third-party suppliers, whose offerings can be adopted by schools under their own control. With respect to programmes of study, these would all be developed by third-party suppliers and adopted by schools under their own control.
Even where a programme of study is developed externally, there would still be considerable scope for schools to contextualise and mix-and-match third-party resources. This is essential in order to allow resource-driven activity to be blended with teacher-led classroom activity (the importance of which I am not in any way deprecating). This is why I have adopted the term “tools of the trade”: it is for the teacher on the spot to decide whether they need a chisel or a saw but that doesn’t mean that they have to make these tools themselves. Software is particularly good at allowing this sort of parameterisation. What is difficult is the development of educationally compelling activity. Once a particular sort of activity has been developed, teachers and other intermediaries (museums, academic course developers etc) can easily develop, share and sequence individual instances of those activities. I am against edtech software that binds students into long, uneditable sequences of activity outside the teacher’s control.
I hope this helps to clarify my position.
Crispin.