Two twos are four

Why the introduction of a computerised maths test may mark a new beginning for edtech in the UK

At the end of my review of the ETAG report in May, I confidently predicted a new government edtech initiative:

“So favourable are the circumstances to the launch of such a significant new initiative that if it does not happen I will join a well-known member of the Liberal Democrat Party in eating my hat”.

Well, no such initiative has been forthcoming and, while Lord Ashdown reneged immediately on his hat-eating promise, I have not been intending to cop out so easily. During my Christmas shopping, I could not help eyeing up the hat department for something that might prove reasonably digestible.

But the new year has brought news that offers at least a glimmer of hope that such a drastic step might be avoided. The government is introducing a computer-delivered test of multiplication tables as part of the KS2 SATs.

How significant is this initiative? Is it an irrelevance or a false dawn? Or could it be the first, faint pulse of a new government policy on education technology?

Criticism of the policy

The backlash

No government announcement on education is likely to pass without a barrage of rotten eggs being thrown. A prep school teacher wrote in the Telegraph that:

Teachers should have the freedom to use time more productively rather than putting pupils through a times-table test.

According to an article in the Birmingham Mail, NASUWT criticised the tests as:

an “overly simplistic” way to measure pupils’ abilities,

…adding that as times-tables were already in the Y6 curriculum:

assessment of them was always going to be included in the statutory key stage tests,

…and that the introduction of another external assessment showed that this was:

not about supporting teachers to use their professional skills and expertise to meet the needs of learners, but is about finding yet another overly simplistic, punitive measure to use against schools and teachers.

While the NASUWT said that times-tables were already being adequately assessed, Christine Blower of the NUT suggested that learning times-tables was unnecessary because students could look up the answers online:

Looking up your times tables is very easy to do. So the other thing we have to do is to make sure that children and young people use the computing ability on their mobile phones so they can get that at their finger tips. Recall is not the only way to make sure you understand mathematical concepts.

Ms Blower also implied that the tests would impose an inappropriate way of learning the times tables:

Children will learn the times tables in an appropriate way if they are taught in an appropriate way.

While the teaching unions were busy criticising the initiative, others characterised it as too little, too late. In an article titled “At last! All 11-year-olds to face times tables tests”, the Express quoted Chris McGovern of the Campaign for Real Education saying that:

the move does not go far enough…Britain is already three to four years behind the most advanced education systems in the world…children should be tested on their times tables at age seven, not 11.

This criticism was supported by a wide range of commentators on the Daily Telegraph site, such as “ArtySin” and Michael Corby who said, respectively:

Wow, only up to the 12 times table. We had to learn up to the 16 times table in the late ’50s when aged nine.

Good grief! at my primary school after WWII we were expected to have done the tables up to 12 times by aged 6.

In the sections that follow, I will deal with each of these criticisms in turn.

Poverty of ambition

Those who criticise this policy as too little too late are really criticising what has happened in the past rather than the introduction of the new policy. “We are where we are” and only the future will tell how far the new policy will go. If the tests are successful at age 11, they could later be brought forward for those who can master the material at an earlier age.

Gradgrind

Some complain that it is unnecessary to learn facts by rote, as such knowledge is irrelevant to a more fundamental understanding of Mathematics. This sort of knowledge is simplistic, they say, and the method is mechanistic.

This view is not consistent with research. In Why Don’t Students Like School?, psychologist Dan Willingham asserts on p.35, as one of his key points, that:

Factual knowledge must precede skill.

Although factual knowledge might need to be rehearsed in the context of more complex problem solving activities, if factual knowledge precedes skill, then it follows that skills cannot be exercised unless basic factual knowledge has first been acquired to some extent, and that this can only be done in the first instance by rote learning.

The thorough investigation into children’s understanding contained in Children’s Understanding of Mathematics: 11-16, supports Dan Willingham’s view:

It is very difficult to disentangle understanding from rote procedure in computation (p.23).

If “rote procedure” is intrinsic to understanding, Christine Blower is wrong to assert that recall can be replaced by looking things up or using mechanical assistance—and this conclusion is also supported by the CSMS research:

Where decimals were included in the problems…hardly any 12 yr olds and under 10 per cent of 15 yr olds would consistently be able to press the buttons on their calculator in the correct order in solving simple division problems (p.47).

In conclusion, the results of the CSMS research give:

The overwhelming impression…that mathematics is a very difficult subject for most children…[For] fifty percent of our secondary child population…any demand for abstraction or even the formulation of a strategy for solution is beyond them. In the secondary school we tend to believe that the child has a fund of knowledge on which we can build the abstract structure of mathematics. The child may have an amount of knowledge but it is seldom as great as we expect (p.209).

The foundation of that hierarchy of understanding rests on factual knowledge, which has in the first instance to be drilled by rote learning. In criticising rote learning as simplistic and inappropriate, Christine Blower is just plain wrong.

Quite apart from creating a foundation for the understanding of mathematics as an academic subject, mental arithmetic is undeniably helpful for everyday life, as illustrated by another anecdote in the Telegraph comments section:

We had to learn up to the 16 times table in the late ’50s when aged nine. What’s funny is when you use this knowledge at work or at a supermarket checkout, in the presence of a young person and they look at you as though you’re Einstein. It ‘ain’t difficult but it’s bloody useful still. (ArtySin).

Balance

A more sophisticated anti-Gradgrindianism talks not about rote learning as an absolute wrong, but about an excess of factual learning as an imbalance. The introduction of a times-tables test privileges a certain sort of mechanistic learning at the expense of other, more worthwhile objectives. As stated in a comment on the Daily Telegraph report:

Prescription of tables as a measure of progress will lead to undue focus on an undoubtedly important area of maths, but at the detriment of a wider understanding. (nosmokewithout)

Although it should not be dismissed out of hand, I think that nosmokewithout’s objection is wrong for three reasons.

1. Because (as argued above) the relationship between “wider understanding” and factual recall is complementary, not antagonistic. The more mechanistic factual and procedural learning is generally a prerequisite for deeper understanding.

2. If you frame the “zero sum game” (between mechanical skill and “wider understanding”) not in terms of human memory (why fill up your head with irrelevant facts?) but in terms of time taken out of a busy school day, then my answer is that the most important limiting factor on educational achievement is not time but motivation. Many teachers see motivation as a prerequisite for learning; but as Dylan Wiliam argues in Embedded Formative Assessment (Chapter 7, “Activating Students as Owners of Their Own Learning”, under “Self-Regulated Learning: Motivation”), motivation is often better seen:

not as a cause but as a consequence of achievement.

This is because, as I argued at How education will revolutionize research, increasing mastery is one of the most fundamental motivators that there is.

This is also highlighted by Dan Willingham (op cit):

Thinking is slow, effortful and uncertain (p.5)…[yet] people actually enjoy mental activity…Solving problems brings pleasure (p.9)…In the last ten years neuroscientists have discovered that there is overlap between the brain areas and chemicals that are important in learning and those that are important in the brain’s natural reward system (p.10).

The advantage of rote learning is that, while it may represent only the first rung on a long ladder of learning objectives, it offers a simple and accessible form of mastery which can start a virtuous circle of achievement and increasing motivation.

The experience of increasing mastery, achieved through learning times-tables, may increase the attainment of more complex objectives, not only because of the importance of prerequisite knowledge gained, but also as a direct consequence of increasing motivation.

3. Given that nosmokewithout’s objection is about balance, one has to begin with an assessment of where we are starting from. Does the current education system place too much emphasis on what is often a poorly defined “wider understanding”, or on the mastery of basic factual and procedural prerequisites? While this is not the place to go off on a major diversion, I believe that there is strong evidence that our current fault is to underestimate the importance of factual and procedural knowledge. In this context, the emphasis on mastering multiplication tables by the age of 11 is a perfectly reasonable corrective.

The argument about balance can also be framed as an objection to “teaching to the test”. Too much testing may be said to distort the syllabus to focus too much on those things that are easy to measure and not enough on those things that are important but harder to measure. But following on from the argument above, this objection is not valid in this case because learning your times-tables is a valid learning objective in itself—not something that we are doing simply because it is easy to test—and there is no reason to suppose that a simple test cannot easily be constructed that will provide a valid assessment of that objective. This is one case at least in which “teaching to the test” is not a problem.

Set up to fail

Some people object to the demotivating effect on students who are bound to fail. Learning tables is just not their thing. Quoting again from comments on the Telegraph site:

Some pupils will never learn their tables no matter how much the secretary of state for education wants them to. (nosmokewithout)

I’d like to see them try and get my child to learn the 12 x table. She tries harder than anyone else in the class, but is dyslexic, as are others. Much as the aim is noble, unthinking goals, pursued robotically, can be a good way to make children feel worthless. (D M)

Just another monumental flaw in our education system designed to alienate those that are not mathematically minded (Margo Vella)

Many of these comments are based on a perception that attainment is dependent on innate ability and not on good teaching or hard work. This fatalistic attitude is increasingly being blamed for the poor performance of Western education systems, particularly in comparison to Asian countries. As the 2012 PISA report states (p.252):

The PISA 2012 assessment dispels the widespread notion that mathematics achievement is mainly a product of innate ability rather than hard work… the observed variation in mathematics performance is closely related to students’ beliefs about the importance of self-concept, effort and persistence for their performance in mathematics. The fact that those beliefs vary significantly across schools and countries suggests that they can be shaped by education policy and practice. These findings should inspire education policy makers to move away from the notion that only a few students can achieve in mathematics towards one that embraces the proposition that all students can.

Such an attitude does not ignore the severe disadvantage of falling behind or the relative difficulty of teaching children with low levels of prior attainment. But it does suggest that we should set minimum standards of attainment that we expect everybody to reach, and provide the programmes of effective teaching that are required to make that expectation achievable.

Trusting teachers

What such a programme of effective teaching requires is frequent practice. Remarkable as it seems, most teachers do not understand this point, which ought to be a fundamental principle of their professional practice (I cover this point in Why teachers don’t know best and in Tim Oates: assessing without levels). Only when the extent of this misconception is realised can we understand the size of the metaphorical supertanker that Nicky Morgan is trying to turn around.

Some of the objections to the new tests are based on the perception that the policy demonstrates that the government does not trust teachers. Although this is obviously an undesirable situation, my argument above suggests that government may well be right not to trust teachers.

In fact, I think that the main problem with a strategy based on testing is not that it displays an excessive distrust of teachers, but on the contrary that it shows an excessive trust that teachers will pursue the pedagogies required to enable their students to succeed in the new test. If they do not respond, then having yet another externally-imposed test for their students to fail is not going to help anyone.

Edtech as a policy instrument

Pity the poor Minister

The fundamental problem faced by any British Minister of Education is the lack of effective policy instruments at their disposal to effect change. Ever since James Callaghan’s 1976 speech at Ruskin College, Oxford, there has been a long-running stand-off between educationalists and governments of all colours. Ministers have wanted to improve standards of education but have met with varying degrees of resistance from academics and practitioners, who in their turn have grown increasingly bitter at the way in which their experience has been disregarded. Ministers are left isolated on the bridge of the supertanker, without recourse to a navigator in whose expertise they can trust or a crew on whose willingness to follow orders they can rely.

  • If Ministers introduce new curricula (like Computing, for example), teachers respond that they do not have the capability to teach it.
  • If Ministers introduce more demanding exams, teachers complain that they are narrowing the curriculum and demotivating children who show little chance of success at the more demanding levels.
  • If Ministers circumvent the influence of university education departments (for example, by moving teacher training to schools), they only succeed in replacing practice based on unjustified theory with practice based on no theory at all.
  • If Ministers try to encourage innovation by introducing greater autonomy for teachers and more competition between schools, they run up against the lack of academic ambition among a large proportion of British parents, the hostility of many teachers to the sort of academic standards to which Ministers commonly aspire, and the natural inflexibility of a market in school places.
  • If Ministers set up a quango (like Becta or Ofsted), they find that its actions are dictated by the orthodox views current among the teaching profession, from which the greater part of the quango’s staff is drawn.
  • If Ministers commission a report from those who are widely regarded as experts in the field (such as the recent ETAG or Assessment without Levels reports), they wait six months or a year before being delivered a set of recommendations based on shallow analysis and unjustified assertions.

The stand-off between impotent Ministers and disaffected and poorly trained teachers has not provided a good basis for sensible reform. Professor Paul Black, Chair of the Task Group on Assessment and Testing (TGAT), which advised the government on the 1988 Education Act, warned in a speech in 1992 of the consequences of a fundamental breakdown of trust between Ministers and educationalists (The Shifting Scenery of the National Curriculum, in Education Answers Back, Critical Responses to Government Policy):

It may be that the bulk of that opinion and expertise is deeply in error…[but] if it is true that the judgements and experience of the entire “educational establishment” have to be dismissed, then we really are in profound trouble.

Twenty-four years later, it seems to me hard to deny that Professor Black was not only right, but prescient. It is not that the entire education establishment is in error, but that its judgements are too inconsistent for it to speak with a single voice, or that its position (when such a single position can be determined) is too often unjustified by reliable research evidence. Increasingly, the voice of the educational establishment achieves real coherence only in respect of its determination to resist Ministerial demands.

In this context, the new computer tests may have a particular significance. They may offer an opportunity, not just to improve children’s numeracy: they may also offer the prospect of a new policy instrument to drive through radical and positive reform of our education system as a whole.

A new policy instrument

The logic of this statement may not be immediately obvious. After all, there is nothing new in the introduction of a new test—and who cares whether yet another new test is to be delivered by computer or on paper?

As noted above, the previous introduction of new tests has often suffered from one or more of the following problems:

  • the tests have been too difficult, attracting criticism as being unfair to less academic students or demotivating those that fail;
  • if externally marked, the tests have been expensive to administer, due to the cost of human markers;
  • to keep costs down, the tests have been infrequently sat (once per key stage);
  • this single sampling exercise has reduced the reliability of the tests (with at least 20% of levels being incorrectly allocated in SATs exams);
  • to compensate for their lack of reliability, the tests have been made increasingly formulaic, a trend which has in turn been criticised for narrowing the curriculum;
  • the relative infrequency of the tests (once every three years) has further increased the stakes of the tests, increasing the stress experienced by students and teachers, with further damage to reliability and teacher morale.

Why should computer tests be any different?

The use of computers to deliver the tests will help to address the second bullet (cost), while in combination with the relatively mechanical nature of the learning objective, it will also help address bullet 4 (reliability). Bullet 5 is not relevant in this case: the test cannot be criticised for being mechanical when so too is the learning objective. Bullets 3 and 6 (the high-stakes nature of infrequent testing) look as if they will continue to represent valid concerns, given that the new test is to be sat only once, at the end of KS2.

This leaves the first point, the counter-productive consequences of high failure rates. As argued in Set up to fail (above), there is no reason to expect high failure rates if students are well prepared for the learning objective. But as argued in Trusting teachers (above), there may be good reasons not to trust many teachers to prepare students adequately for the new test, and to be concerned about the workload implications of burdening teachers who are already stressed to their limit with yet another requirement.

There is no evidence yet that Ministers have an answer to this problem—but there is a solution to hand, if they wish to pick it up.

As previously argued on this blog, there is a close relationship between practice (often undervalued by teachers as a pedagogy) and assessment. If you want to prepare students for the test, all they have to do, at least as a minimum, is to undertake regular activity of a similar type to that which is required by the test.

“Teaching to the test!” you will cry. And I agree that such test-focused drilling represents a minimum: it would be best if students were to reinforce the knowledge that is drilled into them by applying it to a variety of more complex and applied contexts. But in the case of basic multiplication, this is what should be expected of any half-decent maths class. The element which we suspect is missing, which should be sufficient for passing the test, and which does not represent any distraction from productive learning, is the initial rote learning. The basic acquisition of factual knowledge can be achieved by computerised drilling software just as effectively as it can be assessed by computerised assessment software.

The software required to deliver the tests will be similar to the software required to prepare for the tests, where “preparing for the tests” will be something pretty similar to mastering the learning objectives being assessed. Learning objective, assessment and pedagogy will all be closely aligned.

“Similar” but not necessarily identical. While assessment software has only to be reliable and valid (both relatively simple requirements in this context), the drilling software has to be effective, which is more difficult to achieve. Good software might automatically timetable practice and review to ensure that knowledge is reinforced. It might “chunk” the operations to be learnt into short, bite-sized sections, randomizing from a larger pool of question items as the student becomes more familiar with them. It might facilitate quick-fire practice at the back of the school bus by supporting oral drilling with voice-recognition capability, using a variety of different visual prompts at other times. It might motivate the student by keeping records of progress and reporting these to the teacher. Compared with this wish-list of requirements, much of the software currently on the market is laborious and slow, often encumbered with unproductive animations and irrelevant game-play.
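As an illustration only (this names no real product, and the design is my own sketch), the scheduling core of such drilling software might be no more complicated than a Leitner-style system, in which facts answered incorrectly drop back into the most frequently sampled pool:

```python
import random

class TimesTableDrill:
    """Minimal Leitner-style drill scheduler (illustrative sketch).

    Every fact starts in box 0 (reviewed most often) and moves up a box
    on each correct answer; a wrong answer sends it back to box 0.
    Lower boxes are sampled more heavily, so weak facts get more practice.
    """

    def __init__(self, max_table=12, boxes=3, seed=None):
        self.boxes = boxes
        self.box = {(a, b): 0
                    for a in range(2, max_table + 1)
                    for b in range(2, max_table + 1)}
        self.rng = random.Random(seed)

    def next_question(self):
        # Box 0 facts are drawn 4x as often as box 2 facts (weights 4, 2, 1).
        facts = list(self.box)
        weights = [2 ** (self.boxes - 1 - self.box[f]) for f in facts]
        return self.rng.choices(facts, weights)[0]

    def record_answer(self, fact, answer):
        correct = (answer == fact[0] * fact[1])
        # Promote on success, demote to box 0 on failure.
        self.box[fact] = min(self.box[fact] + 1, self.boxes - 1) if correct else 0
        return correct

drill = TimesTableDrill(seed=1)
question = drill.next_question()            # e.g. (7, 8), meaning "7 x 8?"
drill.record_answer(question, question[0] * question[1])
```

The point of the sketch is how little machinery is needed: the “chunking” and randomisation described above fall out of a weighted draw over a handful of boxes, leaving the real engineering effort for the presentation and record-keeping.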

While there is reason, at least at first, for the government to fund directly the development of the assessment software, the corresponding teaching software should be left to the market to produce—a market for which (if Ministers play their cards right) the new assessments will provide a powerful stimulus.

Given the close alignment of pedagogy, assessment and curriculum in this case, it is in their potential to provide such a stimulus to a new market for teaching software that the computerised assessments might be so much more effective, as a policy instrument, than other sorts of high-stakes assessment have proved to be in the past.

When testing is not enough

So far, so optimistic. Maybe too optimistic—because there is little reason to hope that the computerised tests will be enough, in themselves, to stimulate the sort of market that I have envisaged. Either the tests will be too easy or people will complain that they are too difficult—and in either case, there is a danger that without proactive measures to improve teaching, the actual mastery of multiplication tables across the country will remain little changed.

The DfE needs to go further than the simple introduction of an online test.

First, I would offer every school in the country a voucher, along the lines of the old e-Learning Credits (eLCs), with which it could purchase for its students third-party software that had been certified as being suitable for drilling students in their times-tables. How much would this cost?

  • Taking an average of about 600,000 children taking KS2 SATs each year,
  • assuming a total of about 600,000 x 4 = 2.4 million children in KS2 as a whole,
  • and supposing that a good times-table app could be bought for no more than £2 per student per year…
  • such a market could be pump-primed very adequately with a grant of £1 per KS2 student per year…
  • …or a total of £2.4 million per annum (a modest sum compared to the inflationary £100 million per annum provided in the original e-Learning Credits).
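Taking the round figure of 600,000 pupils per year group, the back-of-envelope arithmetic can be checked in a couple of lines:

```python
pupils_per_cohort = 600_000   # approximate number sitting KS2 SATs each year
ks2_year_groups = 4           # Years 3 to 6
grant_per_pupil = 1           # £1 pump-priming grant per pupil per year

ks2_pupils = pupils_per_cohort * ks2_year_groups
annual_grant = ks2_pupils * grant_per_pupil
print(f"{ks2_pupils:,} KS2 pupils, £{annual_grant:,} per annum")
# 2,400,000 KS2 pupils, £2,400,000 per annum
```

On these assumptions the annual cost comes to £2.4 million, roughly a fortieth of the old eLC budget.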

Second, I would create an online catalogue, which would list certified products and host user reviews, submitted by properly authenticated teachers.

Third, I would make it a condition of certification and inclusion on the catalogue that software could pass results to third-party management systems, using an agreed protocol based on the ADL’s Experience API. This would allow classroom teachers and school leaders to track the progress of their students.
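To make this concrete, here is a sketch of the kind of xAPI statement that drilling software might pass to a management system. The account and activity URIs are invented for illustration; only the “answered” verb URI is a standard ADL one:

```python
import json
from datetime import datetime, timezone

def drill_result_statement(student_id, fact, correct, duration_s):
    """Build a minimal xAPI statement recording one times-table answer.

    The homePage and activity URIs below are hypothetical examples,
    not real endpoints; a school's management system would supply its own.
    """
    a, b = fact
    return {
        "actor": {"account": {"homePage": "https://school.example.uk",
                              "name": student_id}},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/answered",
                 "display": {"en-GB": "answered"}},
        "object": {"id": f"https://tables.example.uk/fact/{a}x{b}",
                   "definition": {"name": {"en-GB": f"{a} x {b}"}}},
        "result": {"success": correct,
                   "duration": f"PT{duration_s}S"},  # ISO 8601 duration
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

statement = drill_result_statement("pupil-042", (7, 8), True, 3)
print(json.dumps(statement, indent=2))
```

Because every certified product would emit the same statement shape, a teacher’s dashboard could aggregate results across products without bespoke integration work, which is precisely the point of mandating the protocol.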

Fourth, I would use government money to develop and give away to all schools a free but simple version of such a management system. These systems would harvest student performance data and make it visible to appropriately authorized teachers.

Fifth, I would develop a regulatory framework to cover the use of student data for this purpose, ensuring that students’ right to privacy was protected at the same time as allowing for its legitimate use in the context of students’ fiduciary relationships with their teachers.

A policy with legs

The costs of this programme would be modest, certainly relative to the potential gains, which could well be stunning. Not only would the basic numeracy of British school children be improved beyond all recognition, but even more significantly, the government would find that it had acquired a very effective policy instrument with the potential to improve educational standards across the board. No longer would it be constrained to introduce new tests that depended on overworked, disaffected and often hostile teachers to respond to the new requirements: it could provide to teachers (and even directly to students) the means to meet those requirements.

Once the principle of well designed instructional software had been established, it would be found that there are plenty of opportunities to develop other kinds of instructional software, all of which could be integrated with the same management system, using the same open protocol for data interoperability. Some types of instructional software could be given targeted government assistance, in the same way as the times-tables software; but other types would self-launch, as the management system infrastructure became established and teachers became more aware of the potential that well designed instructional software can deliver.

More complex activities can be modelled by appropriate software, often requiring the digital activity to be blended with traditional classroom teaching. These new forms of instructional software will address the need for balance between simple rote learning and more complex, less formulaic forms of learning activity.

Teachers would start to see the potential benefits of more advanced management functionality (sequencing, assignment, analytics etc.) and a market in effective management systems would also be born. Having established open standards for data interoperability, the government would have created a new market which could not be controlled by any dominant commercial interest.

The accumulation of data from a range of instructional software in the common management systems will facilitate analytics and effective research into pedagogy, of a kind that education needs desperately, and which Ministers (if they are right in advocating traditional forms of teaching) should welcome as having the potential to justify their arguments.

The collection of data will also support teachers’ ability to discriminate between more effective and less effective forms of instructional software, and more effective and less effective instructional techniques.

This continuous collection of data, not just from the formal assessment but also from the instructional software, will open up new opportunities to combine formative and summative assessment. It may be that in time, as data is collected continuously from formative activities, the need for the high-stakes assessments would wither away, along with all the distortions that are inevitably associated with nervous children and unreliable single-pass, snapshot sampling.

And when teachers found that the new technology could increase student attainment and motivation, and reduce their own workload all at the same time, it would be an instrument of government policy that they might even start to support enthusiastically. The perceived dichotomy between teaching and testing, and the corresponding mistrust between teachers and government, could at last start to fade away. Now wouldn’t that be a change that we could all celebrate?

2 thoughts on “Two twos are four”

  1. My personal response was something akin to, ‘meh…’

    In edtech terms, I think I’ve fallen asleep and woken up in 1995. I know we’ve had our e-learning setbacks, but I really thought we’d be in immersive simulation territory by now!

    In pedagogical terms, it’s slightly wrong-headed. How much more useful it would have been if they had introduced the test from Y4 onwards, to be repeated with variations until the pupils get 100%! That way it would have served the supposed, actual purpose of ensuring mastery of the multiplication facts to 12 x 12, without the snide censuring of teachers for failing to instill them in their pupils by the age of 11.

    • Juliet, I agree with you on where we need to get to. I guess I am pleased just to have government thinking in terms of technology as a means of improving education. My hope is that, once they are thinking in these terms, their thinking might naturally run along the path we both want:

      1. There is not much point in assessing something if you do not have the means to teach it consistently.

      2. If you are teaching it in a form that allows for the continuous and reliable collection of data, the need for terminal summative assessment will then wither away.

      Thanks as ever for the comment. Crispin.
