Baroness Greenfield recently wrote an opinion piece in the TES, restating her view that education technology is not just ineffective but may well be positively harmful. “More pseudo-science poppycock”, harrumphed one prominent ed-tech tweeter, who was quickly supported by others. “Actually, she makes some rather sensible points”, said I. “No, no”, said my interlocutors, “the Baroness has been completely discredited. But if you are going to blog about it, please keep it short”. “1,000 emollient words”, I promised.
I am not sure how well I managed to be emollient—I am afraid it is not a style that comes naturally to me—and I certainly failed to keep it short. But, if you are interested in ed-tech, then I think its intersection with emerging neuroscience, and the controversy that has blown up in this area, are worthy of careful consideration.
Dangers of a life online
The loss of human relationships
My interlocutors thought that Greenfield had really lost the plot when she said:

If you use computers heavily, guess what? You are going to turn yourself effectively into a computer.
You may question whether this is a great figure of speech—but figure of speech it clearly is: no-one was supposed to take it literally. Greenfield’s underlying point is that “The brain adapts exquisitely to the environment” and that humans are not driven to learn by an internal roadmap (as constructivism has it), nor by an innate love of learning, nor even by a behaviourist/utilitarian imperative to avoid pain and maximise pleasure. Like most of the higher mammals, we are born mimics. This is why role modelling is so important in human development and why it is at least plausible to suggest that young people who spend large amounts of time reacting to computer-generated stimuli at the expense of human relationships may suffer educationally.
The empirical data seems to back up Greenfield’s argument. MOOCs (Massive Open Online Courses) promise education delivered by technology at massive scale but without a personal relationship with a human expert. Wherever they have been implemented in the context of formal learning, their performance has been woeful (see my 2012 prediction that this would be the case and my more recent contributions, made with the benefit of more knowledge of actual MOOC outcomes, at Online Educa Berlin 13 and Learning Technologies London 14).
Privileging knowledge over understanding
Susan Greenfield is right to claim that ed-tech focusing on “independent learning” has over-emphasised the acquisition of information online (often over-sold as “internet research”). This has failed to recognise that:

true knowledge is how you use those facts, relate them to each other and put them together in a framework.
This is a point that I have made many times before on this blog, citing similar criticisms of DIY online learning by Diana Laurillard and Ofsted, and citing Eric Mazur and Bloom in support of Susan Greenfield’s version of true understanding.
The sort of understanding that Greenfield is talking about is acquired by interacting with authoritative “others” and although these others may in the future be provided in part by virtual tutors, simulation environments and structured interactions with peers, we have not yet been able to find an authoritative other that betters the dedicated, expert human teacher.
Creativity and imagination
There is a long-standing argument that computers enable creativity. This argument formed the basis of Seymour Papert’s argument in Mindstorms, it has been made more recently by Professor Stephen Heppell and Lord Puttnam, by NAACE under the Chairmanship of Miles Berry and by movements like Apps for Good for which Leon Cych is an advocate. It is a good argument in theory but it bears an important caveat: children need to acquire a certain degree of capability before they can be creative in any useful sense. This is the point that lies behind Bloom’s taxonomy, which proposes an instructional progression from memorisation of factual information through the application of that information, to true creativity. It was a point eloquently restated recently by Michael Gove on Question Time (search for “Question Time” if following the link). Open-ended creative projects (in which computers may well play an enabling role) need to be used as part of a sequence of instructional activities and not as a mono-pedagogy. For all its undoubted appeal, Papert’s LOGO never really worked.
While computers might have the potential to increase creativity, if you look at what is actually happening in the lives of most young people, it seems to me to be perfectly plausible to argue—as does Greenfield—that computers are tending to reduce creativity, turning us into consumers of pre-packaged entertainment. Even social networking (the technological basis of much theorising about Web 2.0 and the Wisdom of the Crowd) seems to me to encourage young people to become purveyors of fashionable opinion rather than independent, critical thinkers.
This does not mean that this is a simple either/or argument. It does mean that Susan Greenfield should at least be taken seriously when she warns that children are:

constantly being cued by someone else’s second-hand imagination on the screen.
The lack of evidence for ed-tech
Over the last few months I have been challenging the ed-tech experts on the FELTAG and ETAG advisory groups to acknowledge that, although large sums of money have been spent on education technology over many years, there is no robust evidence that it has made any significant contribution to improving teaching and learning. As Susan Greenfield quotes NESTA’s Decoding Learning report:

so far there has been little evidence of substantial success in [education technology] improving educational outcomes.
This is a fact that the proponents of ed-tech continue to ignore. Some say that it is not possible to produce evidence of improvement (of course it is); others say that we need to think of technology changing not only the means of education but also the purpose of education (by which means they appoint themselves judge and jury in their own cause); yet others say that we should introduce more technology regardless of these arguments, just to ensure that education authentically mirrors the rest of society (see my recent post, Because it’s there). Until the proponents of ed-tech face up to their own evidential black hole, it is they, not the likes of Susan Greenfield, who lack credibility.
The neurological basis for Greenfield’s views
Summary of Greenfield’s position
Greenfield’s argument about ed-tech rests on her views about neurology, in which she is a specialist. She argues that the brain is highly plastic, developing new neural pathways in response to its environment. If young people spend their time in an online environment that is not conducive to deep thought, this theoretical background explains how such a situation might contribute to a stunted development.
The riposte from Chabris and Simons
José Picardo, one of my interlocutors on Twitter, questions this theoretical background, quoting:

leading neuropsychologists Christopher Chabris and Daniel Simons [who argue that] the brain’s wiring “is determined by genetic programs and biochemical interactions that do most of their work long before a child discovers Facebook and Twitter”.
Chabris and Simons are two Professors of Psychology who co-authored a popular book based on an experiment featuring a video of a man in a gorilla suit. Perfectly adapted to winning recognition in the internet age, this research earned them an Ig Nobel Prize (described by Wikipedia as “a parody of the Nobel Prizes for ten unusual or trivial achievements in Scientific Research”). The fact that Chabris and Simons are successful popular science writers does not mean that they do not also have day-jobs as serious scientists—but neither does it in itself qualify them for the epithet “leading neuropsychologists”. If anyone deserves that title, it seems to me to be Susan Greenfield, who has been awarded 30 honorary degrees from British and foreign universities, heads a multi-disciplinary research group exploring Alzheimer’s and Parkinson’s, is Senior Research Fellow at Lincoln College, Oxford, has published a neuroscientific theory of consciousness, and has received the Michael Faraday Medal from the Royal Society, a non-political Life Peerage, Honorary Fellowships of the Royal College of Physicians and of the Royal Society, and the French Légion d’honneur, as well as being included in Debrett’s 500 most influential and inspiring people in Britain for 2014 (see Susan Greenfield’s homepage). I don’t myself favour making arguments from authority, but I should have thought you would want to be pretty sure of your ground before accusing such a person of speaking “pseudo-science poppycock” (a phrase used by Bob Harrison, not José Picardo).
As for the quality of the article by Chabris and Simons that José quotes, all I can say is that it doesn’t do it for me. Having given their opponents the pejorative label “digital alarmists”, they assert that the basic wiring of the brain is established at birth. As a result of what we do thereafter:

we will no more lose our ability to pay attention than we will lose our ability to listen, see or speak.
The trouble with this argument is that no-one has claimed that children are losing their ability to pay attention. They have just said that people’s ability to pay attention is declining: it is a question of degree, not of total loss. If Chabris and Simons were to respond to the point actually being made by the so-called “alarmists”, they would need to say that people’s abilities do not change incrementally depending on what they do (like practise the piano, for example). We would surely all reject such a position as absurd.
Chabris and Simons continue to say that, since the inception of the computer age, modern chess-masters:

use laptops to review hundreds of games in rapid succession, in effect “downloading” into their minds knowledge that is customized for their next opponent. They access the knowledge as they need it, discarding it after the match, and the result is that today’s grandmasters play the game better than their predecessors did.
This is a very anecdotal sort of evidence (Chabris is himself a keen chess player). They do not point to any research which explains the nature of the causal link between pre-match preparation and quality of play; and they ignore completely the importance for a chess master of durable, internalized expertise, which is essential both to play the game and to make sense of the downloaded information. In highly competitive environments, the difference between victory and defeat may represent a very small margin of superiority over your opponent—maybe a hundredth of a second in a 10 second sprint. In their final preparation, different competitors may do different things to make that crucial 0.1% difference. Some chess masters might choose to browse through past games, others might prefer to go for a quiet walk; but this does not mean that either form of pre-match preparation is sufficient in itself to make anyone a chess master. Chabris and Simons fall straight into the fallacy which confuses knowledge with understanding. Expertise cannot so easily be outsourced to the internet.
Hattie’s argument against digital natives
José also references John Hattie, the educational researcher, from whom the references to Chabris and Simons originated. In his most recent book, Hattie dismisses those who believe that the internet is changing the way we think, either for better or worse.
His first target is people who think the internet is changing our cognition for the better. Marc Prensky’s 2001 “digital native” thesis suggested that we should adapt our educational methods and objectives to match the supposed new cognitive skills of the internet generation. This is a position that has been widely discredited, and when Hattie argues against Prensky, I am with him all the way:

We find such notions [that computers can replace teachers or that students will develop electronically enhanced cognitive resources] unrealistic, unattainable, and fundamentally incorrect.
In getting to this no-nonsense conclusion, Hattie makes the point that there is a critical distinction between the accumulation of information gleaned from the internet and “genuine knowledge acquisition”. The words might just as well have been spoken by Susan Greenfield herself, and they represent an implicit criticism of the reasoning given by Chabris and Simons.
Hattie’s argument against “digital alarmists”
Even though Hattie’s position is incompatible with Chabris and Simons’ reasoning, he nevertheless accepts their conclusions. From the outset he labels those who think the internet is changing our cognition for the worse as “alarmists”. These include Sven Birkerts in 1994; the “well-respected researcher in the area of children’s reading”, Maryanne Wolf, in 2007; and Nicholas Carr, whose book, The Shallows: What the Internet is doing to our brains, was a Pulitzer Prize finalist in 2010.
I find Hattie’s argument here rather underwhelming. When I read the sentence quoted by José:

“the notion that Internet usage itself will occasion alterations or deterioration in cognitive capacities has no genuine support from within the known research literature”,
I cannot help thinking that:
- it is hard to prove a negative (which is perhaps why Hattie does not even try, preferring straightforward assertion);
- the body of research with which Hattie is familiar is about education and not neurology or associated sociological trends;
- the explosion of our use of the internet is an extremely recent phenomenon (Facebook was launched in 2004 and Twitter in 2006), so you would not necessarily expect it to have shown up yet in conclusive, empirical research,
- especially when the bulk of the research studied by Hattie was produced in the “80s, 90s and 2000s”, providing minimal overlap with the internet age;
- and that despite all the above, Hattie’s claim appears quite simply to be untrue.
A recent article in the Guardian, Is technology and the internet reducing pupils’ attention spans?, cites:
- a recent survey of 2,500 US teachers in which 87% of respondents thought that modern technologies were creating an “easily distracted generation with short attention spans”;
- Sue Honoré, who authored the 2009 report Generation Y: Inside out, reporting that children “who spend a lot of time alone using technology ‘tend to have less in the way of communication skills, self-awareness and emotional intelligence’”;
- a recent study by Dr Karina Linnell at Goldsmiths College into the Himba tribe in Namibia, which found that those who had moved into the town had much shorter attention spans than those who still lived traditional lives in the country;
- just yesterday I read an announcement of a further small scale study by the University of California, which suggests that children who spend a high proportion of their time online are not so good at reading human emotions as those who spent five days at a summer camp without mobiles;
- and when the supposed lack of evidence was put to Susan Greenfield on 3 August, she cited a Chinese research paper, Microstructure Abnormalities in Adolescents with Internet Addiction Disorder.
I do not claim that any of this research is likely to be conclusive—but it is certainly worth acknowledging.
What do we mean by “re-wiring the brain”?
Lest I be accused of selective quoting, I should say that Sue Honoré added that the reason for poor interpersonal skills was:

not because they don’t have the capabilities…but because…when they come into situations where they have to work with others, they appear not to concentrate on people.
This strikes me as a rather odd distinction to make—but it is noteworthy because it mirrors Chabris and Simons’ distinction between a capacity and a skill:

Of course, the brain changes any time we form a memory or learn a new skill, but new skills build on our existing capacities without fundamentally changing them.
Is it helpful to say that I have the capacity to make a table, even if I do not have the skill to do so? Or that I have the capacity to relate well to other people if I never actually do so? I think not—and I believe that this point lies at the heart of the problem with Chabris and Simons’ argument. They dispute the claim that our use of the internet will change our brains in some fundamental, transformative way. But all that Greenfield, Carr et al are saying is that our interaction with our environment encodes in our brains new patterns of behaviour as an unavoidable concomitant of learning.
While the non-expert reader is likely to think that “re-wiring our brains” sounds like the psychological equivalent of turning into a werewolf, in fact re-wiring is just what brains do all the time. It is a routine process that occurs every day of our lives. And, as I have pointed out frequently elsewhere on this blog, you can learn harmful behaviours just as easily as (perhaps rather more easily than) beneficial ones.
This confusion between traumatic and mundane seems to be reflected in two different uses of the term “brain plasticity”. The first idiot’s guide that I find on Google suggests that the theory of brain plasticity has been widely accepted by modern researchers for a long time:

Up until the 1960s, researchers believed…by early adulthood…the brain’s physical structure was permanent. Modern research has demonstrated that the brain continues to create new neural pathways and alter existing ones in order to adapt to new experiences, learn new information and create new memories.
This explains brain plasticity in terms of the encoding of new neural pathways. But it appears that the term has recently acquired a second meaning, following the discovery that after a major brain injury, the brain will not just lay down new neural pathways but will reorganise its fundamental structure. Maybe (I do not understand the exact details) you can grow a new frontal cortex behind your left ear.
When Chabris and Simons say that the wiring of the brain is initially down to genetics, they are referring to this fundamental structure of the brain and not to the creation of neural pathways. This point also explains the distinction they make between capacity and skill. It is not that someone who has never played the piano has the capacity to play the piano immediately—by definition they do not—but they do have the capacity to learn to play the piano (if they don’t spend all their time playing computer games).
Chabris and Simons are tilting at windmills. Greenfield, Wolf, Carr et al do not claim that spending too much time online has the same effect on your brain as surviving a catastrophic car crash. They are just saying that it has an effect on what you learn and that what you learn affects the way that your brain gets wired up.
Hattie’s “balanced perspective”
When it comes to his own position, Hattie’s argument seems to me rather to miss the point. Having stated somewhat magisterially that:

a balanced perspective on these issues is most likely expressed in the following way: that human capabilities are not as malleable as certain theories imply.
he goes on to make a rather bizarre argument about reading comprehension:

It would be deemed non-sensical to try to conduct a controlled study to find out if reading the printed page resulted in more, or less, comprehension than reading the same page on a computer screen…Aspects such as font type, size, and italics basically are irrelevant to comprehension provided the text is genuinely legible, the reader is focused and comfortable, and has adequate vision for the task.
The problem is not that young people are trying to read Middlemarch on computer screens but finding the font size too small—the problem is that they are not reading Middlemarch at all; they are not even having face-to-face conversations with people. Instead, they are watching short videos, playing games and chatting with each other in inconsequential and disembodied textese. However, if we were interested in the relative comprehensibility of text presented on screen and on paper, I do not see that there would be anything non-sensical at all about conducting a controlled study into the issue.
Finally, I am not sure that Hattie’s argument on this point is consistent with his more general position, either in respect of his views, already stated, on understanding, or with what he said about watching television at home in the interview that he gave to the BBC Educators series on 20 August 2014:

Unfortunately it has a negative effect. And the problem is that people at home who watch a lot of television haven’t learned how to do other things related to reading, listening, interacting with others. So it’s a missed opportunity if you watch too much television.
Going online is not necessarily the same as watching television, of course. But the two cases share a common principle: while there might be nothing harmful about doing one particular thing, it may still be harmful if children spend time doing that one thing to the exclusion of doing other things. The development of children reflects the influence of the sum total of their environment. And that is precisely the argument that Susan Greenfield and her like are making.
Evaluating Greenfield’s contribution to ed-tech
My criticism of Greenfield’s TES conclusion
On reading her TES article, and having agreed with Greenfield over much of what she said about ed-tech, my initial reaction to her conclusion was to criticise her understanding of the scope of technology in education, which she seems to understand (as do her opponents) in terms of the sort of generic digital technology that already exists. Observing that such technology tends to undermine traditional teaching, she then assumes that the only way to improve teaching is to put the computers away and hire better teachers. I disagree.
While New Labour spent a lot of extra money on education, not all of it went on technology. Much of it went on raising teacher pay (what Greenfield advocates)—and this was no more effective at raising standards of learning than was the investment in technology.
I believe that both pro- and anti-ed-tech camps fail sufficiently to acknowledge that teaching is itself a technology—albeit an underdeveloped and haphazardly implemented one. When deployed in their raw form, generic internet technologies have only managed to support independent learning, thereby undermining the central role of the expert teacher. Those of us who question the effectiveness of independent learning look to a new layer of education-specific technology, which will deliver targeted and purposeful activity through a managed process of assignment, monitoring, criticism and progression.
The fact that the aimless application of technology to education that we have seen over the last ten years has been at least ineffective, if not positively harmful, does not mean that technology cannot be applied to education in new, more thoughtful ways. It does not mean that technology cannot support teaching as traditionally understood, rather than being used to subvert it. Nor does it mean that the development of such technologies does not represent a very good way to tackle the chronic under-performance of our education system. Our aspiration to improve the quality of teaching is intrinsically a technological project.
Greenfield’s general attitude to technology
This is in fact precisely what Greenfield herself advocates when talking in a general context, as she does at the end of the interview she gave to BBC Booktalk on 21 August:

We might say, “you know what, we want to take things into our own hands. Instead of just sleep-walking into this, these are just computers, can’t we harness them to deliver something very exciting and beneficial to humanity rather than just assuming that it is all automatically wonderful?”
A search for Greenfield on Twitter turns up the following quotation from Greenfield by Edinburgh Book Fest (@edbookfest):

My concern is where digital technology stops being a means to an end but becomes an end in itself.
That, in general terms, is exactly the same argument that I am trying to put on this blog with regard to ed-tech. It is the assumption that technology is intrinsically and automatically wonderful that is the enemy of its intelligent application—and that is why, after decades of advocacy and the expense of billions of pounds of taxpayers’ money, ed-tech still has not been successfully used to improve teaching and learning as traditionally understood.
The onus of proof
If Susan Greenfield were saying that children should be banned from going on Facebook and other social networks, she would clearly need to have produced a much higher level of evidence to support her case than she has done. But she isn’t saying this—she is saying that we should be very careful before these social networking tools are rolled out, at the taxpayer’s expense, to play a routine part in our education system. To justify such a precautionary approach, what we should expect from the likes of Susan Greenfield is not proof but plausibility. As far as I can see, she has met that criterion of plausibility very comfortably:
- the theory of brain plasticity is widely accepted by neurologists—and does not just refer to the fundamental reorganisation of the brain in response to traumatic injury but to the routine encoding of new experiences and patterns of behaviour;
- preliminary evidence suggests that there may well be a decline in people’s attention span associated with the rapid-paced, high-stimulus environment of modern media;
- there is a lack of evidence (in spite of much searching) that the internet has in itself improved learning outcomes (as traditionally measured) in our schools.
Just as in a criminal trial the onus of proof lies on the prosecution, not the defence, so those who disagree with Susan Greenfield must realise that if they want to roll out Facebook and Twitter to our schools, then the onus of proof lies on them, not on those urging caution.
Generic internet technologies have been used for educational purposes for as long as they have existed. The evidence of positive outcomes from these trials is so small as to be lost in the noise. This fact, stated clearly by NESTA’s Decoding Learning and the EEF’s Impact of Digital Technology on Learning, is plain for all who have eyes to see. While it is understandable that advocates of generic technology-enhanced learning should kick back against a perceived criticism of their beloved online world, they need to realise that the position that they are defending has already been lost. In these circumstances, if they are willing to reconsider the evidence and approach the debate with an open mind, they may find in Greenfield (and indeed myself) allies and not opponents.
Neither I nor Greenfield is arguing that the modern internet is necessarily harmful. For myself, I am enthusiastic about the opportunity it presents for education to access information and to meet people across the world, so long as we can help young people use those opportunities productively. All that we are arguing is that, like almost any new technology, digital technology can be used either beneficially or harmfully, and we therefore need to keep our eyes open with respect to what these potential harms might be, and stop dismissing as irrelevant the lack of evidence of benefit.
Above all, we must abandon the position that says that the introduction of technology into education is inevitable or that it is intrinsically good. We must start to examine the way that we introduce technology into our schools, evaluating its effectiveness against its ability to help us achieve agreed objectives. Only then will we produce a version of ed-tech that works; and only then will we have understood what technology truly is: the means by which we achieve our ends.
I should add that this little controversy by no means exhausts the potential intersection between education technology and neurology. Another important issue is the concept of digital load and the relationship between short-term and long-term memory. But that is a topic for another day.