The ed-tech community should listen carefully to concerns being raised about the effect on our children’s development of excessive time spent online
Baroness Greenfield recently wrote an opinion piece in the TES, restating her view that education technology is not just ineffective but may well be positively harmful. “More pseudo-science poppycock”, harrumphed one prominent ed-tech tweeter, who was quickly supported by others. “Actually, she makes some rather sensible points”, said I. “No, no”, said my interlocutors, “the Baroness has been completely discredited. But if you are going to blog about it, please keep it short”. “1,000 emollient words”, I promised.
I am not sure how well I managed to be emollient—I am afraid it is not a style that comes naturally to me—and I certainly failed to keep it short. But, if you are interested in ed-tech, then I think its intersection with emerging neuroscience, and the controversy that has blown up in this area, are worthy of careful consideration.
Dangers of a life online
The loss of human relationships
My interlocutors thought that Greenfield had really lost the plot when she said: "If you use computers heavily, guess what? You are going to turn yourself effectively into a computer."
You may question whether this is a great figure of speech—but figure of speech it clearly is: no-one was supposed to take it literally. Greenfield’s underlying point is that “The brain adapts exquisitely to the environment” and that humans are not driven to learn by an internal roadmap (as constructivism has it), nor by an innate love of learning, nor even by a behaviourist/utilitarian imperative to avoid pain and maximise pleasure. Like most of the higher mammals, we are born mimics. This is why role modelling is so important in human development and why it is at least plausible to suggest that young people who spend large amounts of time reacting to computer-generated stimuli at the expense of human relationships may suffer educationally.
The empirical data seems to back up Greenfield’s argument. MOOCs (Massive Open Online Courses) promise education delivered by technology at massive scale but without a personal relationship with a human expert. Wherever they have been implemented in the context of formal learning, their performance has been woeful (see my 2012 prediction that this would be the case and my more recent contributions, made with the benefit of more knowledge of actual MOOC outcomes, at Online Educa Berlin 13 and Learning Technologies London 14).
Privileging knowledge over understanding
Susan Greenfield is right to claim that ed-tech focusing on "independent learning" has over-emphasised the acquisition of information online (often over-sold as "internet research"). This has failed to recognise that "true knowledge is how you use those facts, relate them to each other and put them together in a framework".
This is a point that I have made many times before on this blog, citing similar criticisms of DIY online learning by Diana Laurillard and Ofsted, and citing Eric Mazur and Bloom in support of Susan Greenfield’s version of true understanding.
The sort of understanding that Greenfield is talking about is acquired by interacting with authoritative “others” and although these others may in the future be provided in part by virtual tutors, simulation environments and structured interactions with peers, we have not yet been able to find an authoritative other that betters the dedicated, expert human teacher.
Creativity and imagination
There is a long-standing argument that computers enable creativity. This claim formed the basis of Seymour Papert's Mindstorms; it has been made more recently by Professor Stephen Heppell and Lord Puttnam, by NAACE under the Chairmanship of Miles Berry, and by movements like Apps for Good, for which Leon Cych is an advocate. It is a good argument in theory but it bears an important caveat: children need to acquire a certain degree of capability before they can be creative in any useful sense. This is the point that lies behind Bloom's taxonomy, which proposes an instructional progression from the memorisation of factual information, through the application of that information, to true creativity. It was a point eloquently restated recently by Michael Gove on Question Time (search for "Question Time" if following the link). Open-ended creative projects (in which computers may well play an enabling role) need to be used as part of a sequence of instructional activities and not as a mono-pedagogy. For all its undoubted appeal, Papert's LOGO never really worked.
While computers might have the potential to increase creativity, if you look at what is actually happening in the lives of most young people, it seems to me to be perfectly plausible to argue—as does Greenfield—that computers are tending to reduce creativity, turning us into consumers of pre-packaged entertainment. Even social networking (the technological basis of much theorising about Web 2.0 and the Wisdom of the Crowd) seems to me to encourage young people to become purveyors of fashionable opinion rather than independent, critical thinkers.
This does not mean that this is a simple either/or argument. It does mean that Susan Greenfield should at least be taken seriously when she warns that children are "constantly being cued by someone else's second-hand imagination on the screen".
The lack of evidence for ed-tech
Over the last few months I have been challenging the ed-tech experts on the FELTAG and ETAG advisory groups to acknowledge that, although large sums of money have been spent on education technology over many years, there is no robust evidence that it has made any significant contribution to improving teaching and learning. As Susan Greenfield quotes NESTA's Decoding Learning report: "so far there has been little evidence of substantial success in [education technology] improving educational outcomes".
This is a fact that the proponents of ed-tech continue to ignore. Some say that it is not possible to produce evidence of improvement (of course it is); others say that we need to think of technology changing not only the means of education but also the purpose of education (by which means they appoint themselves judge and jury in their own cause); yet others say that we should introduce more technology regardless of these arguments, just to ensure that education authentically mirrors the rest of society (see my recent post, Because it’s there). Until the proponents of ed-tech face up to their own evidential black hole, it is they, not the likes of Susan Greenfield, who lack credibility.
The neurological basis for Greenfield’s views
Summary of Greenfield’s position
Greenfield’s argument about ed-tech rests on her views about neurology, in which she is a specialist. She argues that the brain is highly plastic, developing new neural pathways in response to its environment. If young people spend their time in an online environment that is not conducive to deep thought, this theoretical background explains how such a situation might contribute to a stunted development.
The riposte from Chabris and Simons
José Picardo, one of my interlocutors on Twitter, questions this theoretical background, quoting "leading neuropsychologists Christopher Chabris and Daniel Simons [who argue that] the brain's wiring 'is determined by genetic programs and biochemical interactions that do most of their work long before a child discovers Facebook and Twitter'".
Chabris and Simons are two Professors of Psychology who co-authored a popular book based on an experiment featuring a video of a man in a gorilla suit. Perfectly adapted to winning recognition in the internet age, this research earned them an Ig Nobel Prize (described by Wikipedia as "a parody of the Nobel Prizes" awarded for "ten unusual or trivial achievements in Scientific Research"). The fact that Chabris and Simons are successful popular science writers does not mean that they do not also have day-jobs as serious scientists—but neither does it in itself qualify them for the epithet "leading neuropsychologists". If anyone deserves that title, it seems to me to be Susan Greenfield, who has been awarded 30 honorary degrees from British and foreign universities; heads a multi-disciplinary research group exploring Alzheimer's and Parkinson's; is Senior Research Fellow at Lincoln College, Oxford; has published a neuroscientific theory of consciousness; has received the Michael Faraday Medal from the Royal Society, a non-political Life Peerage, Honorary Fellowships of the Royal College of Physicians and of the Royal Society, and the French Légion d'honneur; and was included in Debrett's 500 most influential and inspiring people in Britain for 2014 (see Susan Greenfield's homepage). I don't myself favour making arguments from authority, but I should have thought you would want to be pretty sure of your ground before accusing such a person of speaking "pseudo-science poppycock" (a phrase used by Bob Harrison, not José Picardo).
As for the quality of the article by Chabris and Simons that José quotes, all I can say is that it doesn't do it for me. Having given their opponents the pejorative label "digital alarmists", they assert that the basic wiring of the brain is established at birth and that, as a result of what we do thereafter, "we will no more lose our ability to pay attention than we will lose our ability to listen, see or speak".
The trouble with this argument is that no-one has claimed that children are losing their ability to pay attention. The claim is that people's ability to pay attention is declining: it is a question of degree, not of total loss. If Chabris and Simons were to respond to the point actually being made by the so-called "alarmists", they would need to say that people's abilities do not change incrementally depending on what they do (like practising the piano, for example). We would surely all reject such a position as absurd.
Chabris and Simons continue by saying that, since the inception of the computer age, modern chess-masters "use laptops to review hundreds of games in rapid succession, in effect 'downloading' into their minds knowledge that is customized for their next opponent. They access the knowledge as they need it, discarding it after the match, and the result is that today's grandmasters play the game better than their predecessors did".
This is a very anecdotal sort of evidence (Chabris is himself a keen chess player). They do not point to any research which explains the nature of the causal link between pre-match preparation and quality of play; and they ignore completely the importance to a chess master of durable, internalized expertise, which is essential both to play the game and to make sense of the downloaded information. In highly competitive environments, the difference between victory and defeat may represent a very small margin of superiority over your opponent—perhaps a hundredth of a second in a 10-second sprint. In their final preparation, different competitors may do different things to find that crucial 0.1% advantage. Some chess masters might choose to browse through past games, others might prefer to go for a quiet walk; but it does not follow that either form of pre-match preparation would be sufficient in itself to make anyone a chess master. Chabris and Simons fall straight into the fallacy which confuses knowledge with understanding. Expertise cannot so easily be outsourced to the internet.
Hattie’s argument against digital natives
José also references John Hattie, the educational researcher, from whom the references to Chabris and Simons originated. In his most recent book, Hattie dismisses those who believe that the internet is changing the way we think, either for better or worse.
His first target is people who think the internet is changing our cognition for the better. Marc Prensky's 2001 "digital native" thesis suggested that we should adapt our educational methods and objectives to match the supposed new cognitive skills of the internet generation. This is a position that has been widely discredited, and when Hattie argues against Prensky, I am with him all the way: "We find such notions [that computers can replace teachers or that students will develop electronically enhanced cognitive resources] unrealistic, unattainable, and fundamentally incorrect."
In getting to this no-nonsense conclusion, Hattie makes the point that there is a critical distinction between the accumulation of information gleaned from the internet and "genuine knowledge acquisition". The words might just as well have been spoken by Susan Greenfield herself, and they represent an implicit criticism of the reasoning given by Chabris and Simons.
Hattie’s argument against “digital alarmists”
Even though Hattie's position is incompatible with Chabris and Simons' reasoning, he nevertheless accepts their conclusions. From the outset he labels those who think the internet is changing our cognition for the worse as "alarmists". These include Sven Birkerts in 1994; the "well-respected researcher in the area of children's reading", Maryanne Wolf, in 2007; and Nicholas Carr, whose book The Shallows: What the Internet is doing to our brains was a Pulitzer Prize finalist in 2010.
I find Hattie's argument here rather underwhelming. When I read the sentence quoted by José, "the notion that Internet usage itself will occasion alterations or deterioration in cognitive capacities has no genuine support from within the known research literature", I cannot help thinking that:
- it is hard to prove a negative (which is perhaps why Hattie does not even try, preferring straightforward assertion);
- the body of research with which Hattie is familiar is about education and not neurology or associated sociological trends;
- the explosion of our use of the internet is an extremely recent phenomenon (Facebook was launched in 2004 and Twitter in 2006), so you would not necessarily expect it to have shown up yet in conclusive, empirical research,
- especially when the bulk of the research studied by Hattie was produced in the "80s, 90s and 2000s", providing minimal overlap with the internet age;
- and that despite all the above, Hattie’s claim appears quite simply to be untrue.
A recent article in the Guardian, Is technology and the internet reducing pupils' attention spans?, cites:
- a recent survey of 2,500 US teachers in which 87% of respondents thought that modern technologies were creating an “easily distracted generation with short attention spans”;
- Sue Honoré, who authored the 2009 report Generation Y: Inside out, reporting that children “who spend a lot of time alone using technology ‘tend to have less in the way of communication skills, self-awareness and emotional intelligence’”;
- a recent study by Dr Karina Linnell at Goldsmiths College into the Himba tribe in Namibia, which found that those who had moved into the town had much shorter attention spans than those who still lived traditional lives in the country;
- just yesterday I read an announcement of a further small-scale study by the University of California, which suggests that children who spend a high proportion of their time online are not as good at reading human emotions as those who spent five days at a summer camp without mobiles;
- and when the supposed lack of evidence was put to Susan Greenfield on 3 August, she cited a Chinese research paper, Microstructure Abnormalities in Adolescents with Internet Addiction Disorder.
I do not claim that any of this research is likely to be conclusive—but it is certainly worth acknowledging.
What do we mean by “re-wiring the brain”?
Lest I be accused of selective quoting, I should say that Sue Honoré added that the reason for poor interpersonal skills was "not because they don't have the capabilities…but because…when they come into situations where they have to work with others, they appear not to concentrate on people".
This strikes me as a rather odd distinction to make—but noteworthy because it mirrors Chabris and Simons' distinction between a capacity and a skill: "Of course, the brain changes any time we form a memory or learn a new skill, but new skills build on our existing capacities without fundamentally changing them."
Is it helpful to say that I have the capacity to make a table, even if I do not have the skill to do so? Or that I have the capability to relate well to other people if I never actually do so? I think not—and I believe that this point lies at the heart of the problem with Chabris and Simons' argument. They dispute the claim that our use of the internet will change our brains in some fundamental, transformative way. But all that Greenfield, Carr et al. are saying is that our interaction with our environment encodes in our brains new patterns of behaviour as an unavoidable concomitant of learning.
While the non-expert reader is likely to think that "re-wiring our brains" sounds like the psychological equivalent of turning into a werewolf, in fact re-wiring is just what brains do all the time. It is a routine process that occurs every day of our lives. And, as I have pointed out frequently elsewhere on this blog, you can learn harmful behaviours just as easily as (perhaps rather more easily than) beneficial ones.
This confusion between traumatic and mundane seems to be reflected in two different uses of the term "brain plasticity". The first idiot's guide that I find on Google suggests that the theory of brain plasticity has been widely accepted by modern researchers for a long time: "Up until the 1960s, researchers believed…by early adulthood…the brain's physical structure was permanent. Modern research has demonstrated that the brain continues to create new neural pathways and alter existing ones in order to adapt to new experiences, learn new information and create new memories."
This explains brain plasticity in terms of the encoding of new neural pathways. But it appears that the term has recently acquired a second meaning, following the discovery that after a major brain injury, the brain will not just lay down new neural pathways but will reorganise its fundamental structure. Maybe (I do not understand the exact details) you can grow a new frontal cortex behind your left ear.
When Chabris and Simons say that the wiring of the brain is initially down to genetics, they are referring to this fundamental structure of the brain and not to the creation of neural pathways. This point also explains the distinction they make between capability and skill. It is not that someone who has never played the piano has the capability to play the piano immediately—by definition they do not—but they do have the capability to learn to play the piano (if they don’t spend all their time playing computer games).
Chabris and Simons are tilting at windmills. Greenfield, Wolf, Carr et al. do not claim that spending too much time online has the same effect on your brain as surviving a catastrophic car crash. They are simply saying that it has an effect on what you learn, and that what you learn affects the way that your brain gets wired up.
Hattie’s “balanced perspective”
When it comes to his own position, Hattie's argument seems to me rather to miss the point. Having stated somewhat magisterially that "a balanced perspective on these issues is most likely expressed in the following way: that human capabilities are not as malleable as certain theories imply",
he continues with a rather bizarre argument about reading comprehension: "It would be deemed non-sensical to try to conduct a controlled study to find out if reading the printed page resulted in more, or less, comprehension than reading the same page on a computer screen…Aspects such as font type, size, and italics basically are irrelevant to comprehension provided the text is genuinely legible, the reader is focused and comfortable, and has adequate vision for the task."
The problem is not that young people are trying to read Middlemarch on computer screens but finding the font size too small—the problem is that they are not reading Middlemarch at all; they are not even having face-to-face conversations with people. Instead, they are watching short videos, playing games and chatting with each other in inconsequential and disembodied textese. However, if we were interested in the relative comprehensibility of text presented on screen and on paper, I do not see that there would be anything non-sensical at all about conducting a controlled study into the issue.
Finally, I am not sure that Hattie's argument on this point is consistent with his more general position, either in respect of his views, already stated, on understanding, or with what he said about watching television at home in the interview that he gave to the BBC Educators series on 20 August 2014: "Unfortunately it has a negative effect. And the problem is that people at home who watch a lot of television haven't learned how to do other things related to reading, listening, interacting with others. So it's a missed opportunity if you watch too much television."
Going online is not necessarily the same as watching television, of course. But the two cases share a common principle: while there might be nothing harmful about doing one particular thing, it may still be harmful for children to spend time on that one thing to the exclusion of others. The development of children reflects the influence of the sum-total of their environment. And that is precisely the argument that Susan Greenfield and her like are making.
Evaluating Greenfield’s contribution to ed-tech
My criticism of Greenfield’s TES conclusion
On reading her TES article, and having agreed with Greenfield over much of what she said about ed-tech, my initial reaction to her conclusion was to criticise her understanding of the scope of technology in education, which she seems to understand (as do her opponents) in terms of the sort of generic digital technology that already exists. Observing that such technology tends to undermine traditional teaching, she then assumes that the only way to improve teaching is to put the computers away and hire better teachers. I disagree.
While New Labour spent a lot of extra money on education, not all of it went on technology. Much of it went on raising teacher pay (what Greenfield advocates)—and this was no more effective at raising standards of learning than was the investment in technology.
I believe that both pro- and anti-ed-tech camps fail sufficiently to acknowledge that teaching is itself a technology—albeit an underdeveloped and haphazardly implemented one. When deployed in their raw form, generic internet technologies have only managed to support independent learning, thereby undermining the central role of the expert teacher. Those of us who question the effectiveness of independent learning look to a new layer of education-specific technology, which will deliver targeted and purposeful activity through a managed process of assignment, monitoring, criticism and progression.
The fact that the aimless application of technology to education that we have seen over the last ten years has been at least ineffective, if not positively harmful, does not mean that technology cannot be applied to education in new, more thoughtful ways. It does not mean that technology cannot support teaching as traditionally understood, rather than being used to subvert it. Nor does it mean that the development of such technologies does not represent a very good way to tackle the chronic under-performance of our education system. Our aspiration to improve the quality of teaching is intrinsically a technological project.
Greenfield’s general attitude to technology
This is in fact precisely what Greenfield herself advocates when talking in a general context, as she does at the end of the interview she gave to BBC Booktalk on 21 August: "We might say, 'you know what, we want to take things into our own hands. Instead of just sleep-walking into this, these are just computers, can't we harness them to deliver something very exciting and beneficial to humanity rather than just assuming that it is all automatically wonderful?'"
A search for Greenfield on Twitter turns up the following quotation from Greenfield by Edinburgh Book Fest (@edbookfest): "My concern is where digital technology stops being a means to an end but becomes an end in itself."
That, in general terms, is exactly the same argument that I am trying to put on this blog with regards to ed-tech. It is the assumption that technology is intrinsically and automatically wonderful that is the enemy of its intelligent application—and that is why, after decades of advocacy and the expenditure of billions of pounds of taxpayers' money, ed-tech still has not been successfully used to improve teaching and learning as traditionally understood.
The onus of proof
If Susan Greenfield were saying that children should be banned from going on Facebook and other social networks, she would clearly need to produce a much higher level of evidence to support her case than she has done. But she isn't saying this—she is saying that we should be very careful before these social networking tools are rolled out, at the taxpayer's expense, to play a routine part in our education system. To justify such a precautionary approach, what we should expect from the likes of Susan Greenfield is not proof but plausibility. As far as I can see, she has met that criterion of plausibility very comfortably:
- the theory of brain plasticity is widely accepted by neurologists—and does not just refer to the fundamental reorganisation of the brain in response to traumatic injury but to the routine encoding of new experiences and patterns of behaviour;
- preliminary evidence suggests that there may well be a decline in people’s attention span associated with the rapid-paced, high-stimulus environment of modern media;
- there is a lack of evidence (in spite of much searching) that the internet has in itself improved learning outcomes (as traditionally measured) in our schools.
Just as in a criminal trial the onus of proof lies on the prosecution, not the defence, so those who disagree with Susan Greenfield must realise that if they want to roll out Facebook and Twitter to our schools, then the onus of proof lies on them, not on those urging caution.
Generic internet technologies have been used for educational purposes for as long as they have existed. The evidence of positive outcomes from these trials is so small as to be lost in the noise. This fact, stated clearly by NESTA's Decoding Learning and the EEF's Impact of Digital Technology on Learning, is plain for all who have eyes to see. While it is understandable that advocates of generic technology-enhanced learning should kick back against a perceived criticism of their beloved online world, they need to realise that the position they are defending has already been lost. In these circumstances, if they are willing to reconsider the evidence and approach the debate with an open mind, they may find in Greenfield (and indeed myself) allies and not opponents.
Neither Greenfield nor I is arguing that the modern internet is necessarily harmful. For myself, I am enthusiastic about the opportunities it presents to education to access information and to meet people across the world, so long as we can help young people use those opportunities productively. All that we are arguing is that, like almost any new technology, digital technology can be used either beneficially or harmfully, and we therefore need to keep our eyes open to what the potential harms might be, and stop dismissing as irrelevant the lack of evidence of benefit.
Above all, we must abandon the position that says that the introduction of technology into education is inevitable or that it is intrinsically good. We must start to examine the way that we introduce technology into our schools, evaluating its effectiveness against its ability to help us achieve agreed objectives. Only then will we produce a version of ed-tech that works; and only then will we have understood what technology truly is: the means by which we achieve our ends.
I should add that this little controversy by no means exhausts the potential intersection between education technology and neurology. Another important issue is the concept of digital load and the relationship between short-term and long-term memory. But that is a topic for another day.
I’m not so sure I’m an advocate for Apps for Good – I’ve filmed them a couple of times as I’m intrigued by the process. Having developers who work in industry come in and outline what they do and help mentor young people isn’t that far from STEM ambassadors in some instances. I am intrigued by the “maker” movement as a whole and how the process works. I do consider some elements facile, however, in that the ethical aspects of bringing a product to market may not be considered but I’ll leave that aside for another time.
Thanks for the clarification Leon. I guess it is mainly through you that I have come across them. And, like you, I think the whole “maker” movement offers important opportunities. As I say in the piece, I think the challenge is to dovetail this pedagogy into instructional contexts that enable people to use technology in a constructive way. I think the problems have occurred when we have assumed that we can just throw them in at the deep end and they will learn to swim.
As Michael Barber says, “The road to hell in education is paved with false dichotomies.”
There is a danger that this argument like many in Ed Tech gets muddy because they are constructed around broad terms that parties on all sides understand in their own ways.
To question the impact of the Internet is very broad since it is merely the network of interconnections that supports the various things such as the World Wide Web, online banking, social networking, email, shopping, multi user gaming or tracking my parcel delivery. In this context Hattie is right to question if it is positive or negative for children and learning.
A common misquote is that, “Money is the root of all evil.” This can justify anti capitalism and challenge materialism. The real quote is, “The love of money is the root of all evil.” Inanimate objects such as money or huge networks of interconnecting computers are never in themselves good or evil. It is the human attitude and involvement that purposes things for good or harm.
In this case perhaps it would be more appropriate to hear Greenfield's direct concerns about Facebook (backed up by evidence) rather than general concerns about the Internet or technology. Similarly it might be good if your nemeses cited evidence of the specific areas where Greenfield's concerns are unfounded.
In truth there are some serious issues of misappropriation of time; reduction of social relationships; addictive behaviour; voyeurism; and blind submission. These are apparent in children using computers for some things. They are also apparent in much of our culture: in television, pornography, fashion, sport, music and beyond.
I would suggest that we are specific about our concerns and open in the widest possible ways to the causes. I would also suggest that we look at the complex interplay between technology and other agents within youth culture. Addiction and consumerism are two spectres that stalk the young and I think that unspecific arguments for and against the Internet distract from the details and complexity of the issues.
I completely agree – and I hope I am not presenting the argument here as a simple dichotomy. I think we need the sort of intelligent examination and evaluation of benefit and harm, and a better understanding of how to maximise the former and minimise the latter, that you suggest. The problem I have is with arguments that say we should have more technology in education, in the words of my last post, "because it's there".
Just want to point out a slight tension here. You make some very strong points here. One of them borrows Susan Greenfield’s idea of “true knowledge”. But the more general drift of the argument seems to want to rest on what you call “robust evidence”, which is, doubtless, hard empirical research.
Isn’t it the case that Sugata Mitra, for instance, would love it if there were a science of learning demonstrating that learning really is an emergent phenomenon that occurs when groups of children stumble across the right kind of tech? Of course, there is no such science so the discourse has to rest on other things, but it would ideally be a pedagogy grounded in neurology or something of that nature.
Is the most effective critique of the edtech discourse one that shares the insistence on a grounding in hard empirical research?
Sorting out what counts as “true knowledge” involves a more philosophical type of enquiry that draws on the empirical but goes beyond it. There is the question: What is knowledge at its best? (A question which implies a value judgment for which there can be no empirical basis.) Or the question: What type of knowledge (or thinking) do we most need to cultivate at this point in history? (A question which requires a reading of history that itself admits of no simple empirical proof.)
The edtech discourse, for instance, tends to support the idea that curiosity is the key to learning, but this (unless qualified by other considerations) tends to affirm an utterly fragmentary approach to learning, with people burrowing deeper and deeper into ever narrower specialisms, leaving the general culture of the public domain to sink into mindlessness. It may be that we need a kind of knowing/thinking that can best counter this terrible fragmentation. And this is perhaps the beginning of a kind of reflection on the ends of our shared existence that won’t get going while we are hamstrung by the insistence that there must be hard empirical evidence.
Just a thought.
Thank you very much Torn for the thought-provoking comment. I replied at length and then lost it. So here goes again.
As you know, I disagree with Sugata Mitra, mainly because I think that interaction and role modelling are critical in formal education. However much Sugata Mitra may want to find empirical evidence to support his position, if I am right then he won’t, outside a very contrived and limited context.
I think you use “true knowledge” in two different ways in paragraph 4:
* to refer to the sort of understanding that is achieved by cross referencing and applying beliefs – this is the sort of psychological meaning that Greenfield and I were using – but which is really about belief structures rather than justified knowledge (except to the extent that one of the justifications for any belief is internal consistency – so a belief that is well integrated into a wider model is more likely to be true than one that is not cross-referenced in this way);
* to refer to things that are worth learning. I don’t think we need to worry about this too much – subject to the caveats that come up later in your comment. We might decide to limit the curriculum for all sorts of reasons – but not because there are only a few things that are true.
So we need to ask two questions – one pedagogical/psychological question about how we encourage students to cross-reference and apply their factual knowledge; the second being about what the curriculum should be.
With reference to the second question, I agree that there are merits in focusing on a core:
* what educated adults know better than uneducated children is that certain sorts of knowledge are more useful than others – so we should guide children to those basic literacies, of which there are relatively few;
* I agree that shared knowledge forms a sort of social cement – though most of us would probably agree that you can have too much social cement as well as too little, and that we need to value diversity;
* pedagogically speaking, as the objective of education is the understanding and not the raw knowledge, it might be advantageous to select the knowledge in a way that facilitates the insights leading to understanding, rather than leaving the collection of the right combination of facts to the chance of discovery;
* pedagogically again, because peer-to-peer interactions are also helpful, I see an advantage in ensuring that all members of a teaching group share common knowledge to promote productive interactions;
* finally, from a research point of view, as the curriculum is one of the many variables that need to be managed, it helps accumulate useful quantities of data if large numbers of students share the same curriculum objectives.
But while agreeing with you on the benefits of some degree of shared curriculum, I am not sure why you see this as being hamstrung by the need for empirical evidence. As I note in point 5 above, I see the two as being mutually supportive.
On the relationship of ed-tech to empirical evidence, I think we should be interested to know whether all sorts of pedagogy (technical and traditional) work or not. And the only way to demonstrate that is empirically. Which is not to say that people should not be able to experiment with new approaches – the appeal to empirical evidence is always backward looking and therefore antagonistic to innovation. But I don’t see the problem with having an expectation that a pedagogy *will* show evidence of effectiveness, and the raising of suspicions if it does not show such evidence when people make a reasonable effort to look for it.
Thanks again for the interesting comment, which I hope I have understood aright.
Thank you for replying. Although this is rather tangential in relation to the Greenfield post, I would like to have one more stab to try to clarify the point.
There are lots of problems with things like the shaky data in Mitra’s work, but, the desire of most of the edtech discourse is to rest on hard science. It belongs to a more general technocratic discourse that wants to refer us to the biggest and most well-crunched numbers. Hence it assumes a particular conception of knowledge (a conception that has been on the rise in Europe since the 17th century). According to this conception, knowledge is utterly impersonal – its validity must be completely independent of any personal experience of the world and of what matters in it. And this is a conception of knowledge that opened up a historically unprecedented rift between a realm of objective truth, which we can share with all rational beings, and a private realm of personal experience in which we perceive things to have a value that cannot raise a claim to truth and that cannot be shared except with those who, for no apparent reason, use the same hashtag that we use.
Thus the world is split into two. The idea at the origin of the modern scientific project was to develop a form of knowledge which would provide a basis for a whole new way of life. All of society would be inspired by Reason and all the old and dark mythology and superstition would wither away. But this is not what happened. What we see is the disenchanted world of science developing in tandem with the burgeoning development of a million new enchantments. God is dead; find your own idols. The rebirth of idolatry is not a sign of the failure of science, but of its success.
The practical consequence of the split is that we are able to have very civilised arguments about how to do things more efficiently and effectively (e.g. how to get children to learn to code as quickly as possible), but we have great difficulty talking with a similar sophistication about what exactly children should be learning. The currently unsophisticated suggestions refer us to the needs of society, but that is a society which does not rest on any inspiring thought about what exactly we should be doing, and so in tailoring education to the needs of society, we are tailoring it to a system that is essentially blind – not a very pedagogically sound move to make.
The tension in your work that I wanted to point out was a tension between: 1) The desire to come up with a robust critique of the edtech discourse; and 2) Your sharing of the same epistemological premises that the edtech discourse relies on.
The edtech discourse belongs to a hyper-valorisation of science (even though its marketing uses so much myth and hype). To point out how weak the science is, is useful and important. But if there is the implication that the big problem with the edtech discourse is that it is not scientific enough, criticism risks turning into a veiled pat on the back.
You want to argue (if I understand correctly) that the traditional approach to education works better than a more tech-intensive one. This is an important argument to make, but surely we don’t just want to live in a society where we see things being done effectively or efficiently. We also want to live in a society where we see the right things being done.
Because it is impossible at this point in history in the West to talk convincingly about the “right things” we might need to make do for the time being with the recognition that there must be something wrong with an economy of knowledge that reduces talk of the “right things” to an incoherent stutter. In this context, “true knowledge” refers to knowledge (which for us is science) plus the awareness of the terrible effects of the impersonalisation of knowledge.
Apologies for still not managing to make the point succinctly and clearly.
Hi Torn, Thank you very much for the clarification. How frustrating for you that I completely misunderstood you first time round! I think I understand better what you are saying now and have a good deal to say in response – but I will have to collect my thoughts and clear my desk before doing so!
Like you, I completely agree with Greenfield’s premise:
“true knowledge is how you use those facts, relate them to each other and put them together in a framework”
However, I argue strongly that digital technology offers potentially unlimited ways to achieve that goal. Education can and must build on the real affordances of technology. It is the above frameworks we should be building and augmenting using digital technologies and networks. That is certainly the focus of the work I am doing at the moment on harnessing the potential of open content and media resources for learning.
With regard to the ‘rewiring’ of the brain – recent research suggests we (lay people) should accept the concept. As Neil Levy, Head of Neuroethics at The Florey Institute of Neuroscience and Mental Health, points out:
“The worry is wrongheaded because all experiences change the brain. If it did not, we would not be capable of learning and storing memories. Brains are supposed to change in response to experiences; that’s a sign they’re working as they are designed to.”
Accepting the above argument pulls the rug from under the naysayers and alarmists who are informed by sensationalist journalism or sales pitches.
Many thanks Theo,
I agree with your substantive points (nothing abnormal about rewiring the brain, ed-tech has huge potential to build knowledge and capability) but I disagree with your conclusion that Greenfield is alarmist and wrong. Although I hope I make it clear that I disagree with her *if* she is arguing that ed-tech doesn’t have potential, I still think that as an ed-tech lay person, she is entitled to say, “if that is true, then you ed-tech guys need to come up with something more convincing than you have done so far”.
As I am sure you realise, I am not trying to do down ed-tech – but I am worried that unless the ed-tech community looks at itself more critically in the mirror, then it will not realise the huge potential that we both believe is possible.
With regard to the article you quote (I edited your link which was missing a final digit) I think it is excellent. I started by thinking to myself, “Plato was not entirely wrong to question the written word because it lures us into believing things on the basis of the authority of print rather than rationality which is only discovered by discourse” – and then I found Neil Levy saying something similar himself. And I also agree with him that one has to look at the overall social effect of technology and not just on the expertise of the individual person. Often technology works by (in the horrid jargon) de-skilling the individual craftsman but improving the final output even so.
Which brings me back to the question in my title which perhaps, even after 4,000 words, I did not really answer. The point I made was that the use of the word “alarmist” represented a slightly unworthy, pejorative attack. But if I were to offer a more direct answer, I would say that she is not alarmist, given the context, which as far as I can see is a generally partisan ed-tech community that seems constantly to revert to a rather simplistic argument: that technology in education is a good in itself.
I don’t think we could really establish whether Greenfield is alarmist or reasonable without (as Plato would have us do) engaging her in conversation. I welcome what she says in the Booktalk interview about using technology to achieve our ends. But if we must judge her by her written word, then I would say that it offers a useful corrective to this ed-tech activism. It is useful, I think, to have someone pointing out what should be pretty obvious, that it isn’t necessarily all good.
We may differ in our interpretation of what Greenfield is saying and whether it is useful, but I think we agree about the potential of ed-tech and the approach through which it needs to be realised. I think there are many of us who have made valiant efforts to make ed-tech work – but too often as isolated individuals. What I despair of is the poor quality of the public or official discourse – something which is needed badly if the ed-tech project is to make genuine progress.
It may also be worth linking to Ben Goldacre’s article on Greenfield at http://www.badscience.net/2011/11/why-wont-professor-greenfield-publish-this-theory-in-a-scientific-journal/. I do agree that Greenfield ought at least to be working towards publication in a scientific journal. Though I stick by my point that in the meantime, plausibility (a word also used by Neil Levy) is sufficient when one is putting forward a hypothesis rather than proven facts. Full empirical evidence may take a long time to produce.
Arguing in defence of Susan Greenfield is essentially click-bait for education nerds. Whilst I can agree with (most of) your ideas here, I think you are too kind to SG. She talks and writes about a subject that is quite far from her domain and, as she does so, she likens the use of Facebook and Twitter to spikes-through-the-brain and Autism. You can call this ‘provoking discussion’ or ‘challenging ideas’, but the fact that she is happy to accept speaker fees and present on a topic (that she acknowledges is outside of her expertise) suggests to me that she simply likes being listened to and has found a sensationalist/alarmist schtick that sells.
I didn’t see the bit about autism & spikes through the brain. That certainly seems extreme (though the details of the comparison actually being made are all-important – maybe you can give me the reference).
I agree we need to avoid sensationalism, superficial argument and ad hominem attacks. My piece above is mainly about what I see as the rather superficial arguments of her opponents. In fact, that is what I spend much of my time on this blog opposing, given that views on ed-tech seem to have been very largely developed on the back of online gurus rather than serious reasoning. It is my critique of social media (from the perspective of reasoned discourse rather than personal psychology) that its underlying mechanisms tend to encourage such superficial populism rather than careful discussion (“click-bait” summarises the problem nicely). Which is not to say that SG has got everything right or is basing her position on any terribly robust evidence – maybe she is, paradoxically, just another online guru.
Even so, my position, as I have been elucidating it in response to Theo, is to welcome a sceptical position on the educational benefit of online networking, as a precondition for a thoughtful conversation to occur. Whether it will is another matter – but I think in our small way, we are doing reasonably well here.
A few weeks ago she was a guest speaker at a corporate learning shindig (for Skillsoft customers). I have a few on my (work) Twitter timeline, so got a treat. This is an example: https://twitter.com/shackletonjones/status/489720643664216064/photo/1
I think her position – certainly her tone – goes beyond healthy scepticism. She also does not comment solely on networking – she has commented on all sorts of tech including television.
I think the majority of her ‘opponents’ are the result of the RI farce or the Goldacre debunking. There are probably some trying to defend their own perspective on Ed Tech, but not the majority.
We are rather relying on N Shackleton-Jones’s suggestion of what Susan Greenfield was actually saying over this slide. It seems rather more likely to me that she is making the general, reasonable and perhaps obvious point that behaviour and attitude are linked to how the brain is wired. In which case, it would be the tweet that is reprehensible, not the talk.
As indicated in my previous comment, I do not entirely accept Goldacre’s position. Hypotheses are formed prior to the collection of evidence. In the context of ed-tech, what SG is saying has some value merely as a plausible hypothesis.
I do not entirely support SG’s position – disagreeing for example with her conclusions on ed-tech in her TES piece. And while I have not myself seen any evidence that she is being unreasonable or overly sensational, even if this is indeed the case, I still maintain that the basic point she is making (that excessive time spent online may be harmful) is plausible, is backed by evidence, and should be given rather more consideration by the edtech community than it is.
Maybe I’ll have to buy her new book, when I will be able to give you a more considered view on how reasonable her full argument is.
Thank you for (yet another) interesting and informative post!
I have been reading your blog for a few years, and though I for the most part agree with your perspectives on the current use of edtech, I think the time has come to start presenting examples of good edtech. Do you have any?
Thank you for keeping with the blog Arnfinn.
My answer to your question is “not really”. My argument goes like this:
* we need education-specific software to make ed-tech work, of a type that works in an integrated environment;
* only industry can do this in a sustainable way;
* but we don’t have the markets, the industry or the interoperability standards to allow this to happen.
So all we are left with is “looking through a glass darkly” at hints and suggestions of what good ed-tech might look like, while teachers do not really have in their own hands the ability to break the impasse.
One problem with researchers and government types is that they are always interested in the evidence base. But if you need to innovate, there is by definition no evidence-base to go on. You have to evaluate what is plausible and take a risk.
So far, this is rather a negative answer. But I have been promising to post some use cases – in the sense of imagined scenarios which might support the case for the plausibility of what I am suggesting. I will try and get my first one up at the weekend. I certainly accept that, driven as I am to comment mostly critically on things that are being said and done, I do not get round enough to clarifying what it is I am proposing.
So thanks for the useful feedback. Crispin
Thank you for responding!