Paul A. Kirschner & Mirjam Neelen
The seed of this blog was sown by a podcast trilogy from Freakonomics on what is or could potentially go wrong in the medical world and the history of those mistakes (Bad Medicine). Paul ‘binge-listened’ to the three podcasts because the number of ‘ignored’ podcasts on his smartphone had reached epic proportions. He hadn’t touched the podcasts since 9 November 2016, the day that the world woke up in, rather than from, a nightmare named Donald Trump. Paul decided to go cold turkey on anything that had to do with the United States, though others might say he chose to exist in a bubble of denial. Anyway, Paul somehow found the courage to start listening to his podcasts again, and the ‘bad medicine’ ones were his first venture back into reality.
Back to business. The Bad Medicine podcast was a triptych on failures in medicine and their causes. Think of once-accepted treatments such as blood-letting, trepanation (drilling or scraping a hole in someone’s skull), and treating diseases such as syphilis with mercury. One of the leitmotifs of the podcasts was something called ‘eminence-based medicine’.
Detail from The Extraction of the Stone of Madness (Hieronymus Bosch 1488–1516) – Public Domain
From eminence-based to evidence-based medicine
Most of those reading this blog probably know what evidence-based medicine is: medical decision-making based on evidence as provided by randomised controlled trials, also known as RCTs. In other words, making medical decisions based on proven positive effects. In an RCT, one group of randomly assigned patients gets an intervention (e.g., medication, therapy, surgery, or some other kind of treatment) while another group (the control group) doesn’t get any intervention, is given a placebo, or receives a different type of intervention. If the trial is carried out in a double-blind way, neither the researchers nor the participants know who is in which group. In this way, it’s possible to determine whether the experimental intervention is more or less effective than no intervention or than a different type of intervention.
Now you might start yelling, “Well, it’s not as if all RCTs are of high quality!” True: sometimes researchers cherry-pick their participants to get better results (i.e., they choose patients who are younger, healthier, and so on than the real population with the condition, and thus not really representative of it), pharmaceutical investors try to influence the research so as to profit from positive results, and/or when results are disappointing, researchers may be unable or unwilling to publish them (i.e., publication bias), and so forth. However, despite all these challenges, the RCT remains the gold standard for this type of intervention research.
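The random-assignment logic described above can be sketched in a few lines of Python. This is a minimal illustration only: the patient list and the outcome scores below are entirely hypothetical, invented for the example, and a real trial would of course involve proper power calculations and statistical testing.

```python
import random
import statistics

def assign_groups(patients, seed=42):
    """Randomly split a patient list into a treatment and a control group."""
    rng = random.Random(seed)  # fixed seed so the example is reproducible
    shuffled = patients[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical outcome scores measured after the trial (higher = better).
outcomes = {
    "treatment": [7.1, 6.8, 7.5, 6.9, 7.3],
    "control":   [6.2, 6.5, 6.1, 6.6, 6.3],
}

# The estimated treatment effect is simply the difference in group means.
effect = statistics.mean(outcomes["treatment"]) - statistics.mean(outcomes["control"])
print(f"Estimated treatment effect: {effect:.2f}")
```

The key point is that the assignment is random: neither the researcher nor the patient chooses the group, which is what protects the comparison from the cherry-picking problem described above.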
Because we hear the term evidence-based medicine all the time, we might think that this has always been the way that medicine has been practised; that is, that it has a relatively long history. Spoiler alert!! This is not the case. From Hippocrates – ‘father’ of western medicine – until approximately the last decade or two, the field of medicine based its decisions on three main pillars: (1) ‘experience’, in other words trial-and-error practices, (2) the ‘we’ve always done it this way’ approach, and (3) eminence-based practice. This blog focuses on the last one.
Eminence-based medicine refers to clinical medical decision-making based on the opinion of a medical specialist or ‘prominent’ healthcare specialist (more often than not, these are grey male eminences, or old cronies). Now, that’s quite different from making a decision based on a critical judgment of available scientific evidence. People in general have a tendency to believe that such eminent notables have tons of knowledge and wisdom and that, hence, their opinion on a certain matter is sufficient to justify a clinical decision. Oftentimes, we think that such an ‘authority’ or ‘expert’ knows more than anyone else and is therefore credible. However, it’s critical to remember that without solid empirical evidence their opinion is as good – or as bad – as anyone else’s. In other words, their opinion ‘ain’t worth a damn’ when it comes to making critical decisions.
The good news is that medicine has mostly shaken off this eminence-based approach. The field now even collects, reviews, and evaluates all available information on scientific approaches in medicine. The Cochrane Collaboration started this process in 1993 as an initiative of medical researchers, in particular Iain Chalmers. The Collaboration has over 37,000 medical research volunteers in over 130 countries whose goal is to answer important questions in medicine by creating meta-analyses of RCTs in a specific area[1]. Their reviews – all part of a central register called the Cochrane Library – are the gold standard of evidence in medicine and oftentimes have the final say in debates on all kinds of medical subjects[2].
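To give a flavour of what a meta-analysis does with a set of RCTs, here is a minimal fixed-effect (inverse-variance) pooling sketch in Python. The effect sizes and variances are hypothetical numbers invented for illustration; real Cochrane reviews use far more elaborate methods (heterogeneity tests, random-effects models, risk-of-bias assessment, and so on).

```python
def fixed_effect_pool(effects, variances):
    """Pool study effect sizes with inverse-variance weights.

    Each study is weighted by 1/variance, so precise (large) studies
    count more than imprecise (small) ones.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Hypothetical effect sizes and variances from three small RCTs.
effects = [0.40, 0.25, 0.55]
variances = [0.04, 0.09, 0.16]

pooled, var = fixed_effect_pool(effects, variances)
print(f"Pooled effect: {pooled:.3f} (SE {var ** 0.5:.3f})")
```

Note how the pooled estimate sits closest to the most precise study (the one with the smallest variance); that weighting is the core idea behind combining many small trials into one stronger conclusion.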
What does all of this have to do with learning and education? Well, the point is that education unfortunately doesn’t really take an evidence-based approach to the profession, and perhaps it’s not even close to being ready for such an approach (also see our blog Will the Educational Sciences Ever Grow Up?). In the learning and education sciences, not only do we have a range of individual, often small-scale studies of a problem or intervention, with no recognised database of review studies, but we are also blessed with a range of ‘quacksperts’, ‘eduquacks’, and ‘eduquackademics’ who propagate a certain type of eminence-based educational approach. Let’s take a close look at some of those ‘eminences’ (and pseudo-eminences; in other words, eminences who suffer from the expertise generalisation syndrome) and their proposed educational or learning interventions.
Let’s meet our educational eminences, aka eduquacks
Let’s start with Sugata Mitra, Professor of Educational Technology at the School of Education, Communication, and Language Sciences of Newcastle University (United Kingdom). However, a minor detail is that he doesn’t have any credentials or expertise in the area of education. He actually has a PhD in Solid State Physics (Indian Institute of Technology in Delhi), for which he studied organic semiconductors (sounds impressive and probably is, but neither of us suffers from the expertise generalisation syndrome, so we can’t say anything about it), which has precious little to do with education and learning. Unfortunately, the fact that he doesn’t have the educational expertise doesn’t prevent him from preaching nonsense such as “Knowledge is an out-of-date thing that came from a time when it wasn’t possible to have access to knowledge at the point-of-need (read: Internet).”
Mitra doesn’t seem to know that there’s a difference between data, information, and knowledge. Of course there’s a wealth of information (some reliable, some totally unreliable) from a ton of different resources (we repeat, some reliable, some unreliable) available on the Internet. However, this shouldn’t be considered knowledge (also see our blog Why Google® Can’t Replace Individual Human Knowledge). Based on this type of nonsense, Mitra blabbers all kinds of things. For example, that he has discovered that if people, and especially children, ‘mingle’ (his word, not ours) with the Internet, they no longer need to know anything. In other words, groups of children can learn almost anything all alone and without the help of teachers through their exposure to the World Wide Web. Or one of our favourites: “Knowing is an obsolete idea from a time when it was not possible to access or acquire knowledge at a moment of need. The idea of knowing assumes that the brain must be “primed” in advance for circumstances that may require knowledge. Just in case.” Actually, it does!
There’s plenty of evidence that this claim is simply not true, and Mitra himself has never presented any evidence in reputable publications to prove that his claims are correct. However (and now we get to the elephant in the room), followers of eminence-based education love to swallow this snake oil.
Excellent articles on Mitra and his nonsense have been written by:
- Tom Bennett: https://www.tes.com/news/blog/sole-snake-oil-learning-experience
- Pedro de Bruyckere: https://theeconomyofmeaning.com/2013/03/18/sugata-mitra-faces-quite-a-backlash/
- Janelle Ward: http://www.thebrokeronline.eu/Blogs/Janelle-Ward/A-critique-of-Hole-in-the-Wall-HiWEL
- Donald Clark: https://donaldclarkplanb.blogspot.fi/2013/03/sugata-mitra-slum-chic-7-reasons-for.html
Let’s move on to Sir Ken Robinson, who has a PhD in drama and theatre in education (which, by the way, might explain his excellent rhetorical technique, his charm, and his convincing presentation skills!). He propagates the claim that schools “kill creativity” (yep, nothing less!). However, there’s a monster truck (with at least a double trailer) of research showing that creativity heavily depends on knowledge and that useful creativity is virtually impossible without knowledge. This doesn’t, however, stop Sir Ken from shouting “Schools kill creativity” from every rooftop, whether or not anyone wants to hear it (and a sad detail is that a lot of people DO want to hear it, for some strange reason). Keith Sawyer[3] said:
I believe that schools are essential to creativity. We’ve learned that creativity requires a high degree of domain knowledge… Formal schooling is quite good at delivering this domain knowledge to students. Creativity research certainly doesn’t suggest that everyone would be more creative if only we got rid of all of the schools! However, schools could better foster creativity if they were transformed to better align with creativity research (p. 390).
Some excellent pieces on why Sir Ken’s arguments are problematic can be found here:
- Crispin Weston: https://edtechnow.net/guest-posts/ken-robinson-rebuttal/
- Joe Kirby: https://pragmaticreform.wordpress.com/2013/10/12/what-sir-ken-got-wrong/
- Carl Hendrick: https://www.tes.com/news/school-news/breaking-views/ken-robinson-a-teacher-basher-schools-must-stop-listening-his
Next in line is the father of the ‘digital native’, Marc Prensky, who holds a degree in French, an MAT, and an MBA from Harvard Business School (beginning to recognise a trend with our quacksperts, who (second spoiler alert) actually don’t have much, if any, expertise in the domain they quack about?). Prensky observed something, namely that children apparently were multitasking on their devices. Based on this observation, he concluded several things, such as that children can actually multitask (which is not possible) and that they have developed unique characteristics that distinguish them from previous generations. He also said that today’s children have sophisticated technical skills and certain learning preferences, and that traditional education ignores both.
Next, without conducting any proper research on what he had observed and concluded, he published two articles (part 1 & part 2) on the matter (followed by some well-selling books), which suited the spirit of the times quite well. Despite all the evidence to the contrary (children are unable to multitask, and catering to learning preferences is not always, in fact usually NOT, effective), the words of this eminence form the foundation of many misconceptions in and about education. The article Paul wrote with Jeroen van Merriënboer (which includes scientific evidence!) makes mincemeat of the digital native in no time.
Also read:
- Donald Clark: http://donaldclarkplanb.blogspot.fi/2013/02/prensky-game-on-digital-natives.html
- Henry Jenkins: http://henryjenkins.org/2007/12/reconsidering_digital_immigran.html
- Ellen Helsper and Rebecca Eynon: http://tinyurl.com/lrh4jxn
The list is virtually endless. A short list could include the following names as well:
- Carol Black (Writer and TV-maker of Ellen and The Wonder Years) – What the modern world has forgotten about children and learning,
- Amy Chua (Lawyer) – Battle Hymn of the Tiger Mother,
- George Lucas (Filmmaker; e.g., Star Wars and Indiana Jones) – The George Lucas Educational Foundation (Edutopia), and
- Bill Gates (Microsoft co-founder) – K-12 Education program, and…
- Add as many as you like!
Dear readers: Don’t you think it’s time to get rid of our belief in such eminences and move toward evidence-informed practices? Isn’t it time to see that these emperors have no clothes? We’d say “YES”, and that we, human beings, and especially our children, deserve it!
Illustration for The Emperor’s New Clothes (Hans Christian Andersen) by Vilhelm Pedersen, Andersen’s first illustrator
[1] There are five types of Cochrane Reviews:
- Intervention reviews assess the benefits and harms of interventions used in healthcare and health policy.
- Diagnostic test accuracy reviews assess how well a diagnostic test performs in diagnosing and detecting a particular disease.
- Methodology reviews address issues relevant to how systematic reviews and clinical trials are conducted and reported.
- Qualitative reviews synthesize qualitative evidence to address questions on aspects other than effectiveness.
- Prognosis reviews address the probable course or future outcome(s) of people with a health problem.
[2] Summaries are available for free at http://www.cochrane.org/evidence
[3] Sawyer, R. K. (2012). Explaining creativity: The science of human innovation. Oxford: Oxford University Press.