Paul A. Kirschner & Mirjam Neelen
We’re both quite cynical and critical people (although we’re nice as well, most of the time). We also value nuance. So when we make an argument and later realise that the subject’s more nuanced, we course-correct. That’s why we wrote this blog. In 2015, we wrote ‘Learning the Smart Way’, in which we discussed a top 5 of effective and a flop 5 of ineffective learning strategies.
After some long and hard thinking, as well as reviewing some new evidence, we’d like to bring some nuance into the discussion. Nothing is absolute; everything is relative. Just about anything we do as teachers or instructional designers will work somewhere, in some situation, with someone. On the other hand, nothing that we do works everywhere with everyone. When we choose to use the microwave oven, the efficacy of our microwaving depends on what we want to cook (a potato is different from a steak), the specific qualities of the ingredients (a steak or string beans will get tough due to the fibres while a potato gets soft), our skill as cooks (how long does 1 kg of asparagus take if the oven’s handbook says X minutes for 500 g, and should you use the steamer in your microwave or just cover the asparagus and zap them?), and so forth. It’s not simply that “microwaving doesn’t work”. When it comes to learning, people have and use different study skills (NOT different learning styles!), different levels of expertise (see, for example, the expertise reversal effect), different amounts of knowledge of the subject (not necessarily the same as expertise), different learner needs, different motivations, and so forth.
Let’s take a few of the most well-known ‘study strategies’ that don’t work (see the ‘flop 5’ back in 2015):
John Dunlosky and his colleagues (2013) reported that highlighting a text (using coloured markers, underlining), though one of the most used study strategies by students, doesn’t work. Indeed, as far as we know there’s no direct connection between the tip of a felt-tipped marker or the nib of a pen and our brains whereby what we mark or underline is magically transported to and stored in our brains. But to understand why highlighting and underlining don’t work, and whether they could work, we first need to look at how people mostly use them and see whether that is the reason they don’t work. In other words: Is it the tool or the user? Most people, independent of age, read through a text and mark what they think is important when studying. Sometimes they even use different colours to highlight different things (e.g., themes or concepts). Having done this, many if not most feel that they’re done; they’ve read the text and highlighted the important parts and thus, so they believe, have learnt from it. This immediately ‘highlights’ (no pun intended) a number of problems.
First, just highlighting something doesn’t mean that it has been processed in/by our working memory and has consequently been stored in our long-term memory (i.e., has been learnt). It’s kind of the same as when you copy and paste something that you’ve read online into a new document. Highlighting or electronically copying and pasting doesn’t involve much – if any – thought or processing of the information, and therefore there’s no learning.
A second problem is that we assume that people are capable of separating the wheat from the chaff in what they’re reading. This is a dangerous assumption. If you’re reading something that’s new to you and you don’t have the necessary background knowledge to really understand it, then you really don’t know what’s important and what isn’t. The best you can do is either guess or make use of textual cues (e.g., boldface, italics/cursive, key words like ‘thus’, ‘as a result’, ‘because’…). In other words, there’s no guarantee that what someone thinks is important and, thus, chooses to underline or highlight is actually important. And if you’ve highlighted information that isn’t actually relevant or important, the highlighting has, at best, no function.
But can highlighting be effective? And if so, how? The answer is yes, it can be effective! How? Well, for example, if the teacher takes the time to lay a proper content-related foundation – for example by presenting a good advance organiser – they present the learner with knowledge and a cognitive schema, which increases the chance that their students will be able to separate the wheat from the chaff and properly process it into long-term memory. A next step might be for the teacher to model the highlighting behaviour, showing what they themselves would have chosen to highlight and why. Modelling is a very powerful technique, especially when used with/for novices! Another or additional step would be for the teacher to look at what’s been highlighted by individual students or the class as a whole and then provide feedback: pointing to what was chosen as important but wasn’t, or to things that weren’t highlighted but should have been. Finally, students need to learn how to use this technique; highlighting or underlining is only a first step. What do you do AFTER highlighting? How do you use it to study? This must also be taught to and practised with students.
We must say that when we first read the Dunlosky et al. article from 2013 on the efficacy of ten often used and studied study strategies, we were confused. We ‘knew’ that self-testing, due to the retrieval effect as well as what Robert and Elizabeth Bjork (2014) call the generation effect, worked really well when it came to learning and retention. In our minds, summarising should be the optimal type of retrieval practice/self-testing. Summarising requires you to retrieve information from your long-term memory, reprocess it in working memory, and then reformulate it either orally or textually. This requires not only retrieval, but also paraphrasing (i.e., putting things in your own words), which is a strong way of processing information. At the same time, Paul was co-supervising (along with Liesbeth Kester) a PhD candidate (Kim Dirkx) who was studying the effects of retrieval practice in high school students. One of the conditions in her research, along with a reread condition and a ‘traditional’ retrieval (i.e., self-test) condition, was that the students had to make a summary of what they had read. Expecting to find that self-testing was better than just rereading, the researchers also expected summarising to be at least as good as self-testing, if not better. It wasn’t. So, Dunlosky et al. found that summarising wasn’t effective for learning, and this was, unexpectedly, confirmed by Kim’s research. Why? To try to determine why summarising didn’t work, the researchers looked at the summaries made by the students and, lo and behold, the summaries were terrible. They had fallen into the trap of assuming that high school students who had once learned how to write a summary were actually capable of doing just that. What could have been the problem?
First off, did the students actually have the necessary domain-specific knowledge to determine what should be in the summary and what shouldn’t? This was a problem for highlighting/underlining, but also a problem for summarising. As David Ausubel wrote in 1963, the most important factor for learning new things is what you already know. If you know little about a specific topic, it’s very difficult to determine what’s important and what’s not. Carl Bereiter (1985) spoke of the learning paradox, namely that “if one tries to account for learning by means of mental actions carried out by the learner, then it’s necessary to attribute to the learner a prior cognitive structure that is as advanced or complex as the one to be acquired” (p. 202). Without a prior cognitive structure, it’s difficult to order and evaluate new information.
A second problem is that summarising is a skill that needs to be learnt and practised. Writing a summary is probably taught somewhere along the ‘curriculum line’ (e.g., in a language lesson in elementary or secondary school), but being taught something once or twice isn’t enough to learn a procedure, let alone acquire such a complex skill. Summarising, after being properly taught, needs to be practised, practised, and practised some more. And it needs not only to be practised in the language class, but should also be used and practised in biology, history, civics, economics, art history, and so forth. Not only should it be practised in those different situations (this is tantamount to interleaving on the one hand and contextual interference on the other!), but the teachers there should also, just as with highlighting, look at the summaries, give the students individual and collective feedback, model what a good summary of the history or biology text might look like and why, etc. Also see our collaborative blog with Tine Hoof and Tim Surma on summarising as a productive learning strategy.
The third and final example – though there are more – is concept mapping and all of its related mapping techniques (mind maps, flow charts,…).
Concept mapping is ‘in’, not only as a hype, but also as a generative learning strategy as described by Logan Fiorella and Rich Mayer. We talked about this in a past blog. We wrote:
Mapping is a productive learning strategy that we usually know as mind mapping, concept mapping, and so forth. It involves extracting the most important concepts from an information source and arranging them in such a way that the interrelations between those concepts become visually clear. Mapping can also make the interrelations between the new information (the presented concepts) and prior knowledge explicit.
Unfortunately, mapping tends not to work. The reasons for this aren’t much different from the previous two. Mapping requires the learner to first choose important concepts from a text, lecture, film and so forth and then relate or connect those concepts to other concepts, sometimes enumerating their relationships too (i.e., in a concept map). The problem here is again that with little knowledge, the student has problems choosing the most important concepts and also determining/naming their interrelationships.
A second problem is that students (children and adults alike!) are often busier with making ‘pretty’, colourful diagrams than with processing and arranging the information. Mapping is a functional strategy, not a work of art.
Finally, if the concept map or mind map is ‘wrong’, then the best that can happen is that nothing is learnt and the worst is that incorrect terms are chosen and incorrect relationships are made.
And the solutions to these problems echo those of the other techniques (prior knowledge, modelling the procedure, examining and correcting maps both individually and with the whole class,…) with the exception that teachers can also structure the mapping process in different ways. A teacher, for example, can present either an empty or partially filled map (see this as similar to a partially worked-out example) so that the students have a template with which to work. The teacher can also present an empty piece of paper (or desktop) with the relevant types of mapping elements and relationships along the borders that students can use to structure and guide their own maps.
And so the story goes…
When we wrote our blog back in 2015, discussing the ‘top’ and ‘flop’ 5 learning strategies, we might have come on a bit too strong. Actually, except perhaps for rereading, there are no inherently ‘top’ or ‘flop’ learning strategies; there are only top or flop uses of learning strategies.
Bereiter, C. (1985). Toward a solution of the learning paradox. Review of Educational Research, 55(2), 201–226. https://doi.org/10.3102/00346543055002201
Dirkx, K. J. H. (2014). Putting the testing effect to the test: Why and when is testing effective for learning in secondary school (Unpublished doctoral dissertation). Open University of the Netherlands, Heerlen.
Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4–58. https://doi.org/10.1177/1529100612453266
“The expertise reversal effect occurs when the relative effectiveness of different learning conditions reverses with changes in the level of learner expertise.” In other words, what works for a novice or beginner often doesn’t work for, or even hinders learning in, an expert, and vice versa. See Kalyuga (2018) at Expertise Reversal Effect and its Instructional Implications (chartered.college)