In teaching, sometimes 1+1≠2! Here’s why.

Paul A. Kirschner & Mirjam Neelen

OK, we’re about to begin. I’ve got my slides ready. I’ve made sure that the most important things that I have to say – that is, the things I feel my audience should learn – are in the presentation. Interesting headings, bullets with extensive info that they can’t miss, I’ve rehearsed that info so that I can deliver it fluently, timed it… what can go wrong? Well, as Robert Burns wrote in his poem ‘To a Mouse’ (1785): The best-laid plans of mice and men often go awry![1]

What went wrong? Why didn’t they learn? Now, according to cognitive load theory (CLT; here’s a good blog by Adam Boxer explaining CLT), people struggle to effectively process your message when you read your slides to them (e.g., via PowerPoint, Prezi, PowToon,…). When we say ‘read your slides’, we mean that you read them a bulleted list containing (hopefully) the key information that you want to get across to your audience, or that you want your audience to learn or ‘take away’. The redundancy principle (Sweller, 2005) in CLT explains why people struggle when you read those bullet points to them. Offering redundant information – in our example, the exact same information in two different forms: the text as a bulleted list on screen plus the audio of you reading those bullet points aloud – has a negative impact on learning because it leads to excessive extraneous (extrinsic) cognitive load.

The cognitive theory of multimedia learning (CTMML; Mayer, 2005) takes it a step further. CTMML states that offering images or animations in combination with spoken text is preferable to offering images or animations together with spoken text plus the exact same text on screen. So, even when you use images in your presentation (which is a big improvement over a bulleted list), you still need to pay attention!

The reason why combining images/animations with spoken text is better becomes clear when we look at Allan Paivio’s (1969, 1986) dual coding theory. When combining an image and/or animation with spoken text, we simply make good use of … dual coding. Right, that’s a bit of circular reasoning of course, so let’s see what it means.

It means that a picture (image/animation) is processed through a different channel – Paivio referred to these as memory stores – than words, whether written or spoken. In short: there’s a channel for ‘pictures’ and a channel for ‘words’ (spoken/written).

[Figure: dual coding]

One thing we’d like to add here is why we learn (i.e., remember/understand) more poorly when the slides are read to us: a) our limited working memory simply becomes overloaded, and b) when the audience needs to listen to what you’re reading to them while at the same time reading that same text on screen, you basically force them to multitask.

And it’s simple: humans, as information processing systems, can’t multitask. We can’t execute more than one information processing task at a time. In the context of our example, this means that we aren’t able to semantically decode/understand what we read and what we hear at the same time.

All we can do is switch between two or more information processing tasks (task switching), but unfortunately this leads to a decrease in speed (in other words, it’s less efficient) and to more errors. We can also ‘serially monotask’, which means that we complete one process/task and then start another one; for example, we can listen to a verbal explanation first and then read a text. This requires the instructor to give the audience the time to do that, but as is often the case: once (s)he has read the last word on the often too full slide, (s)he clicks to the next one and here we go again.

It’s sad but simple: We humans can’t multitask.

Recently, we have also gotten neuroscientific evidence for our statement that people can’t process a verbal and a written explanation presented at the same time. Fatma Deniz, Anwar O. Nunez-Elizalde, Alexander G. Huth, and Jack L. Gallant from the University of California, Berkeley studied how people process spoken and written language. More specifically, they looked at which parts of the brain are ‘busy’ (active) when either listening or reading. What they did was use a “direct and convincing way to determine if listening and reading involve a common underlying semantic representation” by directly comparing the semantic selectivity maps obtained during listening to and reading a text. They did this by using functional magnetic resonance imaging (fMRI) to record blood-oxygen-level-dependent (BOLD) activity in people while they listened to and read the same narrative stories. They write:

Humans can comprehend the meaning of words from both spoken and written language. It is therefore important to understand the relationship between the brain representations of spoken or written text. Here we show that although the representation of semantic information in the human brain is quite complex, the semantic representations evoked by listening versus reading are almost identical. These results suggest that the representation of language semantics is independent of the sensory modality through which the semantic information is received. (p. 2)
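To make that “direct comparison” a bit more concrete, here is a minimal, hypothetical sketch (in Python) of the general idea behind such an analysis: fit a semantic encoding model – word-embedding features predicting each voxel’s response – separately for the listening and the reading data, and then correlate the two fitted ‘semantic selectivity maps’ voxel by voxel. This is emphatically not the authors’ actual pipeline; the array shapes, the random placeholder data, and the simple ridge model are illustrative assumptions only (with random data the correlations will of course hover around zero, whereas Deniz et al. found them to be very high).

```python
# Illustrative sketch only (not the Deniz et al. pipeline): compare the
# semantic "tuning" of each voxel across listening and reading.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 500, 300, 1000  # assumed toy sizes

# Stimulus features: one semantic (word-embedding) vector per fMRI timepoint.
X_listen = rng.standard_normal((n_timepoints, n_features))
X_read = rng.standard_normal((n_timepoints, n_features))

# BOLD responses recorded while participants listened to / read the stories.
Y_listen = rng.standard_normal((n_timepoints, n_voxels))
Y_read = rng.standard_normal((n_timepoints, n_voxels))

def fit_semantic_map(X, Y, alpha=10.0):
    """Ridge encoding model; returns one semantic tuning profile per voxel,
    i.e. an array of regression weights with shape (n_voxels, n_features)."""
    return Ridge(alpha=alpha).fit(X, Y).coef_

W_listen = fit_semantic_map(X_listen, Y_listen)
W_read = fit_semantic_map(X_read, Y_read)

def rowwise_corr(A, B):
    """Pearson correlation between corresponding rows of A and B."""
    A = A - A.mean(axis=1, keepdims=True)
    B = B - B.mean(axis=1, keepdims=True)
    return (A * B).sum(axis=1) / (np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=1))

similarity = rowwise_corr(W_listen, W_read)  # one value per voxel, in [-1, 1]
print(f"Median cross-modal similarity: {np.median(similarity):.2f}")
```

In the real study, voxels whose semantic tuning was this similar across modalities were found throughout large parts of the cortex, which is what the quote above means when it says the representations are “almost identical”.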

Language is a complex process that involves many regions of the brain. Interestingly, we previously thought that the brain dealt with spoken and written information differently (when Mirjam studied psycholinguistics back in the day, the models used indeed implied this).

However, this appears NOT to be the case. First author Fatma Deniz said in an interview: “At a time when more people are absorbing information via audiobooks, podcasts and even audio texts, our study shows that, whether they’re listening to or reading the same materials, they are processing semantic information similarly.…We knew that a few brain regions were activated similarly when you hear a word and read the same word, but I was not expecting such strong similarities in the meaning representation across a large network of brain regions in both these sensory modalities.”

This research gives us even more evidence that we shouldn’t use redundant information in our lessons and/or presentations. If spoken and written words end up in the same semantic representation, presenting both at the same time adds nothing new; it only forces people to divide their attention between two streams. In other words, if we provide both the written and the spoken words at the same time, we force people to multitask, and we already concluded that we can’t.

Long story short, there’s sufficient evidence that combining functional images with a verbal explanation works way better than combining reading and talking, or even worse, reading a bulleted list.

And yes, the visual design of the example below can definitely be improved (think of a bear with a honey stick in its paw, smiling) but that’s not the point. Let’s all go ‘non-redundant’ to make sure we don’t become redundant 😊. Never ever read your slides again, because in that case, 1+1≠2!

[Image: ‘honey’ example slide]

References

Deniz, F., Nunez-Elizalde, A. O., Huth, A. G., & Gallant, J. L. (2019). The representation of semantic information across human cerebral cortex during listening versus reading is invariant to stimulus modality. Journal of Neuroscience, 39, 7722–7736. https://doi.org/10.1523/JNEUROSCI.0675-19.2019

Mayer, R. E. (2005). Principles for reducing extraneous processing in multimedia learning: Coherence, signalling, redundancy, spatial contiguity, and temporal contiguity principles. In R. E. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning (pp. 183–200). New York, NY: Cambridge University Press.

Paivio, A. (1969). Mental imagery in associative learning and memory. Psychological Review, 76, 241–263. http://dx.doi.org/10.1037/h0027272

Paivio, A. (1986). Mental representations: A dual-coding approach. New York, NY: Oxford University Press.

Sweller, J. (2005). The redundancy principle in multimedia learning. In R. E. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning (pp. 159–167). New York, NY: Cambridge University Press.

[1] He actually wrote: “The best laid schemes o’ Mice an’ Men / Gang aft agley, / An’ lea’e us nought but grief an’ pain, / For promis’d joy!”

11 thoughts on “In teaching, sometimes 1+1≠2! Here’s why.”

  1. Roger Brownlie says:

    Great post, thanks. One challenge of slide decks is they are used in different scenarios, typically: to present to a live audience; to share with an audience for them to consume on their own. There’s rarely time to make 2 versions of the slides, so inevitably text is added to the slides so they make sense on their own. One solution to this is to add speaking notes, in the speaking notes area.


    • 3starlearningexperiences says:

      Yes, I see that in my organization as well (creating one slide deck to use for both presentation and as an ‘independent resource’) but it’s really bad practice. Even if you move the text to the speaker notes, you usually still end up with a lack of signalling etc to guide the audience (different topic, not discussed in this blog, I know). It could be an option in some cases, I’d think? Effective design for presentation is just different from effective design for reading/consuming. I wish orgs would think better about what they’re trying to achieve because with the current practice the risk is that NEITHER audience will achieve the desired objective.


  2. MarioA says:

    Richard Mayer mentions that there are some important boundary conditions to the redundancy principle: it “may not apply when no graphics are presented (Moreno & Mayer, 2002), only a few key words are printed on the screen (Mayer & Johnson, 2008), or the onscreen text is worded differently than the spoken text (Yue, Bjork, & Bjork, 2013).”

    Have you had that experience regarding multimedia presentations?

    Thanks,

    Mario

