The Power of META: Make Education Terrific Again

RETRIEVAL PRACTICE AND FEEDBACK FOR READING COMPREHENSION


Paul A. Kirschner & Mirjam Neelen

Tips from two overview studies: Learners learn better when using practice tests, and they remember a text better if you hold off giving feedback until they’re done reading

In the midst of these extraordinary times, we discuss two relevant overview studies, each of which contributes in its own way to better teaching and learning. The first is a literature study by Sally Binks on the effects of practice tests (tests used to promote learning, also called retrieval practice) and on how testing affects learning (also see our recent blog ‘Terrified of Testing? Tackle It Through Testing!’ as well as an older one, ‘And the Winner Is… Testing!’).

Just to be clear: You can use testing for three purposes, namely to:

  1. determine what a learner has learned and grade the result (summative assessment),
  2. follow how a learner’s learning process progresses and how the instruction aligns with that (formative assessment), or
  3. promote the learning of the learner (testing/retrieval as a learning strategy: retrieval practice or practice testing).

In the case of retrieval practice, you design for what are known as ‘desirable difficulties’ (Bjork & Bjork, 2011). Robert Bjork describes it as follows:

Instructors and students alike are susceptible to assuming that conditions of instruction that enhance performance during instruction are the same conditions that enhance long-term learning. That assumption, however, is sometimes dramatically wrong: Manipulations that speed the rate of acquisition during instruction can fail to support long-term retention and transfer, whereas other manipulations that appear to introduce difficulties and slow the rate of acquisition can enhance post-instruction recall and transfer. Such manipulations, labeled desirable difficulties… include spacing rather than massing study opportunities; interleaving rather than blocking practice on separate topics; varying how to-be-learned material is presented; providing intermittent, rather than continuous, feedback; and using tests, rather than presentations, as learning events.

The gist of desirable difficulties is that although it might seem that you make learning more difficult, in the long run you actually make it easier for learners to remember stuff. With practice testing, you ask them to actively retrieve what they’ve learned (read, heard, seen,…) from their memory (again, also see our blog on retrieval practice here).

On the basis of her literature study, Binks concludes that retrieval practice positively influences learning, both directly and indirectly. It ensures that learners remember what they’ve learned better and for longer than if, for example, they repeatedly study the materials (even if they spend more time on that!). It also leads to better storage of new information (encoding), and learners become more committed: with regular practice testing, they’re more inclined to study. By the way, this is no surprise. After all, success (as in: achieving goals) leads to motivation. Last, the feedback you provide – which is, of course, always part of retrieval practice as well – also helps learners make better choices about which topics or materials they need to pay more attention to (metacognition). After all, if they’ve found out through practice and feedback what they’ve already ‘nailed’ and what they might have misunderstood or find difficult, it’s easier for them to determine where to focus their attention while studying.

Finally, through practice testing, learners seem to be better able to apply what they’ve learned in other areas (transfer). Based on her literature study, Binks offers the following tips:

  • Use ‘production tests’ (where learners actually need to produce a response or artefact) instead of recognition tests (e.g., multiple choice).
  • Start with retention questions and work towards application questions.
  • Test often but with sufficient time in between.
  • Include explanations as to why an answer is correct or incorrect (directive feedback – also see our blog ‘No Feedback, No Learning’).

Her ‘last but not least’ tip: Let learners experience the benefits of practice tests themselves. Let them see that their test scores after a series of practice tests are better than when they ‘only’ reread the required materials, for example.

The second study, by Swart and colleagues, is about giving feedback after ‘testing’ (and hence is related to Binks’ study). There are many studies on this topic, and they all agree on the importance of feedback. However, when you try to give feedback effectively as an instructor (or in whatever role you provide feedback), it’s hard to see the forest for the trees. After all, there are different types of feedback, you can use varying degrees of elaborateness, and you can give feedback at different times. The questions this study tried to answer were: Which types of feedback support learning and which possibly harm it? And when in the learning process is it most effective to give feedback? Those are the questions to answer if you want to truly support learners.

Elise Swart and her colleagues from Leiden University in the Netherlands studied the effects of the content and timing of feedback on reading comprehension. They analysed 104 (partial) studies with 6,124 participants and found that learners understand and remember a text better when they receive feedback. In addition, the feedback works better if the person providing the feedback not only tells the learner whether the answer is correct or incorrect (corrective feedback), but also explains to them why the answer is correct or incorrect and offers additional instruction where applicable. Swart and her colleagues also conclude that you should never give the feedback while the learner is still reading, but instead should always provide it immediately after.

This may sound strange: we ‘know’ that we need to give feedback during a task if possible, in order to correct misunderstandings immediately (for novice learners anyway). But Swart et al.’s research shows that, when it comes to reading comprehension, it’s best to have a little patience. This can be explained by cognitive load theory. Feedback during reading forces readers to divide their attention between building a coherent mental model of the text, processing the task, and processing the feedback. In other words, it requires the reader to multitask (which the reader can’t! Actually, NO ONE can. See, for example, this blog by David Didau). If you give feedback (immediately) after reading, you give learners the opportunity to fully use their limited working memory capacity to create a mental model of the text.

‘En passant’, the second study also shows what the meta-analysis might stand for: Making Education Terrific Again. 😊

References:

Binks, S. (2018). Testing enhances learning: A review of the literature. Journal of Professional Nursing, 34(3), 205–210.

Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. In M. A. Gernsbacher, R. W. Pew, & J. R. Pomerantz (Eds.), Psychology and the Real World: Essays Illustrating Fundamental Contributions to Society (pp. 59–68). New York, NY: Worth Publishers.

Swart, E. K., Nielen, T. M. J., & Sikkema-de Jong, M. T. (2019). Supporting learning from text: A meta-analysis on the timing and content of effective feedback. Educational Research Review, 28.
