Evidence-Informed Pedagogy

Em. prof. dr. Paul A. Kirschner (a, b) & Tim Surma, MSc (a)

(a) ExpertiseCentrum voor Effectief Leren (ExCEL), Thomas More University of Applied Sciences, Mechelen, Belgium

(b) kirschnerED, Educational Advice and Training

This is the opening editorial that Tim Surma and I wrote for the Autumn 2020 edition of Impact on evidence-informed pedagogy.

Welcome to this edition of Impact on evidence-informed pedagogy. The reason that we have collated a publication with this theme is simple and straightforward: if we, as educational professionals, choose to inform the choices that we make for our practice with the best available evidence, we can make meaningful enhancements to our pedagogical practice, and thus to the efficiency, effectiveness, and success of our teaching and of children’s learning.

What is evidence-informed pedagogy?

Some educational policy makers, politicians, and teachers use the term evidence-based when they speak of instruction and teaching, while others (we, for example) use the term evidence-informed. Is there a difference and, if so, what is it? There is a, sometimes subtle, distinction between evidence-based and evidence-informed with respect to practice in education. Originating in medicine but now used across numerous professions such as economics, technology, and agriculture, an evidence-based practice is an approach that focuses practitioner attention on sound empirical evidence in professional decision making and action (Rousseau & Gunia, 2016). In medical research, for instance, research processes are more rigorous, better defined, and more controllable than in the educational sciences, which makes outcomes more distinct and reliable. As Neelen and Kirschner (2020, p. 3) state:

Sackett et al (1996) see it as a three legged stool integrating three basic principles: (1) the best available research evidence bearing on whether and why a treatment works, (2) clinical expertise of the health care professional (clinical judgment and experience) to rapidly identify each patient’s unique health state and diagnosis, their individual risks and benefits of potential interventions, and (3) client preferences and values.

Here everything is clear-cut. The target population is clearly defined with respect to age, weight, disease, and so forth, and the directions for use are unambiguous, for example that the medicine should be taken on an empty stomach, one hour before eating.

Evidence-informed practice is still based on empirical evidence, but acknowledges that in real classroom practice it is harder to determine what works for whom under which circumstances. What seems to work in one classroom does not always work in another. Five-year-olds differ from fifteen-year-olds both in their cognitive development and in their knowledge and expertise, a lesson on concepts and definitions differs from a lesson on applications, and, to a lesser extent, a lesson in chemistry differs from a lesson in drawing. Also, what works for one teacher might not work for another because teachers differ qualitatively; subtle and not-so-subtle differences between teachers mean that the way they carry out the same thing differs both in how it is carried out and in how it is perceived by their students. And what works in a lesson today won’t necessarily work in the same lesson this afternoon, tomorrow, or in three months. Just the fact that learners differ with respect to their prior knowledge, beliefs, needs, and/or motivations to participate can change everything. Unfortunately, this entropy (i.e., lack of order or predictability) of the classroom does not allow us to predict with statistical ‘certainty’ which intervention will yield which effect and when. Even in perfect circumstances with the best-prepared lessons, some of our students might still underperform, despite the evidence brought to us by eminent cognitive and educational psychologists. While ‘evidence-based’ provides fairly hard results, ‘evidence-informed’ is less hard, but still very useful, with a higher chance of success if applied thoughtfully. That is why in this issue we advocate a pedagogy informed by evidence, rather than a pedagogy based on (or dictated by?) evidence.
First, the challenge of going from the evidence to the design of actual pedagogical practices in the classroom calls for a deep understanding – let’s call it pedagogical knowledge – of what works, why, and when under optimal conditions, in order to have, for example, conversations with your fellow teachers and headmasters about certain pedagogical decisions or actions.

Second, the literature presents a variety of accounts of exactly what pedagogy is. Since pedagogy has both broad and narrow definitions, we have had to make a choice, and have chosen to follow Dylan Wiliam (2018) in using the broad definition. He cites Alexander (2008) in stating that pedagogy is “what one needs to know, and the skills one needs to command, in order to make and justify the many different kinds of decision of which teaching is constituted” (p. 47). It can be seen as the act and discourse of teaching (Alexander, 2004). Pedagogy therefore includes instruction but is broader, also embracing the interplay between the factors that influence teaching and learning. Both evidence-informed practice and pedagogy assume that the educational professional knows what the best options might be for optimal teaching and learning under given circumstances (knowing your repertoire as a teacher).

What do we know about teacher repertoires? We have already learned a lot about classroom practices from the abundance of quality research conducted in laboratories and schools, online and offline, and virtually anywhere and whenever teaching and learning take place. The evidence is out there. Over the past few decades, researchers have designed interventions and devised general techniques – often based on a two-way interaction between researchers and practitioners – that work or do not work for particular learners of particular ages undertaking particular academic tasks in particular subject areas (see, for example, Roediger & Pyc, 2012). A number of fundamental techniques from cognitive and educational research were derived from this substantial base of empirical research, and some of these are gaining attention because they are sufficiently general to be applied across a range of academic subject areas and readily implemented in classrooms for all ages (for an overview of learning strategies, see Dunlosky, Rawson, Marsh, Nathan, & Willingham, 2013). Several examples of effective techniques are elaborated in this issue, as these general approaches may need domain-specific adjustments to maximise their promise as learning tools for particular domains – which in itself is a shining example of evidence-informed practice. Retrieval practice, the act of engaging in active recall of already-learned information (see Roediger & Karpicke, 2006), is adapted to the perspective of CPD (Beauchamp, this issue); Clare Badger translates cognitive load theory (Sweller, 1988) into practical guidelines for chemistry courses; the provision and function of individual feedback (Hattie & Timperley, 2007) is tackled by Caroline Locke; and popular learning myths (Kirschner & Van Merriënboer, 2013) are challenged by both Jonathan Firth and Jennifer Zyke as well as by Lewis Baker.

Some other evidence-informed principles are much less domain-general, which is why they were once called theories of subject matter by Richard Mayer (2004), and some of them are still regarded as a “unique and monumental contribution of educational psychology to the science of learning” (Mayer, 2018, p. 175). Striking examples are the theories of how people learn to read (i.e., explicit phonics instruction), how they learn a second language, and so forth. This issue also pays attention to the subject-specific uniqueness of teaching, with a focus on a selection of subjects that are less often in the spotlight of educational research, such as the arts (by Gatward) and physics (by Astolfi).

Although we may now sound boundlessly self-confident about evidence informing education, we must of course temper our enthusiasm to some extent: we obviously do not know the answer to every question, for the simple reason that education does not take place in isolation. The evidence is out there – but it is neither definitive nor complete. As an example, the concept of affect (students’ experience of emotion) is gaining growing recognition as an essential component of teaching and learning, but still holds many secrets for both researchers and seasoned teachers (Mayer, 2018). Therefore, some consideration is given in this issue to educational outcomes beyond the retention of basic declarative and procedural knowledge. Several articles explore pedagogies such as playful learning in the early years (by Sarah Seleznyov), reading for pleasure (by Alice Reedy) and the crossroads between coaching and direct instruction (by Ed Cope and Chris Cushion). Given the broad range of content areas represented in this issue, our readers should not be surprised that the educational outcomes discussed here differ greatly.

The UK occupies a leading position worldwide in educational undertakings that support the implementation of evidence-informed education; think of influential research centres such as the Education Endowment Foundation, and professional learning communities such as the Chartered College of Teaching and the ever-growing researchED community. A number of articles in this issue zoom in on this fascinating but complex interplay between research and practice. Andrew Davis elaborates on classroom research, Richard Churches and colleagues shine a light on teacher-led randomised controlled trials, and Lorne Stefanini and Jenny Griffiths address some of the challenges of implementing an evidence-informed approach to education.

This issue might be what David Daniel (2012) described as a targeted investment in translational research: with this issue, the Chartered College of Teaching supports the development of pedagogical approaches with the goal of understanding how, when, and under what constraints to apply best-evidence strategies in relevant educational contexts. Readers will find a multiplicity of approaches in the current issue, all aimed at revealing how to inform your pedagogy with the best available evidence. We hope that this issue helps you to make more, and better, evidence-informed decisions.

Enjoy, learn, and use the content to reflect upon and improve both your teaching and your students’ learning!

References

Alexander, R. (2004). Still no pedagogy? Principle, pragmatism and compliance in primary education. Cambridge Journal of Education, 34, 7–33.

Alexander, R. (2008). Essays on pedagogy. London: Routledge.

Black, P., & Wiliam, D. (2018). Classroom assessment and pedagogy. Assessment in Education: Principles, Policy & Practice, 25, 551–575.

Daniel, D. B. (2012). Promising principles: Translating the science of learning to educational practice. Journal of Applied Research in Memory and Cognition, 1, 251–253.

Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14, 4–58.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77, 81–112.

Kirschner, P. A., & van Merriënboer, J. J. (2013). Do learners really know best? Urban legends in education. Educational Psychologist, 48, 169–183.

Mayer, R. E. (2004). Teaching of subject matter. In S. T. Fiske (Ed.), Annual Review of Psychology (Vol. 55, pp. 715–744). Palo Alto, CA: Annual Reviews.

Mayer, R. E. (2018). Educational psychology’s past and future contributions to the science of learning, science of instruction, and science of assessment. Journal of Educational Psychology, 110, 174–179.

Neelen, M. & Kirschner, P. A. (2020). Evidence-informed learning design: Creating training to improve performance. London, UK: Kogan Page.

Roediger, H. L., & Karpicke, J. D. (2006). The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1, 181–210.

Roediger, H. L., & Pyc, M. A. (2012). Inexpensive techniques to improve education: Applying cognitive psychology to enhance educational practice. Journal of Applied Research in Memory and Cognition, 1, 242–248.

Rousseau, D. M., & Gunia, B. C. (2016). Evidence-based practice: The psychology of EBP implementation. Annual Review of Psychology, 67, 667–692.

Sackett, D. L., Rosenberg, W. M., Gray, J. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn’t. Clinical Orthopaedics and Related Research, 455, 3–5.

Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12, 275–285.
