Cognitive Load Theory Unsuitable for Accelerated Expertise?

A response to “Accelerated Expertise: Training for High Proficiency in a Complex World”

Mirjam Neelen & Paul A. Kirschner

This blog started as Mirjam’s review of the book “Accelerated Expertise”[1] by Robert Hoffman and colleagues and evolved into a response from both of us to the authors’ claim that cognitive load theory (CLT) is unsuitable for the concept of accelerated expertise. We’ll start by explaining what the authors say accelerated expertise is. Then we’ll discuss some of the learning strategies they recommend to accelerate expertise development in organisations, and then we’ll move on to the statements that the authors make with regard to CLT and where we think they miss the point.

What is accelerated expertise?

While the authors use the terms accelerated learning/proficiency/expertise, the book is basically about accelerating learning in order to first achieve high proficiency and then expertise.

Accelerated expertise development can be tackled from different angles, such as from the perspective of rapidised training – training people to achieve some minimal level of proficiency at a faster than usual rate – or accelerated proficiency – getting individuals to achieve high levels of proficiency at a rate faster than ordinary.

Fast-forward to expertise?

The authors point out that there’s a pressing societal need for accelerated expertise, for example because organisations face the upcoming retirement of senior experts or because organisations have become more sociotechnical[3]. In these complex work systems, “expertise is a must because decisions are high stake and high risk” (p. 67). Workers must therefore be trained faster to be able to deal with this complexity.

But… is it possible to become an expert quickly? The short answer is NO. It’s incredibly hard to achieve high proficiency, let alone become an expert. So, where to start? The authors recommend a ton of proven, evidence-informed strategies that focus on practice, feedback, transfer, and retention. These, however, are by no means specific to accelerated learning. They are traditional learning approaches that have proven to be effective. The authors also point out that we need to keep searching for better, more innovative learning methods that also take the pace of change in technology and goals into account in order to be able to accelerate proficiency. A note here: maybe they should read Ten Steps to Complex Learning, which is a programme for achieving exactly what they want!

The authors emphasise that the goal of accelerated proficiency should always be to make learning more effective, not easier (interestingly, they don’t mention more efficient, though their rapidised training is actually aimed at training workers faster). That makes sense and even sounds a bit obvious (who learns from easy stuff?). No matter the level of the learner, there must be problems that represent desirable difficulty[4]. An expert in particular needs to be able to handle tough and rare cases. Generating a corpus of such tough and rare cases for training purposes can be scaled to stretch skills, but such cases can also be used in unlearning experiences or to reveal knowledge shields[5]. Time or experience compression refers to packing more varieties of experiences into a training, which sounds similar to variable practice or interleaving. For example, experiences that might happen rarely in real life can be included, even though proficiency in a domain might require exactly those rare experiences. In this light, it’s also critical to address the complexity and provide sufficient support at the same time (again, this sounds like the design strategy presented in Ten Steps to Complex Learning). Another way of compressing time is to truncate events that might be of less importance; they can be shortened compared to their duration in real life. Of course, simulations are an option for this in certain contexts.

According to Hoffman and colleagues, problem- and scenario-based learning as well as engagement simulations[6] are the way to go in the context of jobs that are gaining in complexity and unpredictability. This is where I (Mirjam) started to get totally distracted when reading the book, because the authors dismiss CLT altogether as an unsuitable theory for accelerated expertise. The debate between proponents and opponents of PBL is as old as the hills (for example, see our blog on inquiry-based learning, and also see Kirschner et al. (2006) and Hmelo-Silver et al. (2007)), but the claims that Hoffman and colleagues make in the context of accelerated expertise were new to me and made me raise my eyebrows and frown alternatingly.

CLT in the context of accelerated expertise: unsuitable or actually quite effective?

The authors in particular dismiss decreasing extraneous cognitive load as an effective design approach for accelerated expertise development. They argue that learners need to be prepared to deal with the distractions that typify complex cognitive work systems. They also state that learning under ‘easy conditions’ (their words) might lead to faster learning in the short term, but that “introducing desirable sources of difficulty has been shown to … promote retention and transfer” (p. 85) in the long term (this is called ‘increasing complexity of task classes’ in Ten Steps to Complex Learning).

First of all, nowhere in CLT does anyone state that learning for novices should happen under ‘easy conditions’. Managing extraneous cognitive load means that you design learning experiences so that learners can focus on the task at hand (e.g., schema construction or automation); it doesn’t mean ‘make things as easy as possible’. This is a common mistake made by people who have heard of CLT but really don’t understand it! In Dutch we have a saying – roughly translated – they have heard the bell ring, but don’t know where the clapper is! The goal is to keep the load within the limits of what humans can process, NOT to reduce load. Effortful learning is a necessity, also for CLT, if deep learning is what is being aimed at.

Also, of course, learners need to be able to deal with distractions, but you can only do so if you understand where your focus should be. In other words, throwing the learner into the deep end will be both ineffective and inefficient and will probably also lead to frustration and demotivation; four things that no learning programme should aspire to achieve. Having to deal with all the “real world” complexities requires the learner to be able to sort relevant from irrelevant stimuli, or what Rich Mayer refers to as ‘seductive details’ (also see the medical expertise research conducted in Van Merriënboer’s research group in Maastricht). Otherwise, you might not even be able to distinguish a distraction from something that needs your attention!

The authors also state that worked examples and guidance fading (support and guidance (scaffolding) fade as proficiency increases) do “not accord with what is known about expertise: the achievement of high proficiency entails increasing the task difficulty [this is correct and aligns with working from less to more complex task classes], not the goal of decreasing it, and providing mentoring support (guidance), not withdrawing it” (p. 85).

It’s true that the worked example effect (WEE; which, by the way, is only one example of an approach suggested by CLT) has mainly been studied in well-structured tasks and not so much in ill-structured tasks (which take longer to master). However, there is some initial research that would also be applicable to the context of accelerated expertise. For example, Nievelstein et al. (2013) found a WEE for advanced learners in less structured tasks (reasoning about legal cases)[7], so we don’t think it’s fair to dismiss worked examples in the context of accelerated expertise altogether.

Also, we’re not sure why the authors think that guidance fading in CLT is simply “removing scaffolding”. They themselves state that the most effective guidance changes its nature and focus and is not simply a matter of removing the scaffolding, but they don’t acknowledge the various types of guidance that CLT suggests, for example process worksheets, guiding questions, prompting, performance constraints, et cetera (again, see Ten Steps to Complex Learning).

In general, the expertise reversal effect refers to the notion that the most effective learning methods for novice learners are often ineffective for comparatively more expert learners. We repeat: different learning methods are effective for novices than for experts, and vice versa. This means, for example, that for novices you might use a uniform series of conventional tasks, while for more expert learners you use varying series of tasks with guidance (which you then might gradually decrease) (Van Merrienboer & Sweller, 2010). The key is not just decreasing the support and guidance; it’s about designing the tasks differently for novice learners than for expert learners and adapting the support and guidance accordingly. CLT actually seems to totally agree with Hoffman et al. (increase task complexity as proficiency increases and change the focus and nature of the guidance as required). In other words, Hoffman et al. have heard the bell chime but really haven’t found the clapper.


It’s honestly surprising that the authors, who have done a thorough job writing a strong, informative book based on (as far as I, Mirjam, can judge) good research, paint such an incorrect and simplified picture of CLT. Despite their strong examples and case studies, it seems that they don’t really understand what CLT is, because there’s actually huge overlap between CLT and what they’re promoting as effective approaches to accelerated expertise development. We’re really sorry, but we feel the authors set us up for this one… Perhaps they need a strategy for accelerated proficiency in CLT!?


Blayney, P., Kalyuga, S., & Sweller, J. (2015). The impact of complexity on the expertise reversal effect: Experimental evidence from testing accounting students. Educational Psychology: An International Journal of Experimental Educational Psychology, 1-17. DOI: 10.1080/01443410.2015.1051949

Hmelo-Silver, C.E., Duncan, R.G., & Chinn, C.A. (2007). Scaffolding and achievement in problem-based and inquiry learning: A response to Kirschner, Sweller, and Clark (2006). Educational Psychologist, 42, 99-107.

Hoffman, R.R., Ward, P., Feltovich, P.J., DiBello, L., Fiore, S.M., & Andrews, D.H., (2014). Accelerated Expertise: Training for High Proficiency in a Complex World. New York: Psychology Press

Kirschner, P.A., Sweller, J., & Clark, R.E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41, 75-86.

Nievelstein, F., Van Gog, T., Van Dijck, G., & Boshuizen, H.P.A. (2013). The worked example and expertise reversal effect in less structured tasks: Learning to reason about legal cases. Contemporary Educational Psychology, 38, 118-125.

Van Merrienboer, J.J.G., & Kirschner, P.A. (2012). Ten steps to complex learning: A systematic approach to four-component instructional design. Oxford: Routledge.

Van Merrienboer, J.J.G., & Sweller, J. (2010). Cognitive load theory in health professional education: Design principles and strategies. Medical Education, 44, 85-93.

[1] My friend Patti Shank was so kind as to send me, Mirjam, the book. We’re on a collaborative quest to figure out how to support expertise development in the workplace.


[3] Sociotechnical means that “the work involves collaborative mixes of multiple people and multiple machines” (p. 2) and, as a consequence, jobs become more cognition-intensive.

[4] A desirable difficulty is a learning task that requires a considerable but desirable amount of effort, thereby improving long-term performance.

[5] This is the phenomenon that, when learners are confronted with evidence contrary to their knowledge or views, they manoeuvre mentally to rationalise their faulty beliefs and refrain from fundamentally changing their views (p. 30).

[6] This refers to simulations in an immersive and distributed context of distance learning and collaboration using the internet, as well as the immersive context of large-scale, live, interactive simulated environments.

[7] Nievelstein et al. explain that a limitation of their study is that they didn’t conduct a transfer test, so only the shorter-term instructional effect has been tested.