Mirjam Neelen & Paul A. Kirschner
As Richard Clark points out in his most recent video interview with Guy Wallace, we, as learning designers, are notoriously weak when it comes to the analysis phase of the learning design process. Specifically, he’s referring to the fact that we don’t use cognitive task analysis (CTA) as often as we should. It’s an overlooked and, for some, unknown tool. CTA uses various strategies to capture both the explicit and implicit knowledge that experts use to carry out complex tasks. The aim is to harvest information about the knowledge, thought processes, and goal structures that are needed to carry out a task and that underlie observable task performance (Chipman, Schraagen, & Shalin, 2000).
There are many possible reasons why we’re not using CTA as much as we should and we could fill multiple blogs with the overall state of our profession (oh, actually, we’ve already done that a bit here, here, and here), but the fact of the matter is that we should use it. Here’s why.
“Research evidence indicates that the accurate identification of experts’ cognitive processes can be adapted into training materials that are substantially more effective than those developed through other means” (Clark et al., 2008, p. 8). They go on to say (it’s a bit like stating the obvious, but it’s good to remind ourselves) that when content is inaccurate or incomplete, all instruction based on that content will be flawed.
As learning professionals, it’s our job to help close the gap between actual and desired performance; that is, to figure out the performance discrepancy. Because performance discrepancies can have many causes, we first need to determine which ones actually require training/learning (e.g., Wallace, 2019; see the EPPI Fishbone Diagram below – Awareness, Knowledge, and Skills indicate a training need).

Note: When we say ‘training’, we mean training as defined by Salas and colleagues (2012), where the goal is a sustainable change in behaviour and cognition, involving “… planned and systematic activities … to learn targeted knowledge, skills, and attitudes through instruction, demonstration, practice, and timely diagnostic feedback on their performance” (p. 77).
Let’s say we have a performance problem on our hands (e.g., too many ladder accidents resulting in employee injuries – an example inspired by Shank, 2017) and we know that the solution is ‘training’ because people don’t know the differences between ladder types and when to use which one. They also don’t know how to spot potential hazards or what to do to prevent or ‘fix’ them. These are, of course, all things they need to know and do to do their jobs well. First, learners need to know what safe ladder use looks like. Then, they need to learn how ‘expert ladder users’ know which ladder to use in what context, how to inspect the ladder to make sure that it’s safe, how to spot hazardous situations, and how to make those situations safe again. As learning designers, we need to dig into the expert’s brain to determine how they do what they do with a ladder, safely.
This is where CTA comes in. It helps us understand the environmental and contextual cues that experts and/or master performers[1] use to make certain decisions, and what conceptual and procedural knowledge they need to possess to be able to make those decisions.
Although we usually work with ‘subject matter experts’ (SMEs), we often let them lead the way without guiding or questioning them. Clark points out that this is way too risky because experts make around 70% (!) of their decisions unconsciously. This is of course great for them and the work they’re doing (think about how many lives would be lost if expert surgeons – or even expert drivers! – had to consciously think about everything they do), but it’s a disaster for learners, because they need to learn what decisions to make at which point and why!
In fact, we often fall into another trap as well. Jeroen van Merriënboer recently reminded Mirjam in an interview (not publicly available) of this classic mistake (he calls it a ‘meta myth’), namely that we mix up goals and methods. Examples of such meta myths are that if the goal is that learners need to learn how to solve problems, then the method we choose is ‘solving problems’, or if the goal is that learners are able to regulate their own learning, then the method is ‘regulating their own learning’, and so forth. Paul spoke of this back in 1991 in his PhD thesis (and published it in 2009), where he stated that there’s a problem with confusing what is known as the epistemology of the expert (what the expert does when trying to unearth new knowledge; think about what a physicist does when doing physics research) with the pedagogy of the learner (what the learner needs to do when learning in the subject domain; think about a physics student learning physics). As long as we’re trapped in this faulty way of thinking, we might think that we don’t need analysis at all.
Think again, because this is a fundamentally flawed way to approach learning and instruction. Methods (or means) are NOT the same as goals (or ends). If, for example, we find it important that people learn how to use ladders safely in multiple contexts, we must explicitly teach them what to look for, show them examples of safe ladder use in various contexts (explaining the decisions and the reasoning behind them step by step), scaffold their practice so they can achieve success safely, and so forth. We can only provide effective, efficient, and enjoyable training if we understand:
- the sequence of tasks,
- the range of tasks from simple to complex,
- the possible variety within one ‘task class’ (e.g., simple, intermediate, complex),
- how the expert/master performer decides what to do (this is really difficult to capture, because often this is automated/tacit knowledge), and
- the mistakes that can be expected.
This is where CTA comes in[2].
The CTA process
When done well, the outcome of a CTA is a description of the actual performance objectives, the conceptual and procedural knowledge needed for the performance, the performance standards, and the tools experts use as they perform the task (Clark et al., 2008).
There are many different CTA methods. Crandall, Klein, and Hoffman (2006) distinguish four main categories (interviews, observation, textual, and psychometric) and, within those categories, around 75 (!) different methods. However, the most dominant CTA methods usually share five[3] common steps, carried out in the following sequence (Clark et al., 2008):
| PHASE | OUTPUT EXAMPLES |
| --- | --- |
| 1. Collect preliminary knowledge – use document analysis, observation, and interviews to determine the sequence of tasks, familiarise yourself with the knowledge domain, and identify experts/master performers[4] | Knowledge types and structures for the task to be performed |
| 2. Identify knowledge representations – identify sub-tasks, including the types of knowledge required to perform them | Learning hierarchy / skills hierarchy (see the image below for an example); the idea is that you can only learn rules once you’ve mastered the prerequisite concepts |
| 3. Apply focused knowledge elicitation methods – capture the conditions and cognitive processes necessary for complex problem solving (help experts articulate the knowledge identified in the previous phase) for a sequence of authentic tasks, ranging from routine to highly complex | Transcript with: the steps necessary to complete the task; the decisions that have to be made (including criteria); concepts, processes, and principles; the conditions under which to start the correct procedure; and so on |
| 4. Analyse and verify the data acquired – experts verify, refine, and revise their peer experts’ transcripts for accuracy and efficiency | Final ‘approved’ document that provides the information for the instructional design process |
| 5. Format results for the intended application – results must be ‘translated’ into models | Models that reveal underlying skills, mental models, and problem-solving strategies; case studies |
Learning Hierarchy Example (from Demirel et al., 2016):

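Purely as an illustration, here’s a minimal sketch in Python of how the Phase 3 output for the ladder example might be captured in a structured form. The structure and field names are ours, not a format prescribed by Clark et al. (2008); a real CTA transcript will be richer than this.

```python
# A minimal, purely illustrative sketch (our own structure, not a prescribed CTA
# format) of how knowledge elicited in Phase 3 might be recorded for the ladder example.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Decision:
    question: str                 # the decision the performer must make
    criteria: List[str]           # cues/criteria experts report using

@dataclass
class Step:
    action: str                   # observable action in the procedure
    decisions: List[Decision] = field(default_factory=list)
    common_errors: List[str] = field(default_factory=list)

@dataclass
class TaskRecord:
    task: str
    start_conditions: List[str]   # conditions under which to start the procedure
    steps: List[Step]

# Hypothetical content for the ladder example, invented for illustration only.
ladder_inspection = TaskRecord(
    task="Select and inspect a ladder before use",
    start_conditions=["work at height required", "no fixed access available"],
    steps=[
        Step(
            action="Choose the ladder type for the job",
            decisions=[Decision(
                question="Step ladder or extension ladder?",
                criteria=["working height", "surface stability", "nearby power lines"],
            )],
            common_errors=["picking whichever ladder is closest"],
        ),
        Step(
            action="Inspect rungs, feet, and locks",
            common_errors=["skipping the inspection under time pressure"],
        ),
    ],
)

print(ladder_inspection.task, "-", len(ladder_inspection.steps), "steps captured")
```

The point of such a structure is simply that every step, decision, criterion, and likely error the expert surfaces ends up somewhere explicit, so nothing stays locked in the expert’s head.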
The Reality
We can hear you think: “Right, who has time for this?” Here’s a twofold reality check. First: you’re right, it takes time. Dick Clark, who has done LOTS of these, shares in the video that the time investment depends on the complexity of the task and can range from around 20 hours for the expert and 60 hours for the analyst up to 130 hours for the expert and 280 hours for the analyst. But of course, the time investment itself isn’t the point. The question is whether the benefits outweigh the costs. Yes, doing a proper CTA adds around 80% to the initial analysis time[5], but – second reality check – if you DON’T do it, there’s a 70% performance deficit after the training.
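To make that cost-benefit question concrete, here’s a rough back-of-the-envelope sketch. Only the hour figures come from Clark’s range above; the hourly rates, the number of incidents avoided, and the cost per incident are hypothetical numbers we’ve invented purely for illustration – plug in your own.

```python
# Back-of-the-envelope cost-benefit sketch. Only the hour figures come from the
# range Clark mentions (20-130 expert hours, 60-280 analyst hours); every rate
# and cost below is a hypothetical assumption for illustration.
expert_hours, analyst_hours = 60, 130      # a mid-range CTA effort
expert_rate, analyst_rate = 120, 80        # assumed hourly cost (illustrative)

cta_cost = expert_hours * expert_rate + analyst_hours * analyst_rate

# Hypothetical benefit side: fewer ladder incidents once the training actually
# teaches the expert decision-making that would otherwise be missed.
incidents_avoided_per_year = 4             # assumption
cost_per_incident = 15_000                 # assumption (injury, downtime, claims)

annual_benefit = incidents_avoided_per_year * cost_per_incident
print(f"CTA cost: {cta_cost:,} | Estimated annual benefit: {annual_benefit:,}")
```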
What it comes down to is this: only go for training if you really need it, because designing effective, efficient, and enjoyable training requires a substantial investment of time and money. If performance matters and you need training, then you need to do it right. And to do it right, you need to do a CTA. It’s time we make this gem shine.
References
Chipman, S. F., Schraagen, J. M., & Shalin, V. L. (2000). Introduction to cognitive task analysis. In J. M. Schraagen, S. F. Chipman, & V. L. Shalin (Eds.), Cognitive task analysis (pp. 3-23). Mahwah, NJ: Lawrence Erlbaum Associates.
Clark, R. E., Feldon, D., van Merriënboer, J. J. G., Yates, K., & Early, S. (2008). Cognitive task analysis. In J. M. Spector, M. D. Merrill, J. J. G. van Merriënboer, & M. P. Driscoll (Eds.), Handbook of research on educational communications and technology (3rd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
Crandall, B., Klein, G., & Hoffman, R. R. (2006). Working minds: A practitioner’s guide to cognitive task analysis. Cambridge, MA: MIT Press.
Demirel, D., Butler, K. L., Halic, T., Sankaranarayanan, G., Spindler, D., Cao, C., … & deMoya, M. A. (2016). A hierarchical task analysis of cricothyroidotomy procedure for a virtual airway skills trainer simulator. The American Journal of Surgery, 212(3), 475-484.
Kirschner, P. A. (2009). Epistemology or pedagogy, that is the question. In S. Tobias & T. M. Duffy (Eds.), Constructivist instruction: Success or failure? (pp. 144-157). New York, NY: Routledge.
Shank, P. (2017). Practice and feedback for deeper learning: 26 evidence-based and easy-to-apply tactics that promote deeper learning and application (Deep Learning Series Book 2). CreateSpace Independent Publishing Platform.
Wallace, G. (2019, February 25). The EPPI fishbone framework. Pursuing Performance. https://eppic.biz/2019/02/25/td-pi-the-eppi-fishbone-framework/
Wallace, G. (2020, July 1). HPT video: Richard E. Clark, Ed.D. Pursuing Performance. https://eppic.biz/2020/07/01/hpt-video-richard-e-clark-ed-d/
[1] The distinction is that experts might or might not PERFORM the job (they often don’t!), while a master performer might or might not be an expert but is actually DOING THE JOB we need to look into.
[2] Sometimes CTA isn’t necessary, for example when well-designed instructional materials are already available and the training is limited to re-organising those materials. But if instructional materials need to be designed and developed from scratch, it’s important to perform a CTA to capture the cognitive strategies that proficient task performers use to solve problems in the domain (so-called Systematic Approaches to Problem solving, or SAPs).
[3] In the HPT interview from June 2020, Clark mentions a sixth step, which covers the equipment experts use, the standards that are in place, and concrete examples.
[4] Another note on experts/master performers: Of course, the specific criteria for selecting experts vary depending on the circumstances, but you need to make sure that they have a solid record of consistently achieving success (objective evidence if possible!) in completing the tasks that need to be analysed.
[5] We’d say that the analysis doesn’t stop there, as long as you continue to collaborate closely with the experts. In our experience, the ‘analysis’ is ongoing when you move into the design phase and create the instructional materials, because the experts will keep recognising gaps, or things they actually do quite differently than they initially said, and so forth.