Mirjam Neelen & Paul A. Kirschner
A while back, we discussed why some things are easy to learn and some are difficult by looking at learning through an evolutionary lens (see blog here), based on David Geary’s work. In this blog, we’ll look at some of the same questions through a different lens: a conceptual change lens. We explore why some concepts are more difficult to learn than others and discuss how we can support learners when they learn new concepts.
Here we focus on understanding and tackling misconceptions (also referred to as naïve conceptions) when learning new concepts (Chi, Slotta, & De Leeuw, 1994). Misconceptions form when prior knowledge conflicts with to-be-learned concepts, or, as Stella Vosniadou (1994) writes, as the “individuals’ attempts to assimilate new information into existing conceptual structures that contain information contradictory to the scientific view” (p. 45). A typical example of this is the misconception held by many people – even high school physics students (Pablico, 2010) – that there’s a propelling force at work on a ball that you’ve thrown into the air, while in reality the only forces at work are gravity and friction (air resistance), which act in the opposite direction. Or the misconception that a heavy object, like a hammer, falls more quickly in a vacuum than a light object, like a feather.
There are also interesting examples of children’s misconceptions of the shape of the earth. The figure below shows a fascinating overview of children’s mental models of the shape of the earth (Vosniadou & Brewer, 1992). Mental models are, basically, representations that we can mentally manipulate to explain phenomena to ourselves and to make predictions; they capture how we see and understand things.
(From Vosniadou, 1994, interview with Venica, 3rd grade, who has said the earth is round but that it has an end/edge.)
I = Interviewer, V = Venica
I: Can people fall off the end or edge of the earth?
V: No.
I: Why wouldn’t they fall off?
V: Because they’re inside the earth
I: What do you mean inside?
V: They don’t fall, they have sidewalks, things down like on the bottom.
I: Is the earth round like a ball or round like a pancake?
V: Round like a ball.
I: When you say that they live inside the earth, do you mean they live inside the ball?
V: Inside the ball. In the middle of it.
Patti Shank explains in her book ‘Practice and Feedback for Deeper Learning’ (2017) why it’s critical to spot the misconceptions we hold when we learn new things. It’s completely normal to have misconceptions when learning new concepts. Sometimes we see or experience something and then draw conclusions from those observations without testing them, or without the prerequisite knowledge needed to truly understand them. A good 21st-century example was Prensky’s (mis)conception of ‘digital natives’ who supposedly think differently from preceding generations, and the related (mis)conception that they can multitask.
We also expect different people to have different types of misconceptions, as we all have different prior knowledge to which we try to connect the new concept(s).
But how do we, as teachers, instructional designers, learning experience designers, or other learning professionals, figure out which misconceptions to expect when we begin designing? Our first reflex is to ask the experts. Shank, however, warns that we need to look beyond the experts, as they often forget typical misconceptions due to what’s known as the curse of expertise. She suggests, in addition to asking experts, also asking both high performers and novices. This doesn’t mean that we should ask them “What are your misconceptions?” (although high performers might be able to recall some of the misconceptions they had as novices). It means that we can ask them to explain certain concepts. Then, we can go back to the experts and check whether the explanations contain misconceptions and, if so, what they are. However, even this isn’t enough. Ideally, we should observe (potential) learners, for example while they’re carrying out a task (on the job, in the classroom, or in a training context).

We also need to find a way to categorise the misconceptions that we come across, so that we can determine how to adapt the learning experience to help learners correct them. Categorising helps because some misconceptions share the same origin; once we discover that, we can ‘tackle’ them at the root. Also, we can’t necessarily address each misconception individually, so understanding the categories helps when designing learning experiences. Finally, categorising misconceptions matters because different types of misconceptions might need different instructional approaches.
What it comes down to is that, as learning professionals, we need to understand how people make sense of new concepts, how misconceptions form, and how to design instruction to tackle them. Multiple studies by Michelene Chi (and colleagues) and the work of Stella Vosniadou (1994) attempt to unravel the types of misconceptions and discuss how to design instruction to tackle them. Chi and colleagues start with ontologies (a set of concepts and categories in a subject area or domain that shows their properties and the relations between them) and the role that they play in making sense of new concepts.
Sense-Making Through Ontologies
Chi’s assumption is that all entities in the world belong to different ‘ontological categories’ – or ‘trees’ – for example, ENTITIES, PROCESSES, and MENTAL STATES (see below).
Distinct ontological trees (Chi, 2008)
Each category has a hierarchy of embedded subcategories; things that are lower in a category ‘belong’ to the category above them (e.g., birds, mammals, and reptiles are animals). Basically, ontologies help us make sense of concepts, and thus, of the world. To put it a bit ‘philosophically’: our conceptions of the world’s entities correspond to our ontological distinctions. We know that people are well able to mark the distinctness of ontological categories; for example, they can readily judge whether a statement is sensible or not. Consider:
- A canary is an hour-long.
- A canary is purple.
The first one is clearly nonsensical, and people easily spot that. In Chi’s approach it’s nonsensical because there’s a ‘category mistake’: ‘an hour long’ describes an attribute (time duration) that can never go with the ontological category in question, ENTITIES. The second might be judged wrong yet sensible, as it only has an incorrect attribute (the wrong colour); colour can, of course, go with an ENTITY.
Funnily enough, Chi et al.’s article used “a canary is blue” as an example of an incorrect-yet-sensible ‘wrong attribute only’ statement. However… blue canaries do exist, so we changed it to purple to make the point 😊.
Categorising – the process of identifying or assigning a concept to an ontological category to which it belongs – is an important learning mechanism. The more and better the categories, the richer our mental model is. Categorising – in itself – is a way for us to build mental models (Van den Bogaart et al., 2016).
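To make the idea of category (mis)matches concrete, here’s a minimal sketch in Python. It’s our own illustration, not Chi’s; the trees and attribute lists are hypothetical simplifications. It models a few ontological trees, the kinds of attributes each tree allows, and a check for category mistakes:

```python
# Illustrative sketch (ours, not Chi's): ontological categories as a
# small set of 'trees', each allowing certain kinds of attributes.
# All names and attribute sets here are hypothetical simplifications.

ALLOWED_ATTRIBUTES = {
    "ENTITY": {"colour", "weight", "size"},
    "PROCESS": {"duration", "rate"},
    "MENTAL STATE": {"intensity", "valence"},
}

# A concept is assigned to (categorised under) one tree.
CONCEPT_TREE = {
    "canary": "ENTITY",
    "thunderstorm": "PROCESS",
}

def is_sensible(concept: str, attribute: str) -> bool:
    """A statement is sensible when the attribute type fits the
    concept's ontological tree; otherwise it's a category mistake."""
    tree = CONCEPT_TREE[concept]
    return attribute in ALLOWED_ATTRIBUTES[tree]

# 'A canary is purple' may be factually wrong, but it's sensible:
print(is_sensible("canary", "colour"))    # True
# 'A canary is an hour long' is a category mistake:
print(is_sensible("canary", "duration"))  # False
```

The point isn’t the code itself but the structure: a statement can be factually wrong yet sensible (wrong value, right kind of attribute), while a category mistake pairs a concept with a kind of attribute its tree can never carry.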
What happens when we learn new concepts
When we need to learn something that’s new to us, we sometimes have no prior knowledge at all (although we might have some related knowledge). In that case, prior knowledge is missing, and learning consists of adding the knowledge to our existing conceptual structures, even if these are ‘wrong’ or incompatible (see our recent blog on the power of prior knowledge).
Vosniadou (1994) suggests that, in childhood, we develop naïve frameworks, based on our experiences with the world around us and how we interpret it (this goes back to what Geary calls biologically primary learning and the various folk systems). These frameworks consist of presuppositions (things we assume beforehand) and also of various specific theories that we ‘make up’ to make sense out of the new concepts. The underlying assumptions of these ‘naïve’ frameworks later limit us when we interpret new information. This is because we’re seemingly programmed to create relatively coherent explanations for ourselves (we’re biased to ‘connect the dots’ so as to explain things to make sense), based on everyday experience (and then years of confirming our own mental models) so it’s hard to marry the new information with our existing models.
Similarly, when we do have some correct prior knowledge about the to-be-learned concept, this knowledge is usually incomplete. Here, learning consists of ‘filling the gaps’. In both cases, the missing or naïve knowledge can be relatively easily revised or removed through ‘enriching’ instruction. In the words of Jean Piaget, the new information is assimilated into already existing knowledge schemata/models, or the schemata/models are accommodated based upon the new information. Usually, although these ‘preconceptions’ are wrong, the shift takes place within the same ontology or hierarchy, according to Chi (2008). Chi calls this ‘conceptual reorganisation’ and that, again, is relatively easy to achieve.
In contrast, when we have prior knowledge that conflicts with the to-be-learned concepts, we’re dealing with misconceptions. To ‘shift’ these, we need to make a conceptual change (in contrast to ‘reorganisation’). Chi defines conceptual change as the process of removing misconceptions.
According to her (2008), when we’re dealing with a misconception, we’ve placed the concept in the wrong ontological category. For example, an individual might assign the concept of ‘heat’ to ‘matter’ (a sensation of hotness) instead of ‘process’ (the speed at which molecules move: the greater the speed, the hotter something feels). So, the learner incorrectly conceives of heat as ‘hot stuff’ (= ENTITY), while it’s actually the speed of movement of molecules (= PROCESS). When this initial categorisation is flawed, conceptual change (reassigning the concept ‘heat’ to PROCESS instead of ENTITY) needs to take place. Vosniadou (1994) warns that the ontological categories that Chi et al. (1994) use don’t explain why reassigning a concept from one category to another is so difficult. For example, Vosniadou asks, why is it so difficult to reassign the concept ‘whale’ from the category ‘fish’ to ‘mammal’? Or, in our example above, why is it so difficult to reassign the concept of heat to PROCESS instead of ENTITY?
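Chi’s heat example can be pictured as moving a concept across ontological trees, which changes which attributes even make sense for it. Here’s a minimal sketch in Python (our own illustration; the attribute sets are hypothetical simplifications):

```python
# Illustrative sketch (ours, not Chi's): conceptual change as
# reassigning a concept to a different ontological tree. The
# attribute sets below are hypothetical simplifications.

ALLOWED_ATTRIBUTES = {
    "ENTITY": {"weight", "can_be_contained"},
    "PROCESS": {"rate", "duration"},
}

# The learner's (mis)categorisation: heat as 'hot stuff' (matter).
learner_model = {"heat": "ENTITY"}

def sensible_attributes(model, concept):
    """Which kinds of attributes make sense, given the concept's tree."""
    return ALLOWED_ATTRIBUTES[model[concept]]

print(sorted(sensible_attributes(learner_model, "heat")))
# ['can_be_contained', 'weight'] -- heat (mis)understood as stuff

# Conceptual change: reassign 'heat' across trees, ENTITY -> PROCESS.
learner_model["heat"] = "PROCESS"
print(sorted(sensible_attributes(learner_model, "heat")))
# ['duration', 'rate'] -- now process-like attributes apply
```

Note that the ‘change’ here is a single reassignment; what the sketch can’t show is precisely Vosniadou’s point, namely why making that reassignment is so hard for a real learner.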
We know that misconceptions (in contrast to preconceptions) are indeed tough to tackle. They’re very persistent, even when confronted with strong evidence to the contrary and/or instruction (Chi & Roscoe, 2002). If you want to know more about why misconceptions are so stubborn, we strongly recommend Cook and Lewandowsky’s (2011) Debunking Handbook. You can also read our blog ‘Why Myths Are Like Zombies’. However, both Vosniadou (1994) and Chi (2008) take it a step further when trying to answer the question why this type of incorrect knowledge is so resistant to change. Why is it so hard for us to part with our conceptions of the world (e.g., our beliefs) even in the face of evidence to the contrary? Chi suggests that the level of resistance depends on our representation of knowledge at three different levels: a) our individual beliefs, b) our mental models, and c) the categories. The question, thus, is what these levels tell us about the conflict between misconceived and to-be-learned knowledge and how instruction should be designed to tackle such conflicts. In our next blog, we’ll look at some examples to make all this more concrete.
Chi, M.T.H. (2008). Three types of conceptual change: Belief revision, mental model transformation, and categorical shift. In S. Vosniadou (Ed.), Handbook of research on conceptual change (pp. 61-82). Hillsdale, NJ: Erlbaum.
Chi, M. T., Slotta, J. D., & De Leeuw, N. (1994). From things to processes: A theory of conceptual change for learning science concepts. Learning and Instruction, 4(1), 27-43.
Chi, M. T. H., & Roscoe, R. D. (2002). The processes and challenges of conceptual change. In M. Limón & L. Mason (Eds.), Reconsidering conceptual change: Issues in theory and practice (pp. 3-27). Dordrecht, The Netherlands: Kluwer.
Cook, J., & Lewandowsky, S. (2011). The Debunking Handbook. St. Lucia, Australia: University of Queensland. November 5. ISBN 978-0-646-56812-6. Retrieved from https://www.skepticalscience.com/Debunking-Handbook-now-freely-available-download.html
Pablico, J. R. (2010). Misconceptions on force and gravity among high school students (Unpublished Master’s Thesis), Louisiana State University, Pineville, USA.
Shank, P. (2017). Practice and feedback for deeper learning. Denver: Learning Peaks LLC.
Van den Bogaart, A. C., Bilderbeek, R. J. C., Schaap, H., Hummel, H. G. K., & Kirschner, P. A. (2016). A computer-supported method to reveal and assess Personal Professional Theories in vocational education. Technology, Pedagogy and Education, 5, 613-629. http://dx.doi.org/10.1080/1475939X.2015.1129986
Vosniadou, S. (1994). Capturing and modeling the process of conceptual change. Learning and Instruction, 4(1), 45-69.
 Ontology is usually seen as a philosophical term, but it is also used in information and computer science to mean a representation, formal naming and definition of the categories, properties and relations between the concepts, data and entities that substantiate one, many or all domains of discourse. (https://en.wikipedia.org/wiki/Ontology_(information_science))
 A mental model is an internal representation of a concept, or an interrelated system of concepts, that corresponds in some way to the external structure that it represents (Chi, 2008)