Predicting Tech Trends in Education is Hard, Especially about the Future

Paul A. Kirschner & Mirjam Neelen


In the last few months, two “predictive” documents found their way into our hands. The first is the 2016 NMC[1]/CoSN[2] Horizon report for elementary and secondary education and the second is the SURF Trend report 2016: How technological trends enable customised education. Both are very interesting and well-written reports. However, they’re also a bit tricky in that they’re not really underpinned by concrete evidence from the educational sciences. Their predictions are therefore, in our opinion, a bit like reading tea leaves: they’re very visible, but what do they mean?

As a preamble to discussing the SURF Trend report 2016, an aside to frame some background. Last year, Paul Kirschner presented a keynote at the 6th International Conference on Learning Analytics and Knowledge (LAK16), titled Learning Analytics: Utopia or Dystopia (you can watch the (highly recommended! :)) YouTube video here). Paul’s plea was that, on the one hand, learning analytics have the potential to bring a lot of good; he painted a picture of five utopian futures:

  • Creating predictive models to help learners and instructors
  • Guiding learners to effective and efficient study strategies
  • Providing just-in-time recommendations and/or interventions
  • Providing real time feedback
  • Creating better learning environments by providing institutions with information about the quality of their courses, teachers, use of materials, etc.

On the other hand, learning analytics can also do a lot of harm. Paul discussed five dystopian futures:

  • Myopically viewing education as a simple process that is easily modelled
  • Basing decisions and interventions on “data rich but theory poor” studies
  • Basing decisions and interventions on wrong or even invalid variables
  • Basing decisions and interventions on correlations and not causality
  • Due to the previous four, achieving unintended and unwanted effects such as pigeonholing, profiling, and stereotyping which stigmatise and work counterproductively

These dystopian futures arise because schools, teachers, educational technologists, programmers, sponsors, stakeholders, data scientists, and so forth seem to overlook one key person when trying to implement learning analytics, namely – drumroll please – an educational or learning scientist!

Let’s take a look at George Siemens’ Learning Analytics Model, in which the Data Team members are highlighted via the red box.


Siemens Learning Analytics Model (From Siemens (2013, p. 13))[3]

Even when we used our binoculars / microscopes, we weren’t able to spot an educational/learning scientist anywhere in the box. As a result, many learning analytics applications lack a solid learning theory as their foundation. In other words, the Data Team members – as listed in Siemens’ model – won’t really know which student variables are essential for a well-functioning learning analytics model. They also lack the knowledge of which additional variables, for example age or the actual setup of the educational environment, might influence the results. Because of this lack of a theoretical framework, they’re also not necessarily aware of whether the results they find can be applied elsewhere. And finally, due to their lack of training in the learning sciences, when something is found – most often a correlation (A and B have something to do with each other) – it is treated as if it were a causal relationship (A causes B). In other words, for learning purposes, the results are actually close to worthless. Look at it this way: while there is probably a 100% correlation between drinking milk and heroin addiction (all heroin addicts drank milk when they were children), nobody would say that this is a causal relationship (drinking milk leads to heroin addiction) and suggest that we ban milk for children. For some other great and funny spurious correlations, go here.
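The correlation-without-causation trap is easy to demonstrate with a few lines of code. Below is a minimal, hypothetical simulation (the variable names are our own illustrative assumptions, not anything from the reports): a hidden confounder – call it “motivation” – drives both how often students log in and how well they score. The two observed variables end up strongly correlated even though neither causes the other, which is exactly the situation a theory-free Data Team risks misreading.

```python
import random

# Hypothetical illustration: a hidden confounder produces a strong
# correlation between two variables that have no causal link.
random.seed(0)

n = 1000
motivation = [random.gauss(0, 1) for _ in range(n)]  # hidden confounder

# Both "login frequency" and "test score" are driven by motivation
# plus independent noise; neither causes the other.
logins = [m + random.gauss(0, 0.5) for m in motivation]
scores = [m + random.gauss(0, 0.5) for m in motivation]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(logins, scores)
print(f"correlation between logins and scores: r = {r:.2f}")
# r comes out strongly positive, yet intervening on logins (forcing
# students to log in more) would not raise scores: the association
# exists only because of the hidden confounder.
```

A dashboard built on such data would happily recommend “log in more often” as an intervention; only a theory of why the variables move together reveals that this would be as useful as banning milk.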

The problem here is that, partially due to the lack of a proper theoretical foundation (with hypotheses based on theory, and the variables to collect and study chosen on the same basis), current work on learning analytics – led or initiated by people such as education or training managers, school directors, sponsors, stakeholders, data scientists, and even researchers – just uses the data that’s available or gathers new data that is easy to obtain, such as login data, internet search behaviours, test results, and so forth. These people can’t and don’t look for the data that would actually be useful and needed and that could confirm their hypotheses[4]. This cartoon paints the current learning analytics picture:


This is known as the streetlight effect: an observational bias characterised by searching for something where it’s easiest to look instead of where it’s most likely to be found.

What does all this have to do with SURF’s 2016 trend report? The report, which is indeed of high quality and an interesting read, is also guilty of the problems illustrated above. In our opinion, it discusses trends without worrying too much about the educational science aspects of the learning technologies involved. The report covers technological trends that enable or facilitate flexible and personalised education. These trends bring three common “education innovation” threads to the surface:

  • Didactic enrichment: technologies that make education more interesting, motivating, and just better.
  • Flexibility: technologies that enable boundaries between educational formats, programmes, and institutions to diminish.
  • Adaptivity: technologies that tailor learning experiences to the learner on various levels.

According to the SURF report authors, a perspective on future education emerges from the sum of these trends.


Each of the thirteen technologies discussed in the report is described very clearly and is accompanied by an introductory future scenario, an explanation of the technology, examples of current implementations, an account of how the technology can contribute to tailored-to-your-needs education, and so forth. However, these scenarios lack a description of the educational scientific conditions (and that doesn’t mean practical conditions!) that would underpin the decision to use the technology. When would it work well, and for whom? What learner characteristics determine whether a technology will have certain effects or not?

Let’s take virtual reality (VR) as an example. The authors write that VR offers an enrichment of education because learners can determine how, where, and when they will use VR for their own learning process. The question, however, is: is this desirable? Is the learner really able to do this (i.e., determine what’s best for her/his own learning process) and, if so, what type of learner are we talking about (e.g., first-year student, PhD candidate, post-grad, …)? What type of learning support does (s)he need? How do you take into account that almost all people (and especially learners attending school) struggle to identify what they are and aren’t able to do (think Dunning-Kruger effect)?

Also, deciding for yourself how to use VR for your own learning process is usually, if not always, based on personal preference and not on what would be the most effective approach to learning. We have known for ages that how learners organise their learning process is not the same as (and often at odds with) the approach that helps them learn best. For a simple comparison: while most people prefer food that is sweet, salty, and/or fatty, few will say that eating that combination of foods is beneficial to their weight and general health.

All of these questions and more need to be answered before you can decide whether and how to implement various technologies for personalised learning in an effective, efficient, and satisfying manner. If you don’t answer, let alone ask, these questions, our question is: what the heck are your predictions about the future based on, and what is their actual value?


A second problem is that predicting the future is very problematic in general. Take, for example, the Horizon report from the New Media Consortium / Consortium for School Networking. In 2010 it stated that cloud computing would be implemented within one year. The same prediction was made in the 2011, 2012, 2013, and 2014 reports. In other words, since 2010 the Horizon Report K-12 Edition has been predicting every year that, within one year, elementary and secondary education will have total access to all the digital documents and programmes needed in education from every device at home and at school (e.g., TV, smartphone, tablet, laptop, desktop computer, and so forth).


Collaborative learning had a 1-year horizon in 2009, while it seems to have a 3-5 year horizon in 2016! Sounds like the predictions of when the apocalypse will come.

This brings us back to the title of this post. It has been said that baseball player Yogi Berra – the American version of Johan Cruijff when it comes to “wise” expressions – once answered an interviewer’s question in the following way: “It’s tough to make predictions, especially about the future.” The same seems to be the case for education!



[1] New Media Consortium

[2] CoSN (Consortium for School Networking)

[3] Siemens, G. (2013). Learning Analytics: The Emergence of a Discipline. American Behavioral Scientist, 57, 1380-1400. doi:10.1177/0002764213498851



[4] Sadly enough, it is our experience that most learning analytics studies don’t even have hypotheses as to what the researchers hope to find. David Williamson Shaffer compares good learning analytics research to what a mining company does. The company doesn’t just dig anywhere hoping to find oil, coal, metal, or whatever. It makes use of geological theories, understands the relationship between plate tectonics and what these movements create, etcetera.