ChatGPT – What teachers need to know

Mitte Schroeven, Wouter Buelens & Paul A. Kirschner, Thomas More University of Applied Sciences. Originally published here.

Figure by DALL.E-2

Artificial intelligence (AI) has been all over the news these last few months. The OpenAI chatbot ChatGPT stunned everyone with the speed and accuracy with which it generates text, but at the same time raised concerns about the influence such programs might have on the future of education. In this blog we’ll give you an overview of what this new tool is good at (and not so good at) and how you could deal with it in the classroom.

AI is out there, whether you like it or not

The spontaneous reaction of a former colleague to a demonstration of ChatGPT’s abilities was: “Oh no, do you think our students are using this to do their homework?” The answer to this question is most likely ‘yes’. We shouldn’t be naïve. They’re not crazy. If it saves them time, earns them grades and they get away with it, of course they will. We’ve spoken of this earlier as the discipulus economicus or calculating student, the education variant of the homo economicus.

So first and foremost, it is important to realise that tools like ChatGPT exist. Or, as the Dutch Kennisnet [Knowledge Network] advisor Wietse van Bruggen puts it: “This technology isn’t going back in the box. It’s out there”. The technology in itself is nothing new: text-generative AI has been around for a while and is constantly developing and improving. However, this is the first time that it’s available to the general public, and consequently to both teachers and students.

What it can (and can’t) do

AI text applications can produce text at an astounding speed, and quite often it is hard to distinguish their output from text written by a human. ChatGPT can write essays, compose poetry, answer questions and even write computer code, all in just seconds. The results are quite impressive and might even be better than what most of our secondary school students can produce, as this tweet by Carl Hendrick illustrates:

And if you don’t believe it, feel free to try it for yourself!

But of course, AI also has several limitations. As educators, being aware of the limits of these tools might be even more important than knowing their strengths. On ChatGPT’s homepage there’s a clear disclaimer that it ‘may occasionally generate incorrect information’ and ‘produce harmful instructions or biased content’. The system is trained on large chunks of online information, which – as we all know – isn’t always reliable or politically correct. In the world of tech, this is called GIGO: “garbage in, garbage out”. A chatbot generates text by repeatedly adding the words that are most likely to follow the previous ones, but it does not check the accuracy of the information. Lastly, ChatGPT was trained on data from before 2021, so it can’t generate text on recent events (yet).
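To make that ‘most likely next word’ idea concrete, here is a toy sketch in Python. This is our own illustration, not how ChatGPT actually works (real systems use huge neural networks, not word counts), but it shows the core mechanism: the program continues with statistically likely words and never checks whether what it says is true.

```python
from collections import Counter, defaultdict

# A tiny 'corpus' the toy model learns from.
corpus = ("the cat sat on the mat and the cat ate "
          "the fish on the mat").split()

# Count which word tends to follow which.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    """Continue from `start` by always picking the most likely next word."""
    words = [start]
    for _ in range(length):
        options = counts.get(words[-1])
        if not options:  # dead end: this word was never followed by anything
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```

The output is perfectly fluent-sounding word salad: fluency comes from the statistics, truth would have to come from somewhere else. That is exactly why a chatbot can confidently produce nonsense.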

We put it to the test by generating a biographical sketch of one of the authors of this very blog, Paul Kirschner. The chatbot Perplexity surprised us by revealing that Paul was an expert on the works of Joseph Conrad, who wrote Heart of Darkness, and also sadly announced that Paul had passed away in 2020. Last time we checked, Paul was still alive and an expert on educational psychology rather than literature.

To know whether the information in such a text is correct or ‘fake news’, you don’t get very far by applying generic skills like looking for citations: the generated text is trying very hard to look trustworthy. There is only one thing that can help you out here, and that is cold, hard facts.

Also, a lot depends on which ‘prompts’ you give the AI. An artificial intelligence system is kind of like a genie in a bottle: it’s important to state your wishes very clearly, in a language it understands. This is called ‘prompt engineering’ (formulating your instructions in such a way that they produce the best results), and it’s definitely a new buzzword. Someone who knows how the technology functions, and on top of that knows what an excellent end product looks like (a well-written essay, a stunning illustration, …), will automatically get better results.

Now what?

It’s not the first time technology has entered our classrooms. These past 15 years, many discussions have been conducted in teachers’ rooms all around the world about how to deal with spell checkers, plagiarism, Wikipedia, and automatic translation software like Google Translate and DeepL. And once again, as educators, we’ll have to learn to live with these developments and think critically about how we want to deal with them. As a teacher right now, there are four ways to react to text-generative AI systems.

  • You can attempt to ban this new technology from your school and classroom entirely, like the schools in New York recently decided, for fear of a negative impact on learning.
  • You could try to outsmart students, by letting them write essays by hand, by letting them write in class under teacher supervision, or both. Or you could put all your hopes on AI-detection software.
  • You could also reflect on how this new tool could be of value in your class, teach students how to write well, and how they can use tools like ChatGPT to create better texts.
  • Last (and hopefully least), you can just pretend nothing’s ‘wrong’. From time to time you could look at a student’s essay suspiciously, but not bother to follow up on it, and hope that the problem will fix itself. Or – when worst comes to worst – you could just eliminate the middle man (the student) altogether, and feed your instructions directly to the AI, like this (fictional) teacher in the Dutch parody newspaper De Speld 😊

The most important reason not to try to ban ChatGPT from your classroom is simply because it probably won’t work anyway. History teaches us that attempts to ban new technology from schools are destined to fail. No matter how cunning you are, students will always find a way around the restrictions.

If a new technology causes problems, one way of solving them is by turning to even newer technologies, in this case tools like GPTZero. GPTZero calculates the probability of a text having been written by a human or an AI. This initiative in the ‘battle against plagiarism’ comes from an unexpected quarter: it was designed by student Edward Tian, because he disapproves of academic dishonesty, but also simply because ‘humans deserve to know’ (let’s hope Tian has not yet been chased off campus by his fellow students with artificial tar and feathers). OpenAI, the company that developed ChatGPT, revealed that there are plans to digitally watermark AI texts to make them recognisable. But most teachers are probably not too keen on this digital cat-and-mouse game. As Kevin Roose argues in the New York Times: “Several educators I spoke with said that while they found the idea of ChatGPT-assisted cheating annoying, policing it sounded even worse.”
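For the curious: detectors like GPTZero reportedly look at properties such as ‘perplexity’ (how predictable the text is) and ‘burstiness’ (how much the sentences vary). The toy function below is our own rough illustration of the burstiness idea only, not GPTZero’s actual algorithm: human writing tends to mix short and long sentences, while AI text is often more uniform.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Spread (standard deviation) of sentence lengths, in words.

    A crude stand-in for the 'burstiness' signal some AI detectors use.
    Higher values mean more variation between sentences.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

# Varied sentence lengths (more 'human-like' in this crude model)...
human = ("I ran. The dog, startled by the noise, bolted across "
         "the long wet field. Rain fell.")
# ...versus very uniform sentences.
uniform = ("The cat sat on the mat. The dog sat on the rug. "
           "The bird sat on the perch.")

print(burstiness(human), burstiness(uniform))
```

Of course, a heuristic this simple is easy to fool, which is part of why the cat-and-mouse game mentioned above is unlikely to be won by detectors.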

It’s therefore best to use ChatGPT just as we use other tools in the classroom: sometimes students can use them, and sometimes they must practice without. After all, as a teacher, you are (hopefully) the one who decides when students are allowed to use their calculators and when they aren’t. It is appropriate – especially for beginning writers – to initially focus on writing without technological assistance. First, teach students how to write a good text themselves, and point out the importance of doing so. And technology can help you do that: ChatGPT can design contrasting examples or generate texts for students to rewrite and improve. You can show students how such a chatbot works, what its strengths and weaknesses are, and how they can use it themselves.

Several teachers immediately saw the benefits. For example, Belgian teacher Bram Faems used the chatbot as inspiration to create a reading exercise, and UK teacher Jonathan Lee had it write an entire lesson plan. The word inspiration is crucial here: ChatGPT can get you started, but obviously a lot of content and pedagogical knowledge is needed to make the right selection from the ideas and bring them to the classroom in the right way. You can use ChatGPT to generate answers to open-ended questions for students to assess for correctness, have it make up questions or tasks for students to formulate an answer to, or use ChatGPT as a debating partner to produce arguments for or against a particular proposition. In short, the possibilities are endless for those ready to be creative. However, there is one important prerequisite …

Knowledge and the curriculum

In his book “Why knowledge matters”, E.D. Hirsch says the following about online search engines:

“The internet thus rewards people who already have a wide knowledge and a big vocabulary. It makes the rich richer. Google is not an equal opportunity fact finder: it rewards those already in the know. Instead of being an agent of equality, Google rewards cognitive insiders.” (p.83)

This also goes for the use of AI tools: a professional illustrator who knows how to best use the image-generating software DALL.E-2, and a professional copywriter who knows what a good article looks like and has sufficient knowledge about the subject, might be able to use AI tools as a co-pilot to create images and text quickly. Just like we used both DeepL and our knowledge of the English language to help us translate this blogpost and some of the peskier Dutch sayings to English (and yes, we had to make some tweaks manually).

The major pitfall is the assumption that we should no longer teach students to write because they ‘won’t need it later anyway’, but only teach them to ‘critically evaluate’ texts. Like all other ‘generic skills’, thinking critically can’t be taught separately: whether you’re able to think critically about a subject largely depends on your knowledge of it. Moreover, ‘critically evaluating’ texts generated by ChatGPT presupposes both subject knowledge and knowledge of what a good text and grammatically correct sentences look like.

Former head of CERI at the OECD Dirk Vandamme recently tweeted:

Indeed, banning ChatGPT or other AI tools from the classroom makes no sense. But let us think more thoroughly about the consequences and not repeat the mistakes of the past, when we were told that the calculator made learning arithmetic unnecessary, that Google made knowledge unnecessary, and that Google Earth made it unnecessary to learn where countries, cities or rivers are. With disastrous consequences. Using this logic, we might as well ditch education altogether and leave everything to ChatGPT. Now more than ever, knowledge, understanding and competence are needed.

PS. The fact that AI is here to stay doesn’t mean that we shouldn’t also consider the ethical challenges that come with it, such as the consequences of ChatGPT most probably becoming a commercial product. As Marco Kalz phrases it: “It is of course absolutely valid if researchers and teachers are “going with the flow” and show their involvement in the so called “hot topics” in educational technology. The problem is, that this discourse is pushing away the real hot topics in education: Access to education, quality of education and inclusion of non-standard learners into educational systems.” Other concerns, such as the ones Iris van Rooij calls out (bias, consent, copyright infringement, harmful content, and the environmental and social impact), should not be overlooked either.