A long time ago there were weavers.
I mean, there still are weavers. Only recently I've been seeing reels on Instagram from a very talented lass weaving Iron Maiden album cover blankets. You should look her up immediately. Weaving gave us the first practical programmed machines, the Jacquard looms. If it hadn't been for their punched-card technology we wouldn't have had mechanical fairground organs, and Lovelace and Babbage wouldn't have considered using punched cards for memory in the Analytical Engine… Sorry, distracted there. But man, that's some gorgeous history right there.

Theirs was manual work, hand-crafting cloth using simple technology. It required skilled labour, though. And then some clever bugger comes along and mechanises it. Automates it. What one man could do in a week, this thing could do in a minute (or something like that). Cue complaints that it's the end of the world, that technology is going to ruin everything. And yet we're still here. The printing press is going to mean the end of writing (spoiler: it didn't). The computer is going to mean the end of mankind as we know it and yes, yes it did. Mankind as we knew it, pre-computer, was a different person. Now we've got computers, though, mankind as an overall entity is still here. As far as we know. Hell, we could all be in a computer simulation of the universe. Which does make writing this post on a computer – or rather a simulation of a computer inside a simulation of a consciousness – a very meta thing to do. But the crucial thing is, we're still here. We just look at things differently.
I think the point I'm aiming for here is that technology is always advancing. Things move on. The goalposts for what will destroy mankind are always moving. General Purpose Artificial Intelligence is one of those existential threats that somehow seems massive – now – but will be adjusted to, incorporated into our everyday world, and just become part of the furniture. Either that or come 2043 it will have taken over the world completely, destroyed humanity, and I'll feel foolish about these words.
The way I see it, we've got 2 potential futures ahead of us. Behind door 1 is The Terminator future. Rise of the machines. Technology going from being our slave to being our master. There are plenty of other examples of this in dystopian fiction. On the other hand, we've got door 2. Behind door 2 is the Star Trek future. The protopian future (a word so unusual that my spellchecker doesn't recognise it). There's not a lot of protopian fiction out there, but I would recommend starting with the "News From…" series by Robert Llewellyn (https://www.fantasticfiction.com/l/robert-llewellyn/news-from/).
Right at the moment, we’re hovering somewhere in front of these two doors. We’re trying to make up our mind which one we want to open, which way we want our future to go. If the Star Trek model teaches us anything, it’s that things might have to get worse before they get better. But that they will get better.
In the classroom we are having important discussions about AI as it exists now – and this is as bad as it is going to be, technologies like this will either get better or they’ll die out. We’re trying to teach students to use their own brains as much as possible no matter how much easier it is to rely on an outside source.
We cannot ignore AI. Much as we'd like to, it's being crowbarred into everything. "Remove the human, replace with the AI, profit" seems to be the current business model. The Department for Education, in association with the Chartered College of Teaching, has produced a training course for teachers on effective use of AI. While it doesn't aim at any one particular flavour, it does drive home the point that you need to double-check everything the AI has produced to make sure it's fit for purpose. And, in all honesty, by the time you've done that you might as well have created whatever it was in the first place. This checking is the step students need to know enough to do for themselves. And yet I've seen people swear blind that you don't have to do this, that the AI is never wrong – or that it has never been wrong for them, so you must be doing something wrong.
So this term I will be:
- Introducing year 7 students to AI – we're using Google's Teachable Machine. We'll train it to tell the difference between 2 things and then see how it does.
- Pointing year 7 and 8 students at the Royal Institution Christmas Lectures from 2023 – https://www.rigb.org/christmas-lectures/watch-2023-christmas-lectures – so they can get a better understanding of how these things work.
- Encouraging year 9 and up to engage with AI in a critical way, getting it to act as a partner or coach for them. I'm particularly interested to see how this can help them learn to write Python, testing their understanding of code by having it create problematic code that they can then fix. If there's one thing I know AI is good at, it's creating broken code!
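To give a flavour of the sort of exercise I have in mind – this is my own illustrative sketch, not output from any particular AI, and the function names are invented – the "coach" produces something subtly broken, and the student's job is to spot and fix the flaw:

```python
# A deliberately broken function, the kind an AI "coach" might
# hand a student to debug.
def count_vowels_buggy(text):
    count = 0
    for ch in text:
        if ch in "aeiou":  # the bug: uppercase vowels are missed
            count += 1
    return count

# The corrected version the student should arrive at.
def count_vowels_fixed(text):
    count = 0
    for ch in text.lower():  # normalise the case before checking
        if ch in "aeiou":
            count += 1
    return count

print(count_vowels_buggy("Apple"))  # prints 1 - the capital 'A' is skipped
print(count_vowels_fixed("Apple"))  # prints 2
```

The point of the exercise isn't the vowel counting; it's that the student has to predict what the code should do, run it, and explain the discrepancy – exactly the checking habit mentioned above.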
Meanwhile I'll continue to remove AI from the platforms and systems I use. No, Outlook, I don't want your help with this email – please just remove the Copilot button completely. Windows? No. Just no. I'm happy to see if it can help me where I want it to, but please make every single AI integration something you deliberately have to opt in to, and something that's very easy to disable.
Now, let’s just hope humanity makes the right choice. And that choice isn’t door 3, behind which there’s a tiger with a gun waiting for us.

