Technology & Innovation - Issue 11

Weighing up the benefits

Given the disruptions we've seen AI cause already, it's vital that we foster students' ability to think about the technology critically, states Ben Garside

Staying current with the latest advancements in technology – particularly AI and generative AI (GenAI) – can be tough, even for tech enthusiasts. I have to admit to sometimes feeling this way myself, but one new development from OpenAI earlier this year really caught my attention. The company introduced a new version of its ChatGPT chatbot – GPT-4o – with a new female-sounding voice. The accompanying launch video showcased the model's ability to assist users with tasks like solving maths problems and offering presentation tips, all while maintaining a friendly and cheerful tone.

What's the problem?

With big tech companies vying for market share in what's rapidly become a highly competitive space, the addition of voices to these AI models was perhaps always inevitable. It still got me thinking, though – why add a voice? And why did we have to see this particular model flirt with the presenter during the launch?

Working in the field of AI, I've always seen AI as a powerful problem-solving tool. With GenAI, however, I've often wondered what problems its creators are actually trying to solve, and how we can help young people better understand the technology. And the fact is, I'm still really not sure.

That's not to suggest that I don't think GenAI has any benefits, because it does. I've seen many great examples in education alone: teachers using large language models (LLMs) to generate ideas for lessons, help differentiate work for students with additional needs, and create example answers to exam questions that their students can then assess against the mark scheme.
Educators are creative people, and whilst it's cool to see so many good uses of these tools, I can't help but wonder whether the developers had any specific problem-solving applications in mind when creating them, or whether they simply hoped that society would find good uses for them somewhere further down the line. So whilst there are some good uses of GenAI, you don't need to dig very deeply before you start unearthing some major problems.

Troubling issues

Anthropomorphism refers to the assigning of human characteristics to things that aren't human – something we all do, all the time, without consequence. When we do the same with AI, however, we soon run into troubling issues. We'll commonly give names to inanimate objects (I call my vacuum cleaner 'Henry', for example), but the difference with GenAI is that chatbots are deliberately designed to be human-like in their responses, to the point where it's easy for people to forget that they're not actually speaking to a human.

"The prospect of teenagers seeking solace and emotional support from a generative AI tool is a concerning development"

As feared, evidence has now started to emerge that some young people are showing a desire to befriend these chatbots, going to them for advice and emotional support. It's easy to see why. Observe the following exchange between the presenters at the GPT-4o launch and the model itself:

RkJQdWJsaXNoZXIy OTgwNDE2