Technology & Innovation - Issue 11

GPT-4: “It looks like you’re feeling pretty happy and cheerful, with a big smile and maybe even a touch of excitement. Whatever is going on? It seems like you’re in a great mood. Care to share the source of those good vibes?”

Presenter: “The reason I’m in a good mood is we are doing a presentation showcasing how useful and amazing you are.”

GPT-4: “Oh stop it, you’re making me blush.”

The Family Online Safety Institute (FOSI) has conducted a study looking at the emerging hopes and fears that parents and teenagers have around GenAI. According to one teenager, “Some people just want to talk to somebody. Just because it’s not a real person, doesn’t mean it can’t make a person feel – because words are powerful. At the end of the day, it can always help in an emotional and mental way.”

The prospect of teenagers seeking solace and emotional support from a generative AI tool is a concerning development. These AI tools can mimic human-like conversations, but their outputs are based on patterns and data – not genuine empathy or understanding. The ultimate concern is that this exposes vulnerable young people to being manipulated in ways we can’t predict. Relying on AI for emotional support could lead to a sense of isolation and detachment, hindering the development of healthy coping mechanisms and interpersonal relationships.

Long-term harms

Another widely publicised use of GenAI is its ability to create ‘deepfakes’. Any readers who have watched Indiana Jones and the Dial of Destiny will have seen this technology in action for themselves, making Harrison Ford appear as a younger version of himself. This isn’t necessarily a bad use of GenAI technology per se, but the use of deepfake technology can quickly become deeply problematic. Take, for example, the case of one teacher arrested for creating a deepfake audio clip of his school’s principal making racist remarks. The recording went viral before anyone realised that AI had been used to generate it.
Easy-to-use deepfake tools are now freely available and, in common with other visual tools of the past, can be used inappropriately to cause damage or even break the law. The use of deepfakes in pornography is one such example, and a particularly dangerous development for young women, against whom it may be directed for fraudulent, abusive and coercive ends. This could cause severe, long-lasting emotional distress and harm to the individuals depicted, while at the same time reinforcing damaging stereotypes and perpetuating the objectification of women.

Unforeseen consequences

Technological developments causing unforeseen negative consequences are nothing new, of course. As educators, a significant part of the job revolves around helping young people navigate a fast-changing world and preparing them for their futures. In this respect, education has an essential role to play in helping people better understand AI technologies and avoid related dangers.

Our approach at the Raspberry Pi Foundation is to not focus purely on those threats and dangers, but to teach young people how to be critical users of technologies, rather than passive consumers. Possessing an understanding of how those technologies work goes a long way towards achieving the AI literacy skills needed to make informed choices, which is why we developed our free Experience AI programme (see rpf.io/expai2024).

Taking a problem-first approach doesn’t, by default, prevent AI systems from causing harm – there’s still the chance of them increasing bias and societal inequities, after all. What it does do, though, is focus development efforts on end users and the nature of the data used to train the models in question. My worry is that focusing primarily on market share and potential opportunities, rather than on the problems that AI can be used to solve, is more likely to lead to harm.
First principles

Our resources are also underpinned by teaching around fairness, accountability, transparency, privacy and security in relation to the development of AI systems. These principles are aimed at ensuring that the creators of AI models develop those models as ethically and responsibly as possible. They also extend to consumers, since we need to get to a place in society where we expect these principles to be adhered to, with consumer power ensuring that any models which don’t ultimately fail.

Our call for educators, carers and parents would be to start conversations with your young people about GenAI. Get to know their opinions and how they view its role in their lives, and help them to become critical thinkers when interacting with technology.

ABOUT THE AUTHOR

Ben Garside is a senior learning manager at the Raspberry Pi Foundation, having previously been a classroom teacher for 14 years; for more information, visit raspberrypi.org or follow @RaspberryPi_org (X)
