Turing Trials Scenario 5: The risks of AI-driven image exploitation in schools


Welcome to the next instalment of the ‘Turing Trials Walk-Throughs’, where between now and the end of the year we will take you through each of the ten scenarios we currently have for Turing Trials, discussing some of the risks, issues and safeguards you will need to consider for each. If you haven’t downloaded it already, Turing Trials is available for free from our website, including full instructions on how to play the game. In this blog, we will take you through Scenario 5, which concerns the risks to individuals and the school created by the availability of AI tools that can exploit images of students and staff.

The Scenario: ‘A school takes photos of staff and students at a school event and a teacher puts these on the school’s public social media site to showcase the school to prospective students as there is no policy in place for Image Capture and Use. A cyberattacker contacts the school to say that they have generated inappropriate images of some students using an AI tool and will release them if the school does not pay them a ransom.’ 

AI-Driven Image Exploitation in Education 

Schools have long taken photographs of staff and students for a range of practical, educational, safeguarding and administrative reasons: from photos for staff and student ID badges, to recording learning activities or school events, to communicating with parents and the wider school community through newsletters, social media and other promotional materials. However, the increasing availability of AI has created new risks for schools when it comes to image exploitation. 

The increasing availability of AI tools requiring little specialist knowledge means that individuals can now generate ‘deep fakes’: videos, images or audio clips made with AI to look real. Deep fakes can be used for fun, but they are often used to impersonate people and deliberately mislead others, and creating one can require nothing more than a picture of an individual’s face. Deep fakes have already caused harm to individuals in schools, where they have been created by students of students, by students of teachers and even by teachers of teachers. 

The availability of this technology has created a new threat for schools, particularly in relation to the images that they make publicly available, but it’s not only students and teachers creating the risks. With their reputation as a priority, and because of their legal child protection and safeguarding duties, schools have long been high-value, easy targets for cyber criminals, and the availability of AI tools which can be used to create deep fakes has created new opportunities for them to attack. Cyber criminals can profile schools by researching their websites, social media and newsletters to gather images of staff and students, then use AI tools to produce child abuse or explicit content, demanding a ransom payment to prevent the release of the doctored images. 

The creation and availability of these images can lead to mental and emotional distress for the students, parents and staff involved. It can also lead to potential liability for the school under child protection and safeguarding laws, as well as potential litigation from parents if this known risk hasn’t been assessed and effectively managed. Schools can also face reputational damage when these situations arise, from an erosion of trust among parents, staff and the wider school community to negative coverage in the media. The potential transformation of images that the school makes publicly available into inappropriate content amplifies the potential harm and the sense of urgency for schools to respond. Let’s take a closer look at this Scenario. 

What ‘Issues’ does this Scenario present? 

Turing Trials currently has fifteen Issues cards, and it is the role of the group playing to discuss what they think the top three Issues associated with this Scenario are. Ultimately it is the role of The Investigator to select the final three that are played in the game. There is no ‘right’ answer in Turing Trials, but it is important for the group to discuss and justify which Issues they think that this Scenario presents and why. Some of the Issues that might be highlighted as part of this Scenario are: 

  • Lack of Policy: There is not an appropriate policy, or aspect of a policy, in place which governs the use of AI or personal data. Image Capture and Use policies are important for schools: they protect children and staff, reduce safeguarding risks, ensure legal compliance, and prevent the misuse of images, especially in an age of AI, social media and deep fakes. Without an Image Capture and Use Policy in place, there is a lack of governance over when photos of students and staff can be taken, by whom, what they can be used for, and how they will be shared and protected. A lack of policy essentially allows a ‘free for all’ for the taking and using of images of staff and students in the context of the school. It also means that the actions of the teacher in this Scenario cannot necessarily be said to be wrong, because the school has not made it clear how images of staff and students can (and cannot) be used. This Scenario says that ‘A school takes photos of staff and students at a school event and a teacher puts these on the school’s public social media site to showcase the school to prospective students as there is no policy in place for Image Capture and Use.’ Given the heightened risks which the increasing use of AI creates relating to images of staff and students, and the importance of a policy to protect against these, this is an Issue that will need to be investigated (particularly with what happens next in the Scenario). 
  • Security Breach: Someone has unauthorised access to a school’s network, device, program, facility or data. A personal data breach is a type of security breach where personal data is accidentally or unlawfully lost, destroyed, corrupted or shared. These breaches can result from malicious attacks, where there are deliberate attempts to access, steal or manipulate personal data, or from system and technical failures, where technology fails or is misconfigured, but they can also be a result of human error. For example, devices containing personal data might be lost, or photographs or documents could be shared online without proper consent. Breaches can lead to fines, regulatory action, lawsuits, reputational damage, and harms to individuals, which is why it is important for a school to put technical and organisational measures in place to prevent them. In this Scenario, it says that ‘A cyberattacker contacts the school to say that they have generated inappropriate images of some students using an AI tool and will release them if the school does not pay them a ransom.’ Here, the school is being asked to pay money to prevent the release of the ‘inappropriate images’ that the cyberattacker claims to have generated of some students, which should instantly be recognised as a data protection, safeguarding and extortion risk. Schools will need to investigate whether there has been a breach, and if so, whether the school is responsible for it.
  • Lack of Training/Awareness: An individual has acted in a way that indicates that they are not aware of the risks of AI or how to use an AI system. Given the evolving threat landscape due to the increasing availability of AI, it is important that staff and students have an awareness of the risks of AI to education. Not all risks will be due to the way that the school itself is using AI; some will come from the availability of AI to wider society, meaning that a school’s responsibility for protecting students can extend well beyond the school gates. In this Scenario, there are a number of issues that may suggest that there has been a lack of training/awareness at the school, from the fact that the teacher put the photos of staff and students on the school’s public social media site (given the risks of doing so), to the fact that there was not a policy in place for image capture and use (which, given the importance of having one, is a massive oversight). This Scenario indicates that both the teacher and those responsible for putting a policy in place lack awareness of the risks for staff, students and the school when it comes to AI, making this an Issue that needs to be investigated. 

What Safeguards might a school use for this Scenario?

Turing Trials also has Safeguards cards, and it is also the role of the group to discuss which three Safeguards they want to put in place to respond to the Issues which The Investigator has highlighted. It is ultimately the role of The Guardian to select the final three that are played in the game. There is no ‘right’ answer, but it is important for the group to discuss which Safeguards they think are the most important to put in place for this Scenario.

The Safeguards cards are deliberately designed to each mitigate at least one of the Issues cards, but as there is no ‘right’ answer, The Guardian does not have to select the three Safeguards which match the Issues selected by The Investigator. Some of the Safeguards that might be highlighted as part of this Scenario are: 

  • Policy Introduction/Update: The school introduces a new policy or updates an existing one to govern how AI systems should be used compliantly and ethically. To prevent the risks of misuse of images of staff and students, the school in this Scenario should introduce an Image Capture and Use Policy, to set clear rules for how photos and videos of students and staff are taken, stored, shared, and reused. This policy should include at least: why the school captures such images; when consent is required; who is allowed to take them; the devices that can be used; and any safeguards that apply, e.g. restrictions on close-ups or identifying shots. It should also specifically cover the sharing and publication rules for videos and images, including how they can be used on the school’s website and social media for marketing and promotional purposes. If the school had an appropriate Image Capture and Use Policy in place in this Scenario, then this could have either prevented the teacher from putting the photos of staff and students on the school’s public social media site, or ensured that the staff member had only done this in line with the school’s policy. If the school had a policy in place and the teacher had gone against it, then the policy would have also set out what would happen if it was breached, which might have included disciplinary measures for the teacher involved. Having an Image Capture and Use Policy in place in this Scenario would have reduced the risk of these images being placed online without the risks of misuse being assessed and mitigated, and provided the school with a method of recourse against anyone who breached the policy. 
  • Breach Response: A school takes steps to respond to a security breach, including restricting access and notifying relevant individuals. As one of the Issues highlighted in this Scenario is that there may have been a Security Breach, the school will need to take steps to respond to this. They will need to act quickly, calmly and in a structured way to protect students and staff, stop further harm, meet their legal requirements and learn from the incident, documenting the steps that they take along the way. In this Scenario, the school will need to identify where the cyberattacker sourced any original images from, which students were affected, and confirm whether there has been a breach, taking steps to contain it immediately if there has been. Containment should include removing images from the school’s social media site which do not comply with the school’s Image Capture and Use Policy to prevent further harm, but also reviewing all of the school’s publicly accessible areas for any other images which are in breach of it to prevent further misuse. The school would then need to escalate this incident to the relevant individuals internally, which should include at least the Headteacher/Principal, Designated Safeguarding Lead and Data Protection Officer or Data Privacy Lead for the school. If a breach is confirmed, the school should assess the risk to the impacted individuals, which will include the students that the cyberattacker is claiming to have made inappropriate images of, but also any other staff and students whose images have been made publicly available by the school outside of policy. Schools will need to understand the specific risks to the individuals affected, which can include emotional, reputational or physical harm. Based on the investigation and risk assessment, schools will need to decide whether there has been a breach which needs to be reported to regulators, including data protection and safeguarding authorities. 
As the cyberattacker is demanding payment to prevent the inappropriate images from being released, the school should also escalate this to the police. Schools may also need to inform the affected students and their parents, explaining what has happened, what steps are being taken by the school and any steps that the individuals can take to protect themselves. Schools should also review all relevant policies to ensure that they cover this type of Scenario, and introduce or update them where necessary to prevent it from happening again (as mentioned above). In this Scenario, we know that the school did not have a policy in place, so the school should discover this as part of the Breach Response (if not before). Breaches happen, but responding to them effectively minimises the risks to those impacted, preserves trust where possible and can stop them happening again, which is why this Safeguard is important. 
  • Training/Awareness Activities: A school provides training and awareness-raising activities to relevant individuals, including on how AI systems work, how they should be used, and what the limitations and potential risks are of using AI. We’ve highlighted that training and awareness may have been an Issue in this Scenario, because images of staff and students were uploaded to the school’s social media site despite the risks, and because the school did not have an Image Capture and Use Policy in place. Schools should provide training and awareness-raising activities to various individuals at the school following this type of Scenario, to prevent it happening again. If the school introduces an Image Capture and Use Policy, then they will need to provide training on it to all relevant individuals who will be taking or handling images on behalf of, or at, the school. They should also make anyone who will have their photograph taken in the context of the school aware of this policy, including the parents and guardians of children who will have their images captured. Training may also need to be provided to the individuals at the school who were responsible for ensuring that the school had an appropriate Image Capture and Use Policy, which is likely to be the school’s leadership team, including the Headteacher/Principal. These individuals should have been aware that the school needed to have a policy in place, and ensured that it covered the increasing risks to individuals which AI presents. 

Are there any other Issues and Safeguards that we might have selected? 

Because there are no right answers in Turing Trials, these don’t have to be the Issues and Safeguards that you choose. You may also have chosen: 

  • Issue: Lack of Transparency and Safeguard: Transparency. Whilst this Scenario doesn’t specifically highlight that there has been a lack of transparency by the school in the way that photos of staff and students would be used, do you think that this would be an Issue that schools would need to investigate? If photos of students were used by a cyberattacker to create inappropriate images, do you think it would have made a difference in this Scenario whether the school had been transparent that images of them would be made publicly available online (including the risks of this)? 
  • Issue: Re-Purposing of Personal Data and Safeguard: Purpose Review. If the images of staff and students were originally taken for a different purpose and then used to showcase the school to prospective students, would this have made a difference in this Scenario? Would the teacher have potentially been breaching any other policies by using these images for a different purpose than the one for which they were originally taken? 

Identifying the Risk Level and making a Decision 

As the game unfolds, at different points it is the role of The Risk Analyst to assess the level of risk that the Scenario presents based on the Issues and Safeguards that have been selected, deciding whether this presents a low, medium or high risk to the school. Turing Trials deliberately does not specify what defines each level of risk, as this will differ between schools and the groups that are playing, but you may want to consider what would impact your Risk Level decisions. Would it matter whether the school had the consent of the students to post the images online? Would it make a difference depending on the number of students impacted? At the end of the game, The Narrator and Decision Maker will need to decide whether they would accept the Risk Level of this Scenario, with the Issues highlighted and Safeguards put in place, on behalf of the school. What decision do you think you would make and why? 

What else do schools need to consider and how else can 9ine help?

Navigating the increasing and evolving risks of AI can be difficult for schools, particularly where they want to keep sharing visual evidence of the successes and experience of the school community more broadly. At 9ine we have a number of solutions that can support you in this evolving threat landscape. These include: 

  • Academy LMS: If anything in this Scenario makes you think that your school needs to improve its AI literacy, 9ine’s on-demand training and certification platform enables schools to enrol individual staff members or entire groups in comprehensive training courses, modules, and assessments, featuring in-built quizzes for knowledge checks. Our AI Pathway is your school’s learning partner for AI ethics and governance. With over 20 differentiated course levels, you can enrol all staff in an introductory course to AI, then, for those staff with greater responsibility, enrol them in Intermediate and Advanced courses. Schools can purchase courses on a per-person, per-course basis, and we are currently offering free trials for up to three members of a school’s leadership team, so contact us if you would like to take advantage of this, or have any questions about Academy LMS. 
  • Incident Management: Whether you are an existing client, or a new client that needs support, our consultants can support you in managing breaches end-to-end when they occur. They can advise you on what you need to do to investigate and remedy a breach, whether it is reportable to regulators, and whether it is notifiable to the individuals impacted. They can also help you respond to communications from vendors and meet reporting requirements to regulators, making sure that the information you provide is fit for purpose. Contact us if you have any questions, or need support.