
Turing Trials Scenario 6: The importance of sustainable AI in schools


Welcome to the next installment of our ‘Turing Trials Walk-Throughs’, where between now and the end of the year we will take you through each of the ten scenarios currently available for Turing Trials, discussing some of the risks, issues and safeguards you will need to consider for each. If you haven’t already, Turing Trials is available to download for free from our website, including full instructions on how to play the game. In this blog, we will take you through Scenario 6, which concerns the importance of sustainable AI in education.

The Scenario: ‘A school begins using an AI system to control the school’s heating, deciding the temperature based on environmental inputs. No personal data is used but the hardware the system relies on cannot be recycled and it uses vast amounts of electricity. The individual purchasing the system did not ask the vendor about the system’s energy consumption as part of the Vendor Management Process as they did not know that this might be a risk with the use of AI.’

Sustainability of AI Systems

There are high hopes that AI can help tackle some of the world’s biggest environmental emergencies, but there is also a negative side to the explosion of AI and its associated infrastructure on the environment. From powering the data centres that house AI servers, to the consumption of water that is required to cool them, the widespread availability of AI has led to increased carbon dioxide emissions, pressures on the electric grid and the use of natural resources which are becoming scarce in many places. AI infrastructure also relies on high-performance, specialised and short-lived, computing hardware, which produces emissions to manufacture and transport.

It is important for schools to use AI systems that are sustainable, because they have a long-term duty of care to students, staff, and society. But environmental sustainability is not the only consideration. Schools also need to ensure that AI systems are financially sustainable, so that they can justify spending on them to school boards, governors, regulators and parents. Unsustainable AI systems can require expensive long-term licences, lock schools into specific vendors and scale costs unpredictably as usage increases. Schools also need to ensure the educational sustainability of AI tools, including that they are reliable over time (not short-term tools with no long-term support) and that they won’t introduce constant platform changes which disrupt learning.

Schools help to shape future citizens, who will live in a world where AI continues to be prevalent, which is why the sustainability of AI is an important issue. Using sustainable AI demonstrates ethical technology use by the school, encourages responsibility, fairness and environmental awareness, and helps students to understand the long-term impacts of technology on the planet and society.

The importance of Vendor Vetting when it comes to AI 

Because of the importance for schools of using sustainable AI, and the fact that most of the AI systems that schools use will be procured from third-party vendors, vetting vendors for compliance is critical. Schools need to ensure that the AI systems that they use are sustainable, as well as compliant. They need to make sure that any risks of introducing an AI system to the school are mitigated where possible, and that any residual risks are far outweighed by the benefits that the AI system brings to the school. Before engaging a vendor, schools need to define the purpose of the AI system, including the expected outcomes and performance standards from using it (the benefit). AI can bring many opportunities to schools, but it shouldn’t be introduced just for the sake of it. Schools will also need to assess the vendor for compliance with the data protection, privacy, cybersecurity, safeguarding and AI requirements that the school is subject to, and ensure that there is an appropriate legal agreement in place with them. Because this vetting (of all vendors the school uses) can take hundreds of hours of manual review, using 9ine’s Vendor Management saves schools time and effort, and helps ensure that they are only using EdTech that meets required standards.

With the importance of sustainability and vendor vetting, let’s take a look at the issues, risks and safeguards associated with Turing Trials Scenario 6. 

What ‘Issues’ does this Scenario present? 

Turing Trials currently has fifteen Issues cards, and it is the role of the group playing to discuss what they think the top three Issues associated with this Scenario are. Ultimately it is the role of The Investigator to select the final three that are played in the game. There is no ‘right’ answer in Turing Trials, but it is important for the group to discuss and justify which Issues they think that this Scenario presents and why. Some of the Issues that might be highlighted as part of this Scenario are: 

  • Process Not Followed: A process has not been followed, e.g. vendor management, ethics, privacy, or security by design, meaning that risks may not have been identified and mitigated. We have highlighted the importance of vendor vetting when it comes to the procurement of AI systems, to protect not only the school, but also the individuals using the system, or whose personal data will be shared with it. In this Scenario, it clearly states that ‘The individual purchasing the system did not ask the vendor about the system’s energy consumption as part of the Vendor Management Process as they did not know that this might be a risk with the use of AI.’ This means that either the individual has not followed the school’s Vendor Management Process correctly (as they did not ask the vendor about the AI system’s energy consumption), or the school’s current Vendor Management Process does not require them to do this. Given the importance of schools using sustainable AI, the school in the Scenario will need to investigate this, to understand where the issue lies. Also, as there has either been a gap in the individual’s knowledge of the Vendor Management Process, or a gap in the process itself, the school may also need to investigate whether there are any other process or knowledge gaps (such as gaps relating to compliance with the data protection, privacy, cybersecurity, safeguarding and AI requirements that the school is subject to).
  • Unsustainable AI use: The AI system uses resources in a way which harms the environment. We have discussed the sustainability issues which the use of AI can create, not only to the environment, but to financial and educational sustainability too. This Scenario states that ‘...the hardware the system relies on cannot be recycled and it uses vast amounts of electricity. The individual purchasing the system did not ask the vendor about the system’s energy consumption...’, which indicates that there may be an environmental sustainability issue with the AI-powered heating system. Because the individual has not asked the vendor for details about its energy consumption, they would have been unable to perform an analysis of whether the environmental impacts and risks are far outweighed by the benefits that the AI system brings to the school. This may also mean that there are other aspects of sustainability that the school has not assessed, making this an Issue that needs to be investigated.
  • Lack of Training/Awareness: An individual has acted in a way that indicates that they are not aware of the risks of AI or how to use an AI system. With the risks that AI can introduce to schools, it is important that everyone at the school has the appropriate level of AI literacy in line with their role and responsibilities. In this Scenario it states that ‘The individual purchasing the system did not ask the vendor about the system’s energy consumption as part of the Vendor Management Process as they did not know that this might be a risk with the use of AI.’ Given the risks of AI, and the importance of the school using AI systems that are sustainable (as well as compliant), the individual responsible for purchasing the system clearly did not have the level of AI literacy that they needed to do this, because they were not aware of the risks of using an AI system that was unsustainable. The lack of awareness of the risks of AI would also mean that the individual would not have been able to spot if the Vendor Management Process did not include this requirement. The school will need to investigate where the lack of training and awareness originated from, including whether training was provided to individuals, and if so, whether it was sufficient for their roles and responsibilities.

What Safeguards might a school use for this Scenario?

Turing Trials also has Safeguards cards, and it is also the role of the group to discuss which three Safeguards they want to put in place to respond to the Issues which The Investigator has highlighted. It is ultimately the role of The Guardian to select the final three that are played in the game. There is no ‘right’ answer, but it is important for the group to discuss which Safeguards they think are the most important to put in place for this Scenario.

The Safeguards cards are deliberately designed to each mitigate at least one of the Issues cards, but as there is no ‘right’ answer, The Guardian does not have to select the three Safeguards which match the Issues selected by The Investigator. Some of the Safeguards that might be highlighted as part of this Scenario are: 

  • Repeat or Complete a Process: Make sure that the school puts the AI system through a relevant process e.g. ethics by design, privacy by design, vendor management etc. Because of the importance of vendor vetting, to ensure AI systems are compliant and sustainable, in this Scenario the school could have repeated the Vendor Management Process, ensuring that it required the individual to get any information from the vendor needed to assess any risks relating to the sustainability of AI, and that they did this in practice. This would have meant that the individual would have asked about the system’s energy consumption as part of this process, as well as any other information required to assess the sustainability and compliance of the AI system. This would have allowed the individual to identify if there were any risks to the school of using the AI system, before they made a decision to purchase. 
  • Sustainability Check: A school makes sure that the use of AI is sustainable and that its use of resources is justified by the benefits that it brings to the school. As well as asking for the information from the vendor required to perform the analysis of whether an AI system is environmentally, financially and educationally sustainable, schools will also need to actually conduct this analysis. In this Scenario, if the individual had asked for the energy consumption of the AI heating system (as well as any other information required) then they would have been able to identify any risks to the school that using the AI system created. From this analysis, they would have been able to put mitigations in place for these risks (potentially via the vendor before signing an agreement with them), or ultimately decide that the benefits that the AI system would bring to the school are not justified by the risks that it presents.
  • Training/Awareness Activities: A school provides training and awareness-raising activities to relevant individuals, including on how AI systems work, how they should be used, and what the limitations and potential risks of using AI are. The school could ensure that all individuals had the appropriate level of AI literacy for their role and responsibilities, making it less likely that the individual would have been unaware of the environmental risks associated with using AI. This could be done through role-based training (which 9ine offers through Academy LMS), followed up with school-specific training and awareness-raising activities throughout the academic year. Providing training would also ensure that the individual had been made aware of other risks associated with using AI that they need to consider when selecting AI systems for the school. Ensuring that all individuals had the appropriate level of AI literacy would also mean that whoever is responsible for the Vendor Management Process would review it regularly, to ensure that it included steps to request all relevant information from the vendor, and how to analyse this information to identify any risks associated with using the AI system.
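As a rough illustration of the kind of analysis a Sustainability Check might involve, the sketch below turns figures a vendor could supply (power draw and running hours) into estimated annual electricity use, cost and emissions. All numbers, names and parameters here are illustrative assumptions for discussion, not real vendor data or a 9ine methodology; a school would substitute its own electricity tariff and local grid carbon-intensity figure.

```python
# Illustrative sketch only: estimate the yearly footprint of a system from
# vendor-supplied specs. All default figures (tariff, grid carbon intensity)
# are hypothetical placeholders a school would replace with local values.

def annual_energy_impact(power_kw: float,
                         hours_per_day: float,
                         days_per_year: int = 365,
                         tariff_per_kwh: float = 0.30,   # assumed price per kWh
                         kg_co2_per_kwh: float = 0.20):  # assumed grid intensity
    """Estimate yearly electricity use, cost and emissions for a system."""
    kwh = power_kw * hours_per_day * days_per_year
    return {
        "kwh_per_year": round(kwh, 1),
        "cost_per_year": round(kwh * tariff_per_kwh, 2),
        "kg_co2_per_year": round(kwh * kg_co2_per_kwh, 1),
    }

# Hypothetical example: a 2 kW AI heating controller running 24 hours a day.
impact = annual_energy_impact(power_kw=2.0, hours_per_day=24)
print(impact)  # roughly 17,500 kWh per year under these assumptions
```

Even a back-of-the-envelope figure like this gives the school something concrete to weigh against the benefits the system brings, and a baseline to compare competing vendors against.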

Are there any other Issues and Safeguards that we might have selected? 

Because there are no right answers in Turing Trials, these don’t have to be the Issues and Safeguards that you choose. You may also have chosen:

  • Issue: Lack of Policy. Safeguard: Policy Introduction/Update. Even if the individual had all of the information they needed to assess the risks relating to the AI system, which policy, or policies, at the school would the individual check to see whether the system could be purchased? Is your school confident that it has policies in place to cover all of the risks associated with the use of AI systems in schools?

Identifying the Risk Level and making a Decision 

As the game unfolds, at different points it is the role of the Risk Analyst to assess the level of risk that the Scenario presents based on the Issues and Safeguards that have been selected, deciding whether it presents a high, medium or low risk to the school. Turing Trials deliberately does not specify what defines each level of risk, as this will differ between schools and the groups that are playing, but you may want to consider what would impact your Risk Level decisions. Does it make a difference that the AI system does not use personal data? At the end of the game, The Narrator and Decision Maker will need to decide, on behalf of the school, whether they would accept the Risk Level of this Scenario with the Issues highlighted and Safeguards put in place. What decision do you think you would make and why?

What else do schools need to consider and how else can 9ine help?

We’ve discussed the importance of being aware of the risks associated with using AI, and of only using AI systems that are sustainable, including vetting vendors to ensure that the systems they provide are sustainable. With these risks constantly emerging and evolving, this can be a complex task, requiring time and expertise that your school may not have. At 9ine we are here to help schools remain safe, secure and compliant, and we have a number of solutions that can support you in doing this, including:

  • Contract: Schools often struggle with fragmented purchasing, departments operating in silos, and no clear visibility into total spending. Because of this, it can be difficult to understand whether the AI systems a school is using are financially sustainable. Without a centralised system, duplicate subscriptions, missed contract renewals, and unexpected costs become the norm, draining budgets and creating unnecessary stress. Contract enables schools to centralise EdTech spend for more effective financial management, all possible on the 9ine Platform. Contact us if you would like to find out more about how Contract can help your school. 
  • Academy LMS: If anything in this Scenario makes you think that your school needs to improve its AI literacy, 9ine’s on-demand training and certification platform enables schools to enrol individual staff members or entire groups in comprehensive training courses, modules, and assessments, featuring in-built quizzes for knowledge checks. Our AI Pathway is your school's learning partner for AI ethics and governance. With over 20 differentiated course levels (including modules on Vendor Management), you can enrol all staff in an introductory course on AI, then, for those staff with greater responsibility, enrol them in Intermediate and Advanced courses. Schools can purchase courses on a per-person and per-course basis, and we are currently offering free trials for up to three members of a school’s leadership team, so contact us if you would like to take advantage of this, or have any questions on Academy LMS.
  • Vendor Management: We’ve discussed the importance of vetting vendors for compliance when using AI, particularly to make sure that it is sustainable. This vetting takes time and effort, which is where Vendor Management, 9ine’s centralised system to assess and monitor the compliance of all your EdTech vendors, supports you. It saves schools hundreds of hours of manual review and helps ensure you’re only using EdTech that meets required standards, or highlights the safeguards and mitigations that schools need to put in place. Vendor Management lets you easily identify risks and take action, whether that means engaging a vendor for improvements or configuring the tool for safety and sustainability. Contact us if you would like to find out more.