Turing Trials Scenario 6: Sustainable AI in Education
Join 9ine for the next of our ‘Turing Trials Walk-Throughs’, where we take you through Scenario 6, discussing the risks, issues and safeguards...
9ine · Dec 19, 2025 · 10 min read
Welcome to the next instalment of our ‘Turing Trials Walk-Throughs’, where, between now and the end of the year, we will take you through each of the ten scenarios we currently have for Turing Trials and discuss some of the risks, issues and safeguards you will need to consider for each. If you haven’t already, Turing Trials is available to download for free from our website, including full instructions on how to play the game. In this blog, we will take you through Scenario 6, which concerns the importance of sustainable AI in education.
The Scenario: ‘A school begins using an AI system to control the school’s heating, deciding the temperature based on environmental inputs. No personal data is used but the hardware the system relies on cannot be recycled and it uses vast amounts of electricity. The individual purchasing the system did not ask the vendor about the system’s energy consumption as part of the Vendor Management Process as they did not know that this might be a risk with the use of AI.’
There are high hopes that AI can help tackle some of the world’s biggest environmental emergencies, but the explosion of AI and its associated infrastructure also takes a toll on the environment. From powering the data centres that house AI servers to the water consumed to cool them, the widespread availability of AI has led to increased carbon dioxide emissions, pressure on electricity grids and the depletion of natural resources that are becoming scarce in many places. AI infrastructure also relies on high-performance, specialised and short-lived computing hardware, whose manufacture and transport produce further emissions.
It is important for schools to use AI systems that are sustainable, because schools have a long-term duty of care to students, staff and society. But environmental sustainability is not the only consideration. Schools also need to ensure that AI systems are financially sustainable, so that they can justify spending on them to school boards, governors, regulators and parents. Unsustainable AI systems can require expensive long-term licences, lock schools into specific vendors and scale costs unpredictably as usage increases. Schools also need to ensure the educational sustainability of AI tools: that they are reliable over time (not short-term tools with no long-term support) and that they won’t introduce constant platform changes which disrupt learning.
Schools help to shape future citizens, who will live in a world where AI continues to be prevalent, which is why the sustainability of AI is an important issue. Using sustainable AI demonstrates ethical technology use by the school, encourages responsibility, fairness and environmental awareness, and helps students to understand the long-term impacts of technology on the planet and society.
Because sustainable AI matters to schools, and because most of the AI systems schools use will be procured from third-party vendors, vetting vendors for compliance is critical. Schools need to ensure that the AI systems they use are sustainable as well as compliant, that any risks of introducing an AI system are mitigated where possible, and that any residual risks are far outweighed by the benefits the system brings to the school. Before engaging a vendor, schools should define the purpose of the AI system, including the expected outcomes and performance standards from using it (the benefit); AI can bring many opportunities to schools, but it shouldn’t be introduced just for the sake of it. Schools will also need to assess the vendor for compliance with the data protection, privacy, cybersecurity, safeguarding and AI requirements the school is subject to, and ensure that an appropriate legal agreement is in place with them. Because this vetting (of all the vendors a school uses) can take hundreds of hours of manual review, 9ine’s Vendor Management saves schools time and effort, and helps ensure that they are only using EdTech that meets required standards.
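To make the vetting steps above concrete, here is a minimal illustrative sketch of recording a vendor assessment as a checklist and surfacing the outstanding items. This is not 9ine’s Vendor Management product; the checklist fields are assumptions drawn from the questions discussed above, including the energy-consumption question that was missed in Scenario 6.

```python
# Hypothetical vendor-vetting checklist; the field names are illustrative
# assumptions, not taken from 9ine's Vendor Management product.
from dataclasses import dataclass, fields


@dataclass
class VendorAssessment:
    purpose_defined: bool               # expected outcomes and performance standards agreed
    data_protection_compliant: bool     # meets the school's data protection requirements
    cybersecurity_reviewed: bool
    safeguarding_reviewed: bool
    energy_consumption_disclosed: bool  # the question not asked in Scenario 6
    hardware_recyclable: bool
    legal_agreement_in_place: bool


def unresolved_risks(assessment: VendorAssessment) -> list[str]:
    """Return the names of checklist items that are still outstanding."""
    return [f.name for f in fields(assessment) if not getattr(assessment, f.name)]


# Scenario 6 as described: compliant on data, but sustainability never checked.
scenario6 = VendorAssessment(
    purpose_defined=True,
    data_protection_compliant=True,
    cybersecurity_reviewed=True,
    safeguarding_reviewed=True,
    energy_consumption_disclosed=False,  # the purchaser never asked the vendor
    hardware_recyclable=False,
    legal_agreement_in_place=True,
)

print(unresolved_risks(scenario6))
# ['energy_consumption_disclosed', 'hardware_recyclable']
```

The point of the sketch is simply that sustainability questions belong on the same checklist as data protection and safeguarding: if they are never asked, the gap only appears after the system is in use.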
With sustainability and vendor vetting in mind, let’s take a look at the issues, risks and safeguards associated with Turing Trials Scenario 6.
Turing Trials currently has fifteen Issues cards, and it is the role of the group playing to discuss what they think are the top three Issues associated with this Scenario. Ultimately it is the role of The Investigator to select the final three that are played in the game. There is no ‘right’ answer in Turing Trials, but it is important for the group to discuss and justify which Issues they think this Scenario presents and why. Some of the Issues that might be highlighted as part of this Scenario are:
Turing Trials also has Safeguards cards, and it is also the role of the group to discuss which three Safeguards they want to put in place to respond to the Issues which The Investigator has highlighted. It is ultimately the role of The Guardian to select the final three that are played in the game. There is no ‘right’ answer, but it is important for the group to discuss which Safeguards they think are the most important to put in place for this Scenario.
The Safeguards cards are deliberately designed to each mitigate at least one of the Issues cards, but as there is no ‘right’ answer, The Guardian does not have to select the three Safeguards which match the Issues selected by The Investigator. Some of the Safeguards that might be highlighted as part of this Scenario are:
Because there are no right answers in Turing Trials, these don’t have to be the Issues and Safeguards that you choose; you may also have chosen:
As the game unfolds, at different points it is the role of The Risk Analyst to assess the level of risk that the Scenario presents based on the Issues and Safeguards that have been selected, deciding whether it poses a high, medium or low risk to the school. Turing Trials deliberately does not specify what defines each level of risk, as this will differ between schools and the groups that are playing, but you may want to consider what would influence your Risk Level decisions. Does it make a difference that the AI system does not use personal data? At the end of the game, The Narrator and The Decision Maker will need to decide, on behalf of the school, whether they would accept the Risk Level of this Scenario with the Issues highlighted and Safeguards put in place. What decision do you think you would make, and why?
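Turing Trials deliberately leaves the risk levels undefined, but as one purely hypothetical rubric a group might treat residual risk as the Issues left unmitigated by Safeguards, with personal-data use raising the stakes. The thresholds below are illustrative assumptions, not part of the game’s rules:

```python
# Hypothetical risk rubric for The Risk Analyst role. The thresholds and the
# personal-data adjustment are illustrative assumptions; Turing Trials itself
# deliberately does not define the risk levels.
def risk_level(issues: int, safeguards: int, uses_personal_data: bool) -> str:
    """Score residual risk as the Issues left unmitigated by Safeguards."""
    residual = max(issues - safeguards, 0)
    if uses_personal_data:
        residual += 1  # personal data raises the stakes for a school
    if residual >= 2:
        return "high"
    if residual == 1:
        return "medium"
    return "low"


# Scenario 6: three Issues, three Safeguards, and no personal data in use.
print(risk_level(issues=3, safeguards=3, uses_personal_data=False))  # low
```

Under this toy rubric, the fact that Scenario 6 uses no personal data pulls the residual risk down, which is exactly the kind of judgement the group should debate rather than take as given.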
We’ve discussed the importance of being aware of the risks associated with using AI, and of only using AI systems that are sustainable, including vetting vendors to ensure that the AI systems they provide are. With these risks constantly emerging and evolving, this can be a complex task, requiring time and expertise that your school may not have. At 9ine we are here to help schools remain safe, secure and compliant, and we have a number of solutions that can support you in doing this, including: