
Turing Trials Scenario 8: Locked in by the algorithm: How AI can reinforce classroom inequality

Written by 9ine | Dec 19, 2025 8:45:14 AM

Welcome to our next instalment of the ‘Turing Trials Walk-Throughs’, where, between now and the end of the year, we will take you through each of the ten scenarios we currently have for Turing Trials, discussing some of the risks, issues and safeguards you will need to consider for each. If you haven’t downloaded it already, Turing Trials is available to download for free from our website, including full instructions on how to play the game. In this blog, we will take you through Scenario 8, which concerns schools using AI systems to recommend support materials for students.

The Scenario: ‘A school implements an AI system to automatically recommend support materials to accompany lesson plans based on the individual’s academic ability. The teacher realises that those with a lower academic ability are not being challenged to read more advanced materials, meaning that their academic ability is not improving compared with those with a higher ability. The teacher does not understand how the system works and so has to manually provide further materials to all students to make sure that they receive the materials they need.’

AI in the classroom

AI can bring many opportunities to education, and in particular to teachers in the classroom. Teachers can use AI as a planning and support tool to save time, increase creativity and help tailor learning materials to student needs. AI can help teachers design and refine lessons by generating lesson outlines aligned to learning objectives. AI systems can also suggest starter activities or discussion questions, offer alternative explanations for difficult concepts and adapt lessons for different age groups or abilities. AI can also be used to provide differentiated support materials, such as simplified explanations and stretch tasks for high-attaining students. But there are also risks in teachers using AI systems for these purposes, as they can be inaccurate and unreliable, and can provide incorrect, misleading or incomplete information. If the data an AI system is trained on contains errors, outdated information or biases, the AI can repeat those mistakes. With these opportunities and risks in mind, let’s take a look at the risks, issues and safeguards that might emerge from Turing Trials Scenario 8.

What ‘Issues’ does this Scenario present? 

Turing Trials currently has fifteen Issues cards, and it is the role of the group playing to discuss what they think the top three Issues associated with this Scenario are. Ultimately it is the role of The Investigator to select the final three that are played in the game. There is no ‘right’ answer in Turing Trials, but it is important for the group to discuss and justify which Issues they think that this Scenario presents and why. Some of the Issues that might be highlighted as part of this Scenario are:

  • Bias and Discrimination: The use of AI may have led to bias or discrimination in relation to an individual or group. AI systems can unintentionally create inequality and reinforce existing levels of academic ability when they are used to recommend support materials in schools. This happens because of how these systems are designed, trained and used. AI systems learn from historical data (past attainment, test scores, behaviour records etc.), which can mean that students who have previously struggled may keep being offered lower-level or less challenging materials. If the training data also reflects existing inequalities (e.g. those linked to socio-economic background, language or SEND), the AI may reproduce and amplify those patterns. AI systems also often use fixed or narrow measures of ability, relying on quantifiable indicators such as grades, test results and completion rates. This means that they may not capture an individual student’s potential, improvement over time or context (such as illness, language barriers or home circumstances) and can ‘lock’ students into ability categories. AI systems can also create self-reinforcing feedback loops: when an AI assigns easier materials, the student has fewer opportunities to access more challenging content, their progress may appear slower, and the AI continues to recommend low-level materials (the short sketch after this list illustrates how quickly this loop can widen the gap). AI recommendations can also influence how students are perceived by teachers, as well as by themselves. Being constantly offered ‘basic’ materials may lower expectations, affect motivation and confidence, and reinforce fixed mindsets about ability. In this Scenario, it states that ‘The teacher realises that those with a lower academic ability are not being challenged to read more advanced materials, meaning that their academic ability is not improving compared with those with a higher ability.’ This indicates that bias and discrimination is an Issue that the school will need to investigate.
  • Lack of Explainability: It is not possible to explain how the AI system works or why it has made a certain decision. AI systems are inherently flawed because they rely on imperfect data and incomplete assumptions, meaning that they can never be completely accurate, neutral or objective, and cannot grasp nuance, emotion or context. They are also often described as ‘black boxes’: whilst it is often possible to describe the data that is fed into an AI, and the decisions or predictions it has made, it is usually very difficult to understand exactly how the AI has arrived at these. Lack of explainability creates practical, ethical and legal issues and can erode trust, hide bias, reduce accountability, limit error detection, and hinder informed human oversight. In this Scenario, it states that ‘The teacher does not understand how the system works…’, which means that they are not able to understand or explain why ‘those with a lower academic ability are not being challenged to read more advanced materials’. Just because the teacher does not understand how the AI system works does not mean that no-one at the school can (or that the vendor cannot provide support), but someone should be able to explain how it works and why it recommended certain materials. Whether it is just the teacher who cannot understand how it works, or whether no-one at the school can, is an Issue that needs to be investigated.
  • Inappropriate Interactions: The AI provides individuals with inappropriate or incorrect content or interactions. Because AI systems can be inaccurate and unreliable, they can provide incorrect, misleading or incomplete information. Also, because they often use narrow measures of ability, lack context and create self-reinforcing feedback loops, there is a real risk that they might provide content, make decisions or have interactions which a teacher would not. In this Scenario, it states that ‘The teacher realises that those with a lower academic ability are not being challenged to read more advanced materials, meaning that their academic ability is not improving compared with those with a higher ability. The teacher does not understand how the system works and so has to manually provide further materials to all students to make sure that they receive the materials they need.’ Because of this, it is clear that the teacher does not agree with the materials that the AI system is recommending and would consider its outputs ‘incorrect’, making this an Issue that needs to be investigated.
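
To make the feedback loop described under Bias and Discrimination more concrete, here is a minimal, purely illustrative sketch in Python of a toy recommender. The 60-mark threshold, the growth rates and the starting scores are all invented for illustration, and this is not how the system in the Scenario (or any real product) necessarily works; the point is simply that a rule which only offers challenging material to students who already score well can widen the gap term by term.

```python
# Toy model only: the threshold, growth rates and starting scores are invented.
import random

random.seed(1)

def recommend(score):
    # Hypothetical rule: only students already scoring 60+ get advanced material.
    return "advanced" if score >= 60 else "basic"

def progress(score, material):
    # Assumption for illustration: challenging material produces faster gains.
    gain = random.uniform(2, 5) if material == "advanced" else random.uniform(0, 1)
    return min(100, score + gain)

students = {"Student A": 75, "Student B": 50}   # B starts below the cut-off
for term in range(1, 7):
    for name, score in students.items():
        students[name] = progress(score, recommend(score))
    print(f"Term {term}:", {n: round(s, 1) for n, s in students.items()})

# Student B is never offered 'advanced' material, so the gap keeps widening:
# the self-reinforcing loop described above.
```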

What Safeguards might a school use for this Scenario?

Turing Trials also has Safeguards cards, and it is also the role of the group to discuss which three Safeguards they want to put in place to respond to the Issues which The Investigator has highlighted. It is ultimately the role of The Guardian to select the final three that are played in the game. There is no ‘right’ answer, but it is important for the group to discuss which Safeguards they think are the most important to put in place for this Scenario.

The Safeguards cards are deliberately designed to each mitigate at least one of the Issues cards, but as there is no ‘right’ answer, The Guardian does not have to select the three Safeguards which match the Issues selected by The Investigator. Some of the Safeguards that might be highlighted as part of this Scenario are: 

  • Bias/Discrimination Countering: The school takes steps to reduce bias in the AI system, including any discriminatory effects of using it, e.g. by making training data more diverse or representative. Being aware of the risks of bias and discrimination relating to these types of AI systems, the school could put this Safeguard in place to mitigate against them. When training the system, they could have ensured that it used multiple measures of ability, rather than relying on a limited number of data points. For example, in addition to test scores, they could have included teacher assessments, formative assessments and classwork, as well as data about effort, engagement and improvement over time, and pastoral insights and SEND considerations, to reduce the risk of fixed or inaccurate ability labelling. Schools could also regularly audit the recommended support materials for bias across all use of the AI system at the school (a simple sketch of such an audit appears after this list). Not only would this proactively identify any issues with bias and discrimination, it would mean that these could be mitigated for the whole school. In this Scenario we know that the teacher starts to manually provide ‘further materials to all students to make sure that they receive the materials they need’, but this does not necessarily mean that all teachers at the school are doing this, which may leave the school and other students at risk if this Safeguard is not in place.
  • Human Oversight: The school makes sure that a human monitors how the AI system is working, to ensure that appropriate content is provided and that the system is working as planned. Human Oversight is about people staying in control of how the AI is used and stepping in when needed, meaning that schools need to make sure that individuals at the school have the ability to do this, and actually do it, in practice. In this Scenario the teacher ‘does not understand how the system works and so has to manually provide further materials to all students to make sure that they receive the materials they need’. In some ways this can be considered Human Oversight, because the teacher has intervened to ensure that the students in their classes get the support materials that they need. But this is not the most efficient way to use the AI system, and it does not mean that other teachers are also doing this. It also means that the school is paying for an AI system that is not being used. To use this Safeguard, the school could assign roles and responsibilities for human oversight to individuals at the school, and ensure that they have the necessary expertise to fulfil them. This expertise would include understanding how this specific AI system works, its limitations, and how to intervene safely if an incident occurs. This might also involve giving different roles and responsibilities for human oversight to different individuals at the school. For example, the IT team might be responsible for auditing the system proactively, to ensure that it is working as intended, with teachers having the responsibility of overseeing the more pedagogical aspects of how it is working and escalating any issues that they discover. Schools need to make sure that they use AI recommendations as support, not as definitive and isolated decisions, by making sure that there is human oversight and that the teacher ultimately remains in control of, and accountable for, academic progress.
  • Explainable AI: The school makes sure that the AI system can provide clear and understandable explanations for its operations, decisions and outputs, so that individuals can comprehend and explain how the AI system reached a particular conclusion. Explainability goes beyond the school being able to check the information that the AI system provided: it is about understanding exactly how and why an AI system produced a particular output. Schools should only use AI systems that have features built in which provide clear and understandable explanations of their operations, decisions and outputs. To use this Safeguard, the school would make sure that the AI system included features that explained what information it relied on, and the patterns of reasoning it followed, when recommending support materials to students. As most AI tools that schools use are provided by third-party EdTech vendors, the school should have made sure that the AI system could do this as part of the vendor vetting process. If the school makes sure that the AI is explainable, then the teacher (or someone at the school) would be able to understand why ‘those with a lower academic ability are not being challenged to read more advanced materials’. If this was an error, then the school could correct the AI system (potentially by training it with more diverse data) or report the issue to the vendor. Ultimately this Safeguard would mean that the school would not be using a biased AI system, or paying for one which teachers were not using.
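
The Bias/Discrimination Countering card mentions regularly auditing the recommendations the system makes. As a rough illustration, the sketch below compares how often each group of students is recommended ‘advanced’ material and flags large gaps for human review. The data layout, the group labels and the 20-percentage-point threshold are all assumptions made up for this example; a school would use whatever export its own system provides and agree its own thresholds.

```python
# Illustrative audit sketch: the data, group labels and threshold are invented.
from collections import defaultdict

# Assumed export from the AI system: (student_group, recommended_level) pairs.
recommendations = [
    ("Group 1", "basic"),    ("Group 1", "basic"),    ("Group 1", "advanced"),
    ("Group 2", "advanced"), ("Group 2", "advanced"), ("Group 2", "basic"),
]

counts = defaultdict(lambda: {"advanced": 0, "total": 0})
for group, level in recommendations:
    counts[group]["total"] += 1
    counts[group]["advanced"] += (level == "advanced")

rates = {group: c["advanced"] / c["total"] for group, c in counts.items()}
print("Share of 'advanced' recommendations per group:", rates)

# Simple rule of thumb for this sketch: a gap of more than 20 percentage points
# between groups is escalated for human review.
if max(rates.values()) - min(rates.values()) > 0.20:
    print("Disparity above threshold: escalate for human review")
```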

Are there any other Issues and Safeguards that we might have selected? 

Because there are no right answers in Turing Trials, these don’t have to be the Issues and Safeguards that you choose; you may also have chosen:

  • Issues: Process Not Followed and Safeguards: Repeat or Complete a Process. If the school procured the AI system from a third-party vendor, and had followed an appropriate Vendor Management Process, what could they have asked the vendor to make sure that the AI system was explainable? What else could they have asked the vendor about their mitigations for bias and discrimination? Would following an ethics by design process have helped the school in this Scenario?
  • Issues: Lack of Training/Awareness and Safeguards: Training/Awareness Activities. If individuals at the school had the appropriate level of AI literacy, would they have purchased an AI system that no-one at the school could explain the workings of? Would the teacher be using an AI system with students that they could not understand and explain? Would the teacher have understood that there may have been alternatives to providing the materials to students manually?

Identifying the Risk Level and making a Decision 

As the game unfolds, at different points it is the role of the Risk Analyst to assess the level of risk that the Scenario presents based on the Issues and Safeguards that have been selected, deciding whether it poses a high, medium or low risk to the school. Turing Trials deliberately does not specify what defines each level of risk, as this will differ between schools and the groups that are playing, but you may want to consider what would influence your Risk Level decisions. Does it make a difference how many students were being impacted? Does it make a difference that the teacher was manually providing the correct materials for their class? At the end of the game, The Narrator and Decision Maker will need to decide whether they would accept the Risk Level of this Scenario, with the Issues highlighted and Safeguards put in place, on behalf of the school. What decision do you think you would make and why?

What else do schools need to consider and how else can 9ine help?

The use of AI can bring many opportunities for teachers and schools, but only if it is used safely, securely and compliantly. At 9ine we have a number of solutions that can support you in doing this, including: 

  • Vendor Management: We’ve discussed the importance of vetting vendors for compliance when using AI, particularly to ensure that their systems’ workings are explainable. This vetting takes time and effort, which is where Vendor Management, 9ine’s centralised system to assess and monitor the compliance of all your EdTech vendors, supports you. This intelligence saves schools hundreds of hours of manual review and helps ensure you’re only using EdTech that meets required standards, or highlights the safeguards and mitigations that schools need to put in place. Vendor Management lets you easily identify risks and take action, whether that means engaging a vendor for improvements or configuring the tool for safety. Contact us if you would like to find out more.
  • Academy LMS: If anything in this Scenario makes you think that your school needs to improve its AI literacy, 9ine’s on-demand training and certification platform enables schools to enrol individual staff members or entire groups in comprehensive training courses, modules and assessments, featuring in-built quizzes for knowledge checks. Our AI Pathway is your school’s learning partner for AI ethics and governance. With over 20 differentiated course levels (including modules on Vendor Management), you can enrol all staff in an Introductory course on AI, and then enrol those staff with greater responsibility in Intermediate and Advanced courses. Schools can purchase courses on a per-person, per-course basis, and we are currently offering free trials for up to three members of a school’s leadership team, so contact us if you would like to take advantage of this, or have any questions about Academy LMS.