
Turing Trials Scenario 9: Unequal access, unequal advantage


Welcome to the next instalment of the ‘Turing Trials Walk-Throughs’, where we take you through each of the ten scenarios we currently have for Turing Trials, discussing some of the risks, issues and safeguards you will need to consider for each. If you haven’t downloaded it already, Turing Trials is available for free from our website, including full instructions on how to play the game. In this blog, we will take you through Scenario 9, which concerns the potential for AI to widen the digital divide in schools.

The Scenario: ‘A teacher realises that some students have access to a paid-for AI system which the school does not provide access to, but that supports them in checking the grammar of their work and recommending learning materials, resulting in improved academic success. The teacher consults with the leadership team who say that the school does not have the budget to make this tool available to all students and there is nothing in the Safeguarding Policy which requires them to do so.’

Safeguarding students in the age of AI 

Safeguarding refers to the measures and responsibilities schools have in place to protect children and young people from harm, abuse, and neglect, and to promote their overall wellbeing. It goes beyond simply responding to abuse: it is about creating a safe environment where students can learn, develop, and thrive. The increasing use of AI in education can bring many opportunities to schools, but there are specific safeguarding risks that must be carefully managed. These include potential exposure to inappropriate or harmful content, risks of bias and discrimination, data privacy and misuse concerns, and students’ over-reliance on AI for emotional and wellbeing support.

In schools, equity and safeguarding are closely connected. Equity means ensuring that all students have fair access to learning opportunities, resources, support, and guidance, so that they can reach their full potential regardless of their background, ability, or personal circumstances. Safeguarding is not just about preventing harm; it is also about ensuring all students have fair access to the support, resources, and opportunities that protect their wellbeing. AI can be an issue for equity in schools because, used without careful oversight, it can reinforce existing inequalities and limit opportunities for some students. The risks range from biased training data, which often reflects existing patterns of inequality, to fixed ability pathways that can reinforce the attainment gap, to unequal access to technology, where students with less access to devices, the internet, or supportive home environments benefit less from AI tools. This last issue, known as the ‘digital divide’, long predates AI, but it is exacerbated when some students cannot access this technology. Unequal access to resources can lead to educational disadvantage, social exclusion, or increased vulnerability, meaning that disadvantaged students may face greater risks to their wellbeing.

With AI’s impact on safeguarding and equity in schools in mind, let’s look at the risks, issues and safeguards that Scenario 9 presents.

What ‘Issues’ does this Scenario present? 

Turing Trials currently has fifteen Issues cards, and it is the role of the group playing to discuss what they think the top three Issues associated with this Scenario are. Ultimately, it is the role of The Investigator to select the final three that are played in the game. There is no ‘right’ answer in Turing Trials, but it is important for the group to discuss and justify which Issues they think this Scenario presents and why. Some of the Issues that might be highlighted as part of this Scenario are:

  • Lack of Equity: Students at a school have unequal access to the benefits of AI. To use AI effectively, students will at least need access to basic technology infrastructure: a computer, laptop, tablet or smartphone that is reliable, compatible with the software, and able to handle AI applications. They will also need stable and secure Wi-Fi, and access to applications that provide AI functionality, e.g. chatbots, virtual tutors, and content creation or data analysis tools. In this Scenario a ‘teacher realises that some students have access to a paid-for AI system which the school does not provide access to, but that supports them in checking the grammar of their work and recommending learning materials’. This means that there is a lack of equity at the school, because not all students have access to the same resources, as some cannot afford the paid-for AI system. Whilst schools are not responsible for ensuring identical access outside of the school environment, they are responsible for fairness, and for being aware of and addressing inequalities when it comes to resources. The Scenario also states that access to this paid-for AI tool has resulted in ‘improved academic success’. This makes it an Issue that the school will need to investigate further, to understand how much this lack of access is preventing these students from reaching their full potential due to their background, ability, or personal circumstances.
  • Bias and Discrimination: The use of AI may have led to bias or discrimination in relation to an individual or group. Whilst AI systems can be biased themselves, due to how they are designed, trained, and used, in this Scenario bias and discrimination are an issue because of structural and socio-economic bias. Structural bias is where ingrained prejudices in school systems, policies, and cultures disadvantage certain groups through unequal resources and opportunities, creating systemic barriers to success (rather than these being due to an individual’s failings). For example, this might occur where a school policy allows the use of paid-for AI tools without providing alternatives. Socio-economic bias in education is where students from lower-income backgrounds face systemic disadvantages due to fewer resources, contributing to widening achievement gaps and perpetuating cycles of poverty. For example, students from higher-income families can afford paid-for AI tools which provide them with an academic benefit, while others cannot. Whilst linked, the difference between these forms of bias is that structural bias is about systems and rules that produce unequal outcomes, whereas socio-economic bias is about beliefs and behaviours related to income and class. The fact that some students appear to be disadvantaged in this Scenario makes this an Issue that the school will need to investigate to understand whether structural or socio-economic bias is occurring.
  • Lack of Policy: There is not an appropriate policy, or aspect of a policy, in place which governs the use of AI or personal data. Policies (or aspects of policies) on equity in schools are important because they set clear expectations, guide fair decision-making, and help ensure that all students have access to the support and opportunities they need to succeed, regardless of background or circumstance. Without a clear policy (or aspect of a policy) on equity, a school risks creating or allowing unfair, inconsistent, and potentially harmful outcomes for students. These effects may be unintentional, but they can be serious and long-lasting. A lack of a policy on equity can lead to unequal access to resources and support, widened attainment gaps, inconsistent decision-making, and increased risk to the wellbeing and safeguarding of students. This Scenario states that ‘the school does not have the budget to make this tool available to all students and there is nothing in the Safeguarding Policy which requires them to do so’. A lack of policy makes it harder to demonstrate commitment to inclusion and fairness, making this an Issue that needs to be investigated, to look further into whether the school should have a policy on this, rather than simply accepting that the school does not need to do anything because there is not one.

What Safeguards might a school use for this Scenario?

Turing Trials also has Safeguards cards, and it is also the role of the group to discuss which three Safeguards they want to put in place to respond to the Issues which The Investigator has highlighted. It is ultimately the role of The Guardian to select the final three that are played in the game. There is no ‘right’ answer, but it is important for the group to discuss which Safeguards they think are the most important to put in place for this Scenario.

The Safeguards cards are deliberately designed to each mitigate at least one of the Issues cards, but as there is no ‘right’ answer, The Guardian does not have to select the three Safeguards which match the Issues selected by The Investigator. Some of the Safeguards that might be highlighted as part of this Scenario are: 

  • Policy Introduction/Update: The school introduces a new policy or updates an existing one to govern how AI systems should be used compliantly and ethically. The school in this Scenario could introduce an aspect of the Safeguarding (or other) Policy which covers equity (or review whether their current policies are fit for purpose when it comes to access to AI tools), to ensure that all students are supported fairly and safely, with no child being disadvantaged in a way that could affect their wellbeing. The school should include an explicit statement on equitable access in a policy, e.g. ‘The school is committed to ensuring all pupils have equitable access to learning resources and support.’ The policy should also reference inclusion and diversity, ensuring that SEND, EAL, or socio-economically disadvantaged students receive appropriate support. Having such a policy in place would mean that the school’s leadership team could not simply state that ‘there is nothing in the Safeguarding Policy which requires them to do so’; they would have to investigate whether this Scenario was leading to inequitable access to learning resources and support, taking action if they found that students were being disadvantaged. This statement would not necessarily have to be in the Safeguarding Policy, but including statements in it on fair access, inclusion, and support strengthens the policy and helps to protect students both academically and emotionally.
  • Training/Awareness Activities: The school provides training and awareness-raising activities to relevant individuals, including on how AI systems work and should be used, and what the limitations and potential risks or harms of using AI are. Training helps staff to understand that unequal access is not always obvious, particularly when it comes to the use of AI tools. Without training, teachers may assume that all students can use AI tools at home and may even set assignments which rely on high-speed internet access or newer devices. Training encourages educators to consider who can access AI tools outside of the school, and what to do if a student lacks access to these, allowing schools to react quickly where inequality arises or to plan alternatives to prevent it from happening, meaning that it is less likely that students will be disadvantaged. For example, in this Scenario, if the school could not make certain AI tools available to all students, they could set alternative assignments where these tools cannot be used. If schools are unaware of the benefits of AI, the resources and skills needed to use it, or the risks that it can pose to students, then they may not be able to do this. Training also supports consistent and equitable implementation of standards and policies. Without training, other teachers at the school may not have spotted that students had access to a paid-for AI tool (or that it was leading to improved academic success), meaning that raising and addressing this issue may have depended on which teacher a student had. Training could ensure that all staff had this awareness, making access more consistent and less dependent on chance. Ongoing training also promotes a culture of equity, where educators reflect on who benefits from AI (and who does not), prompting them to collect feedback from students and families and ensuring that practices are adjusted when inequities appear. It can also help schools to make more equitable choices when informing policy and making purchasing decisions. With training, they can be aware that they need to ask vendors about bias, data sources and accessibility, meaning that equity is built into AI adoption, not added on later. AI literacy is also important for students themselves, making sure that all students learn how to use AI tools effectively and safely and that this is embedded in class time, not assumed from the home experience. This is especially important because students from higher-income families are more likely to already use AI tools, which can widen achievement gaps. Providing education on AI means that students can gain equal opportunity to benefit from AI (regardless of background), understand which AI tools the school has to support them, and know how to use and access these safely.
  • Equitable Use: The school takes steps to ensure that the benefits of AI are equally available to everyone at the school. In addition to having a policy (or aspect of a policy) on equity which covers the use of AI tools in the education environment, and providing training to staff and students on AI, there are other steps the school can take to ensure that the benefits of AI are equally available to everyone. They can guarantee baseline access to AI tools for all students by providing school-managed devices that can run the required AI tools, offering on-campus access to AI tools during the school day, and avoiding making home access a requirement for graded work. Schools can also monitor for bias and the unequal impact of AI tools in the school environment, by tracking which students use AI tools and who benefits most, looking for patterns of academic progress and AI use by home income level, language background, disability or race, and adjusting implementation if certain groups are excluded or harmed. Schools could also make sure that they gather feedback from students and families about access and accessibility, to understand whether access to AI tools is equitable, or what more they could be doing to support equitable access. Equity improves when decisions reflect real-life experiences.

Are there any other Issues and Safeguards that we might have selected? 

Because there are no right answers in Turing Trials, these don’t have to be the Issues and Safeguards that you choose; you may also have chosen:

  • Issues: Lack of Training/Awareness: Whilst we have discussed the importance of this as a Safeguard that could be applied, should this also have been one of the top three Issues highlighted? Would you have selected this as more important than any of the three Issues that we have highlighted?
  • Safeguards: Bias/Discrimination Countering: Whilst we have discussed this as an Issue, are there steps that the school should also take in relation to countering bias and discrimination that may be different to the ones used to meet the Safeguards card of Equitable Use?

Identifying the Risk Level and making a Decision 

As the game unfolds, at different points it is the role of the Risk Analyst to assess the level of risk that the Scenario presents based on the Issues and Safeguards that have been selected, deciding whether this presents a high, low or medium risk to the school. Turing Trials deliberately does not specify what defines each level of risk, as this will differ between schools and the groups that are playing, but you may want to consider what would impact your Risk Level decisions. Does it make a difference how many students did not have access to this tool? Does it matter that it was only helping students to improve their grammar? Is this something that the school could have supported without the use of the AI paid-for tool? Would it matter if the AI tool was supporting students in other areas of academic ability? At the end of the game, The Narrator and Decision Maker will need to make the decision on whether they would accept the Risk Level of this Scenario with the Issues highlighted and Safeguards put in place on behalf of the school. What decision do you think you would make and why? 

What else do schools need to consider and how else can 9ine help?

We’ve discussed the importance of equitable access to software, hardware and AI tools themselves for students to be able to realise the benefits of AI, and why this is part of a school’s safeguarding responsibilities. AI becomes equitable in schools not through technology alone, but through thoughtful, inclusive human decisions. If anything in this Scenario has made you think that your school needs further support in this area, at 9ine we have a number of solutions that can help you. These include: 

  • Academy LMS: If anything in this Scenario makes you think that your school needs to improve its AI literacy, 9ine’s on-demand training and certification platform enables schools to enrol individual staff members or entire groups in comprehensive training courses, modules, and assessments, featuring in-built quizzes for knowledge checks. Our AI Pathway is your school's learning partner for AI ethics and governance. With over 20 differentiated course levels, you can enrol all staff in an Introductory AI course, then, for those staff with greater responsibility, enrol them in Intermediate and Advanced courses. Schools can purchase courses on a per-person, per-course basis, and we are currently offering free trials for up to three members of a school’s leadership team, so contact us if you would like to take advantage of this, or have any questions on Academy LMS.
  • Vendor Management: We’ve discussed the importance of vetting vendors for compliance when using AI, particularly asking vendors about bias, data sources and accessibility. This vetting takes time and effort, which is where Vendor Management, 9ine’s centralised system to assess and monitor the compliance of all your EdTech vendors, supports you. This intelligence saves schools hundreds of hours of manual review and helps ensure you’re only using EdTech that meets required standards, or that the safeguards and mitigations that schools need to put in place are highlighted. Vendor Management lets you easily identify risks and take action, whether that means engaging a vendor for improvements or configuring the tool for safety. Contact us if you would like to find out more.