
Turing Trials Scenario 7: Trust at Risk: What happens when a school AI chatbot shares the wrong information

Written by 9ine | Dec 19, 2025 8:28:43 AM

Welcome to the next instalment of the ‘Turing Trials Walk-Throughs’, where, between now and the end of the year, we will take you through each of the ten scenarios we currently have for Turing Trials and discuss some of the risks, issues and safeguards you will need to consider for each. If you haven’t downloaded it already, Turing Trials is available for free from our website, including full instructions on how to play the game. In this blog, we will take you through Scenario 7, which concerns the importance of human oversight, explainability and teacher awareness when a school is using an AI chatbot.

The Scenario: ‘A school provides an AI chatbot for parents and guardians to use when they have questions about the school’s policies, closure days etc. A parent says to a teacher that they think the AI chatbot is giving outdated policy information. The teacher was not aware that the school was using an AI chatbot. The school also has no way to check the information that the AI is providing.’

AI Chatbots in Schools

In our blog Turing Trials Scenario 4: The opportunities and risks of using chatbots and virtual mentors in education, we discussed the benefits and risks for students and schools of using AI-powered chatbots and virtual mentors. In that Scenario, students were using virtual mentors for help in the classroom, but chatbots can be used for other purposes at the school too. Whilst the terms are often used interchangeably, there is a difference between chatbots and virtual mentors (or tutors). Chatbots are general-purpose AI tools, designed to hold conversations, source and summarise information, and respond to questions in a human-like way. They are often used for information, administrative support or basic learning help. Virtual mentors/tutors, on the other hand, are more education-focused and structured, often aligned with the curriculum and designed specifically to support learning.

AI chatbots are already being used by major retailers to provide customer support and personalised shopping experiences, but there are opportunities for schools to realise their benefits too, particularly when it comes to communicating with parents and guardians. Schools can use chatbots to:

  • Automate parent enquiries and FAQs: Schools can deploy AI chatbots to answer routine questions from parents and guardians about school hours and term dates, admissions and enrolment processes, contact details for staff, etc.;
  • Provide multilingual support for families: Some chatbots can respond in multiple languages, helping parents who may not have English as their first language to get clear information and support, improving accessibility for diverse school communities;
  • Engage with parents and provide updates: Chatbots can be configured to share notices about parent–teacher meetings, reminders about events, deadlines or school trips, and general updates or newsletters; and 
  • Triage and escalate enquiries: Chatbots can answer simple queries and then route more complex questions (e.g. specific child support needs) to the appropriate human staff member, generating a summary of the parent’s question so staff don’t have to start from scratch (see the illustrative sketch below this list).
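To make the triage and escalation idea more concrete, here is a minimal, illustrative Python sketch. It is not taken from any particular chatbot product, and the names (FAQ_ANSWERS, triage_enquiry, Escalation) are hypothetical. It answers simple FAQs from a small store of reviewed answers, flags answers that have not been reviewed recently (the ‘outdated policy information’ risk in this Scenario), and escalates anything else to a human with a short summary.

# Illustrative sketch only: a minimal triage-and-escalation flow for parent
# enquiries. All names here are hypothetical, not from any specific product.
from dataclasses import dataclass
from datetime import date


@dataclass
class Escalation:
    """A query routed to a human staff member, with a short summary."""
    summary: str
    route_to: str


# Each answer records when it was last reviewed, so staff can spot
# (and update) outdated policy information.
FAQ_ANSWERS = {
    "term dates": ("Term ends on 18 July.", date(2025, 9, 1)),
    "school hours": ("The school day runs 08:30 to 15:30.", date(2025, 9, 1)),
}

REVIEW_INTERVAL_DAYS = 180  # assumed policy-review cycle


def triage_enquiry(question: str, today: date):
    """Answer simple FAQs directly; escalate anything else to staff."""
    for topic, (answer, last_reviewed) in FAQ_ANSWERS.items():
        if topic in question.lower():
            stale = (today - last_reviewed).days > REVIEW_INTERVAL_DAYS
            if stale:
                # Flag possibly outdated answers for human review rather
                # than presenting them to parents as current fact.
                return Escalation(
                    summary=f"FAQ '{topic}' may be outdated: {question}",
                    route_to="school office",
                )
            return answer
    # Complex or unrecognised questions go to a human with a summary.
    return Escalation(summary=f"Parent asked: {question}", route_to="school office")


if __name__ == "__main__":
    print(triage_enquiry("What are the term dates this year?", date(2026, 6, 1)))
    print(triage_enquiry("My child needs extra support in maths", date(2026, 6, 1)))

The ‘last reviewed’ date is one simple way of keeping a human in the loop: answers that have not been checked recently are routed to staff instead of being presented to parents as current fact, which connects directly to the human oversight discussion later in this blog.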

As chatbots can be available 24/7, parents and guardians can get instant answers without waiting for office staff. But despite the benefits of using chatbots, there are some risks which schools need to consider: 

  • Accuracy and reliability: Chatbots can provide incorrect, misleading or incomplete information, which means that individuals relying on these answers without verification may make poor decisions;
  • Data privacy and security: Chatbots often process personal data, which may include sensitive data (for example, about a student’s absence due to sickness). This means that the usual data protection and privacy risks of unauthorised access, data breaches and misuse of personal information can arise, especially where the chatbot is provided by a third-party vendor;
  • Safeguarding and inappropriate content: Some AI chatbots have been known to generate unsafe or inappropriate content (either intentionally or accidentally);
  • The digital divide: Not all students or families may have equal access to the technology required to use these chatbots. This can increase inequalities if AI chatbots are heavily relied on, particularly where schools choose to use them instead of more traditional communication channels; and
  • Miscommunication and misunderstanding: Chatbots may misinterpret questions or provide vague responses. Parents, students or staff can then become confused or frustrated, potentially affecting trust in their school.

With both the opportunities and the risks of using AI chatbots in schools in mind, let’s take a closer look at the risks, Issues and Safeguards associated with Turing Trials Scenario 7!

What ‘Issues’ does this Scenario present? 

Turing Trials currently has fifteen Issues cards, and it is the role of the group playing the game to discuss what they think the top three Issues associated with this Scenario are. Ultimately, it is the role of The Investigator to select the final three that are played in the game. There is no ‘right’ answer in Turing Trials, but it is important for the group to discuss and justify which Issues they think this Scenario presents and why. Some of the Issues that might be highlighted as part of this Scenario are:

  • Inappropriate Interactions: The AI provides individuals with inappropriate or incorrect content or interactions. We’ve mentioned that there are accuracy and reliability risks when using AI chatbots, and that they can provide incorrect, misleading or incomplete information. If the data the AI chatbot is trained on contains errors, outdated information or biases, the AI can repeat those mistakes. Chatbots also generate responses based on patterns, not comprehension. They don’t ‘know’ facts or verify information, so they can confidently provide wrong answers. They also have limitations in reasoning and context, meaning that they can omit important details, draw incorrect conclusions or ‘hallucinate’, fabricating information which sounds plausible but is false. If individuals ask questions that are unclear or ambiguous, the chatbot may guess what they meant. This Scenario states that ‘A parent says to a teacher that they think the AI chatbot is giving outdated policy information’. Knowing that this can happen with AI chatbots, the school would need to investigate this Issue to find out whether the policy information provided is outdated.
  • Lack of Human Oversight: Humans do not have the ability, or have not used the ability, to provide oversight of how the AI system is working (not including the decisions made by it about an individual). Because of the risks associated with using AI chatbots, people need to be actively involved in monitoring, guiding or controlling the AI system, to ensure that it behaves safely, fairly and as intended. Human oversight is about people staying in control of how the AI is used and stepping in when needed, rather than letting the AI act entirely on its own. This Scenario states that ‘The school also has no way to check the information that the AI is providing’, despite the parent’s concern that the chatbot is giving outdated policy information. Because the school cannot check the information that the AI is providing, there is no way for them to verify whether the AI system was working as intended, making this an Issue that needs to be investigated.
  • Lack of Training/Awareness: An individual has acted in a way that indicates that they are not aware of the risks of AI or how to use an AI System. With the risks that AI can introduce to schools, it is important that everyone at the school has the appropriate level of AI literacy in line with their role and responsibilities. AI literacy involves teaching people to understand, use and critically evaluate AI systems safely and effectively. In a school context, it’s about giving staff, students (and sometimes parents) the knowledge and skills to use AI responsibly. This Scenario states that ‘The teacher was not aware that the school was using an AI chatbot’, even though the parent clearly was. Whilst the teacher in this Scenario may not have been responsible for procuring the AI chatbot, or for providing human oversight of it, they should have been aware that it was being used by the school with the wider school community.

What Safeguards might a school use for this Scenario?

Turing Trials also has Safeguards cards, and it is the role of the group to discuss which three Safeguards they want to put in place to respond to the Issues which The Investigator has highlighted. It is ultimately the role of The Guardian to select the final three that are played in the game. There is no ‘right’ answer, but it is important for the group to discuss which Safeguards they think are the most important to put in place for this Scenario.

The Safeguards cards are deliberately designed to each mitigate at least one of the Issues cards, but as there is no ‘right’ answer, The Guardian does not have to select the three Safeguards which match the Issues selected by The Investigator. Some of the Safeguards that might be highlighted as part of this Scenario are: 

  • Human Oversight: The school makes sure that a human monitors how the AI system is working, to ensure that appropriate content is provided and that the system is working as planned (not including the decisions that it makes about individuals). Human oversight is about people staying in control of how the AI is used and stepping in when needed, meaning that schools need to make sure that individuals at the school have the ability to do this, and actually do it in practice. By putting this Safeguard in place, the school could have assigned roles and responsibilities for human oversight to individuals at the school, and ensured that they had the necessary expertise to provide it. This expertise would include understanding how this specific AI system works, its limitations, and how to intervene safely if an incident occurs. With this Safeguard in place, the teacher would have been able to verify whether the system was providing outdated policy information, or would have known who to contact at the school to do this. The school could also have been proactively providing human oversight of the AI system, ensuring that the data it is trained on is kept up to date, including copies of updated policies.
  • Explainable AI: The school makes sure that the AI system can provide clear and understandable explanations for its operations, decisions and outputs, so that individuals can comprehend and explain how the AI system reached a particular conclusion. Explainability goes beyond the school being able to check the information that the AI system provided; it is about understanding exactly how and why an AI system produced a particular output. Schools should only use AI systems that have features built in which provide clear and understandable explanations of their operations, decisions and outputs. This will include features that explain what information the system relied on, and the patterns of reasoning it followed, when acting in a certain way. As most AI tools that schools use are provided by third-party EdTech vendors, it will be an important part of the vendor vetting process to ensure that an AI system can do this. If the school had made sure that the AI was explainable, and the AI system had provided outdated policy information, it would have been able to understand why it did this. It could have been that the school had not included the updated policy in the training data for the AI system, or that the parent had asked to see a previous version of the policy.
  • Training/Awareness Activities: A school provides training and awareness-raising activities to relevant individuals, including on how AI systems work, how they should be used, and what the limitations and potential risks of using AI are. If the school in this Scenario had provided training and awareness-raising activities, ensuring that all individuals had the appropriate level of AI literacy for their role and responsibilities, then this situation would have been less likely to occur. If the individual(s) at the school responsible for procuring the AI chatbot had the appropriate level of AI literacy, then they would have understood the importance of vetting vendors and should have ensured that the AI had explainability built in. With the appropriate level of AI literacy, the school would also have made sure that individuals at the school provided human oversight of the AI system, to make sure that it was trained on relevant data and was working as expected throughout the time that the school uses it. The school would also have understood the importance of all staff having an awareness of the AI systems that the school was using, even if they were not responsible for these, or expected to interact with them directly. This awareness could have been raised through a communication to all staff, potentially with FAQs to help them respond to questions from students, parents and guardians. This would have made sure that the teacher in this Scenario was aware of the chatbot being used and could respond to (or escalate) the parent’s query, maintaining the parent’s confidence that the school is using AI responsibly.

Are there any other Issues and Safeguards that we might have selected? 

Because there are no right answers in Turing Trials, these don’t have to be the Issues and Safeguards that you choose; you may also have chosen:

  • Issues: Process Not Followed and Safeguards: Repeat or Complete a Process. Would it have made a difference if it had been a requirement of the Vendor Management Process to ensure that the AI system would allow the school to check its outputs? If the school had followed an ethics by design process, could it have made a difference in this Scenario? 
  • Issues: Lack of Transparency and Safeguards: Transparency. If the school was not able to check the information that the AI was providing, and there were potential gaps in the Vendor Management Process, would the school have been able to be fully transparent with parents and guardians about how any personal data used by the AI chatbot would be processed? 
  • Issues: Lack of Equity and Safeguards: Equitable Use. Could the school expect that all parents had access to the technology which would enable them to use the chatbot? Would schools need to investigate this, and ensure that they continued to make information available in another way? 

Identifying the Risk Level and making a Decision 

As the game unfolds, at different points it is the role of The Risk Analyst to assess the level of risk that the Scenario presents based on the Issues and Safeguards that have been selected, deciding whether this presents a high, medium or low risk to the school. Turing Trials deliberately does not specify what defines each level of risk, as this will differ between schools and the groups that are playing, but you may want to consider what would impact your Risk Level decisions. Does it make a difference which particular outdated policy information was provided? Would it matter if the parent had taken actions or made a decision based on outdated policy information? At the end of the game, The Narrator and Decision Maker will need to decide whether they would accept the Risk Level of this Scenario, with the Issues highlighted and Safeguards put in place, on behalf of the school. What decision do you think you would make, and why?

What else do schools need to consider and how else can 9ine help?

Navigating the increasing and evolving risks of AI can be difficult for schools, but at 9ine we have a number of solutions that can support you. These include:

  • Vendor Management: We’ve discussed the importance of vetting vendors for compliance when using AI, particularly to ensure that their workings are explainable. This vetting takes time and effort, which is where Vendor Management, 9ine’s centralised system to assess and monitor the compliance of all your EdTech vendors, supports you. This intelligence saves schools hundreds of hours of manual review and helps ensure that you’re only using EdTech that meets required standards, or that the safeguards and mitigations that schools need to put in place are highlighted. Vendor Management lets you easily identify risks and take action, whether that means engaging a vendor for improvements or configuring the tool for safety. Contact us if you would like to find out more.
  • Application Library: Application Library is a solution that enables all staff to access a central, searchable library of all EdTech in the school. The library contains all the information staff need to know about the AI in use (if any), privacy risks, safeguarding risks and cyber risks. With easy-to-add ‘How to’ and ‘Help’ guides, Application Library becomes a single, central digital resource. This is where information, FAQs and policies about the AI chatbot could be stored, so that the teacher could have responded to queries from parents. Through implementing Application Library, your school can also identify duplication in EdTech, reduce contract subscription costs and provide a workflow for staff to follow when requesting new EdTech. Contact us if you would like to find out more.
  • Academy LMS: If anything in this Scenario makes you think that your school needs to improve its AI literacy, 9ine’s on-demand training and certification platform enables schools to enrol individual staff members or entire groups in comprehensive training courses, modules and assessments, featuring in-built quizzes for knowledge checks. Our AI Pathway is your school's learning partner for AI ethics and governance. With over 20 differentiated course levels (including modules on Vendor Management), you can enrol all staff in an Introductory course on AI, and then enrol those staff with greater responsibility in Intermediate and Advanced courses. Schools can purchase courses on a per-person and per-course basis, and we are currently offering free trials for up to three members of a school’s leadership team, so contact us if you would like to take advantage of this, or have any questions about Academy LMS.