
Turing Trials Scenario 10: When facial recognition gets it wrong

Written by 9ine | Jan 7, 2026 11:42:43 AM

Welcome to our next instalment of the ‘Turing Trials Walk-Throughs’, where we take you through each of the ten scenarios we currently have for Turing Trials and discuss some of the risks, issues and safeguards you will need to consider for each. If you haven’t downloaded it already, Turing Trials is available for free from our website, including full instructions on how to play the game. In this blog, we will take you through Scenario 10, which concerns the use of AI-assisted emotional recognition systems in the classroom to assess student emotions.

The Scenario: ‘A school is using an AI-assisted facial recognition system to infer how happy and engaged students are in lessons, so that those that are classed as disengaged can receive extra lessons. The students are not told that this is happening, nor was their consent obtained. They thought the cameras are just being used for attendance monitoring. A teacher believes that one black student that keeps being classed as disengaged is in fact very engaged in lessons.’

AI-assisted emotional recognition 

In our previous blog, we discussed Turing Trials Scenario 2, where a school was using AI-assisted facial recognition for attendance monitoring. In this Scenario, the school is looking to use it for a different purpose: inferring how happy and engaged students are in lessons.

There might be many motivations for schools to gather more data and information on how engaged students are in lessons, and given the limited time and resources that schools have, AI can seem attractive for a number of reasons. AI can provide real-time, measurable data which leaders may see as more objective than teachers’ own observations, and which can be aggregated across classes, subjects or time. It can also help teachers to identify when students appear bored, confused or disengaged, allowing them to adjust lesson pacing or teaching strategies and flag students who need additional support. Emotion-detection tools can also be used as early warning systems, identifying stress, anxiety or disengagement before problems escalate and supporting schools in their safeguarding responsibilities. Finally, this data can be used by schools to demonstrate to inspectors, boards or governments that teaching methods are effective, and to inform professional development or curriculum changes.

But despite the potential benefits of these systems, serious concerns about their use have been raised around the world. This was demonstrated back in 2018 when Hangzhou No.11 High School in China received backlash from students and parents. The school was originally using facial recognition to track when students arrived at and left the campus, picked up lunch and borrowed books. It then introduced the technology into the classroom to track students’ behaviour and read their facial expressions, grouping each face into one of seven emotions: anger, fear, disgust, surprise, happiness, sadness and ‘neutral’. Students disliked the constant monitoring, learned how to ‘game the system’ and complained, while parents raised concerns over privacy. This reaction led to the system being paused, and to the Chinese government stating in 2019 that it planned to ‘curb and regulate’ the use of facial recognition technology and apps in schools.

Elsewhere, the EU has prohibited AI systems that infer the emotions of individuals in education institutions under the EU AI Act (except where the use of the AI system is intended for medical or safety reasons). But with vendors such as edmotions.ai framing these AI systems as innovative solutions that support students, teachers and leadership teams, there is a risk that schools might adopt them to appear technologically advanced, save time and keep pace with perceived innovation trends, at the risk of failing to use them ethically, securely and compliantly.

With the opportunities and risks that AI-assisted emotional recognition can introduce to schools in mind, let’s take a closer look at the Issues, Risks and Safeguards that Scenario 10 presents.

What ‘Issues’ does this Scenario present? 

Turing Trials currently has fifteen Issues cards, and it is the role of the group playing to discuss what they think the top three Issues associated with this Scenario are. Ultimately it is the role of The Investigator to select the final three that are played in the game. There is no ‘right’ answer in Turing Trials, but it is important for the group to discuss and justify which Issues they think that this Scenario presents and why. Some of the Issues that might be highlighted as part of this Scenario are:

  • Lack of Transparency: Relevant individuals were not made aware that personal data would be used in this way or that an AI system is being used at all. It is an important legal requirement for schools to be transparent with individuals about how their personal data is being used. This may also require transparency with parents and guardians, particularly where a child is under a certain age or cannot themselves understand the rights that they have over their personal information. Individuals need to be informed about what personal data is being collected and why, what it will be used for and who it will be shared with. Transparency is a core principle of data protection and privacy laws, because it ensures that people understand, trust, and can meaningfully control how their personal data is used, especially in contexts involving power imbalances (like schools). It protects individuals’ autonomy and dignity, builds trust in institutions and allows individuals to challenge unfair or unlawful practices. In this Scenario it states that ‘The students are not told that this is happening, nor was their consent obtained. They thought the cameras are just being used for attendance monitoring.’ This makes this an Issue that needs to be investigated.
  • Legal Basis Unclear: There is not a clear reason which allows the school to process personal data in this way e.g. the school does not have the appropriate consent. As well as being transparent with individuals about how their personal data will be processed, schools also need to have a legal basis for doing this (which is the legal reason that allows schools to process personal data). Depending on the type of processing and the country that your school is in, the different types of legal basis that are available to use may vary, and consent may not always be required. However, what is important is that the school understands the legal basis which allows them to process personal data. In this Scenario it states that ‘The students are not told that this is happening, nor was their consent obtained’, meaning that the school will need to investigate whether consent was required for this type of processing in their country, and if so, whether they captured it (and if not, why they didn’t).
  • Bias and Discrimination: The use of AI may have led to bias or discrimination in relation to an individual or group. Bias is a significant concern with any AI system, as they are trained on existing data which can contain inherent biases. If the training data for the facial recognition system primarily consisted of faces from a specific demographic group which is not representative of the school’s population, then there may be issues recognising students who are not from that demographic. AI emotion recognition systems are also trained to recognise expressions as well as faces. Common problems here arise when they are trained on data which has limited representation of children, or which is made up of posed or acted expressions (rather than natural ones). Because the emotional expressions of children may differ from those of adults, and emotional expressions also differ culturally, this can lead to individuals being ‘misclassified’. In this Scenario it states that ‘A teacher believes that one black student that keeps being classed as disengaged is in fact very engaged in lessons.’ Knowing this risk about using these AI systems makes this an Issue which would need to be investigated by the school.

What Safeguards might a school use for this Scenario?

Turing Trials also has Safeguards cards, and it is also the role of the group to discuss which three Safeguards they want to put in place to respond to the Issues which The Investigator has highlighted. It is ultimately the role of The Guardian to select the final three that are played in the game. There is no ‘right’ answer, but it is important for the group to discuss which Safeguards they think are the most important to put in place for this Scenario.

The Safeguards cards are deliberately designed to each mitigate at least one of the Issues cards, but as there is no ‘right’ answer, The Guardian does not have to select the three Safeguards which match the Issues selected by The Investigator. Some of the Safeguards that might be highlighted as part of this Scenario are: 

  • Transparency: The school makes it clear to individuals that AI is being used and how their personal data will be used with it. This Scenario states that ‘The students are not told that this is happening’, meaning that it is clear that the school has not been transparent about how they are processing personal data. This means that they will need to investigate why this has happened and put this Safeguard in place if they want to continue using the AI system in this way with the students’ personal data. As the school has not been transparent, they will need to be transparent with individuals (which may include the students, as well as parents and guardians) about exactly how personal data will be captured, used and shared in relation to the AI system. This is most likely to be done through a privacy notice communicated to the individuals in a format that is accessible to them. Making sure that schools are transparent with individuals about how personal data is being used is a key requirement for all schools when processing personal data and is a Safeguard that must be in place. If the school has not been transparent with individuals and has used their personal data, they will also likely have been in breach of data protection and privacy laws. This means that they will be under obligations to rectify this and mitigate harm, which may include ceasing to use the data, rectifying or deleting it where possible and potentially reporting this to relevant Regulators.
  • Legal Basis Confirmed: The school identifies the relevant legal basis which allows them to process personal data with the AI system and completes the necessary steps to use this e.g. capturing the consent of individuals. As well as being transparent with individuals, if the school wants to continue using the AI system, they will need to establish the appropriate legal basis for this system and follow the necessary steps to rely on it. This will include checking whether they can legally use the AI system at all (we’ve discussed the fact that these types of systems are already prohibited in EU countries, other than in limited circumstances). Where not prohibited, many countries require consent for processing biometric data, which the use of facial recognition involves. When required, consent needs to be freely given, specific, informed and unambiguous, and individuals will have the option to withdraw it (which can also be done by a parent or guardian on the child’s behalf in certain circumstances). Schools will also need to be able to evidence that they have collected it. This means that individuals may choose not to consent to the system being used. The school will need to factor this in when weighing the benefits of using it against the general risks, as well as the fact that they may not be able to use it across the whole school. Either way, if the school wants to continue using the AI system, this Safeguard will need to be in place.
  • Bias/ Discrimination Countering: The school takes steps to reduce bias in the AI system including any discriminatory effects of using it e.g. by making training data more diverse or representative. In this Scenario, the school will need to investigate whether the AI system has incorrectly classed the student as disengaged, for which there is a strong indication, as the teacher firmly believes this to be the case. They will also need to investigate whether this has impacted any other students, understand whether there have been any other misclassifications and why they are happening, and correct this if they want to continue using the system. This correction could be done by introducing more diverse training data sets, carrying out regular random reviews of classifications by a human, and requiring a mandatory review by a staff member whenever a classification leads to a decision being made about a student which could negatively impact them (a simple sketch of what such a review could look for follows below). The accuracy of any AI tool should have been verified through vendor vetting and ethics and privacy by design processes prior to use (including to understand whether the school should be legally and ethically using this type of system in the first place). If your school lacks the time or expertise to vet its vendors for AI, then a product like 9ine’s Vendor Management module would check these things for you, highlighting any issues and ensuring that the appropriate agreements are in place with the vendor to protect the school if the fault lies at the vendor’s end. The misclassification of students’ emotions through these types of systems can lead to serious harms such as unfair disciplinary actions, anxiety and distress due to constant monitoring, a loss of trust in teachers and the school, and unfair labelling and stigmatisation. This is why it is a must to have this Safeguard in place.
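To make the idea of a regular human review more concrete, here is a minimal, hypothetical sketch of the kind of check a school (or its vendor) could run over exported classification logs: compare how often the system labels each demographic group ‘disengaged’, and how often a reviewing teacher disagrees with that label. The field names (`group`, `ai_label`, `teacher_label`) and the data itself are illustrative assumptions only; whether a real emotion-recognition product exposes anything like this would depend entirely on the vendor.

```python
from collections import defaultdict

# Hypothetical classification log: one record per student-lesson observation.
# Field names and values are illustrative only; a real product may expose
# nothing like this.
records = [
    {"group": "A", "ai_label": "disengaged", "teacher_label": "engaged"},
    {"group": "A", "ai_label": "engaged",    "teacher_label": "engaged"},
    {"group": "B", "ai_label": "engaged",    "teacher_label": "engaged"},
    {"group": "B", "ai_label": "disengaged", "teacher_label": "disengaged"},
    # ... in practice, many more rows exported from the system
]

def disengagement_rates(rows):
    """Rate at which the AI labels each demographic group 'disengaged',
    and how often that label disagrees with the reviewing teacher."""
    totals = defaultdict(int)
    ai_disengaged = defaultdict(int)
    disagreements = defaultdict(int)
    for row in rows:
        g = row["group"]
        totals[g] += 1
        if row["ai_label"] == "disengaged":
            ai_disengaged[g] += 1
            if row["teacher_label"] != "disengaged":
                disagreements[g] += 1
    return {
        g: {
            "ai_disengaged_rate": ai_disengaged[g] / totals[g],
            "disagreement_rate": disagreements[g] / max(ai_disengaged[g], 1),
        }
        for g in totals
    }

if __name__ == "__main__":
    for group, stats in disengagement_rates(records).items():
        flag = " <-- review" if stats["disagreement_rate"] > 0.5 else ""
        print(f"Group {group}: AI 'disengaged' rate "
              f"{stats['ai_disengaged_rate']:.0%}, "
              f"teacher disagreement {stats['disagreement_rate']:.0%}{flag}")
```

A large gap between groups in either figure would not prove bias on its own, but it is exactly the kind of signal that should trigger the human reviews and vendor questions described above.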

Are there any other Issues and Safeguards that we might have selected? 

Because there are no right answers in Turing Trials, these don’t have to be the Issues and Safeguards that you choose; you may also have chosen:

  • Issues: Process Not Followed and Safeguards: Repeat or Complete a Process: We’ve mentioned the fact that this Scenario could have been avoided if a privacy by design, ethics by design and vendor management process had been used. How do you think these processes might have helped to prevent this Scenario? Are you confident that your school has these in place and is using them effectively for all AI systems?
  • Issues: Re-purposing of Personal Data and Safeguards: Purpose Review: The Scenario mentions that the students thought that the cameras were just being used for attendance monitoring. Does this mean that personal data has been re-purposed here? What Safeguards should the school have put in place if they wanted to do this?

Identifying the Risk Level and making a Decision 

As the game unfolds, at different points it is the role of the Risk Analyst to assess the level of risk that the Scenario presents based on the Issues and Safeguards that have been selected, deciding whether this presents a high, low or medium risk to the school. Turing Trials deliberately does not specify what defines each level of risk, as this will differ between schools and the groups that are playing, but you may want to consider what would impact your Risk Level decisions. Does it make a difference that only one student so far has potentially been classified incorrectly? Does it make a difference that this system is being used for emotional recognition rather than just attendance monitoring? At the end of the game, The Narrator and Decision Maker will need to make the decision on whether they would accept the Risk Level of this Scenario with the Issues highlighted and Safeguards put in place on behalf of the school. What decision do you think you would make and why? 

What else do schools need to consider and how else can 9ine help?

We’ve discussed a lot in this Scenario that schools need to consider when using AI, from data protection and privacy concerns to vetting vendors and completing ethics and privacy by design processes. If anything in this Scenario makes you think that your school needs more support with the complexities of using AI safely, securely and compliantly, at 9ine we have a number of solutions which can help you. These include: 

  • Academy LMS: If anything in this Scenario makes you think that your school needs to improve its AI literacy (from understanding the risks of using AI to how to update your existing processes and policies for its use), 9ine’s on-demand training and certification platform enables schools to enrol individual staff members or entire groups in comprehensive training courses, modules and assessments, featuring in-built quizzes for knowledge checks. Our AI Pathway is your school's learning partner for AI ethics and governance. With over 20 differentiated course levels, you can enrol all staff in an Introductory course on AI, then enrol those staff with greater responsibility in Intermediate and Advanced courses. Schools can purchase courses on a per-person, per-course basis, and we are currently offering free trials for up to three members of a school’s leadership team, so contact us if you would like to take advantage of this or have any questions on Academy LMS.
  • Vendor Management: We’ve discussed the importance of vetting vendors for compliance when using AI, particularly to ask vendors about bias and their compliance with the laws that your school is subject to. This vetting takes time and effort, which is where Vendor Management, 9ine’s centralised system to assess and monitor the compliance of all your EdTech vendors, supports you. This intelligence saves schools hundreds of hours of manual review and helps ensure you’re only using EdTech that meets required standards, or that the safeguards and mitigations schools need to put in place are highlighted. Vendor Management lets you easily identify risks and take action, whether that means engaging a vendor for improvements or configuring the tool for safety. Contact us if you would like to find out more.