Turing Trials Scenario 8: Locked in by the algorithm: How AI can reinforce classroom inequality
Welcome to our next installment of the ‘Turing Trials Walk-Throughs’, where we take you through each of the ten scenarios we currently have for Turing Trials to discuss some of the risks, issues and safeguards you will need to consider for each. If you haven’t downloaded it already, Turing Trials is available for free from our website, including full instructions on how to play the game. In this blog, we will take you through Scenario 8, which concerns the use of AI-assisted emotional recognition in the classroom to assess how happy and engaged students are in lessons.
The Scenario: ‘A school is using an AI-assisted facial recognition system to infer how happy and engaged students are in lessons, so that those who are classed as disengaged can receive extra lessons. The students are not told that this is happening, nor was their consent obtained; they thought the cameras were just being used for attendance monitoring. A teacher believes that one black student who keeps being classed as disengaged is in fact very engaged in lessons.’
In our previous blog, we discussed Turing Trials Scenario 2, where a school was using AI-assisted facial recognition for attendance monitoring. In this Scenario, the school is looking to use it for a different purpose: inferring how happy and engaged students are in lessons.
There might be many motivations for schools to gather more data on how engaged students are in lessons, and given the limited time and resources that schools have, AI can seem attractive for a number of reasons. AI can provide real-time, measurable data which leaders may see as more objective than teacher observation, and that data can be aggregated across classes, subjects or time. It can also help teachers to identify when students appear bored, confused or disengaged, allowing them to adjust lesson pacing or teaching strategies and flag students who need additional support. Emotion-detection tools can also be used as early warning systems, identifying stress, anxiety or disengagement before problems escalate and supporting schools in their safeguarding responsibilities. Finally, this data can be used by schools to demonstrate to inspectors, boards or governments that teaching methods are effective, and to inform professional development or curriculum changes.
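To make it concrete how per-frame emotion labels might be turned into an ‘engagement’ flag, and how quickly a systematic classification error compounds into a student being repeatedly flagged, here is a minimal, hypothetical Python sketch. It is not based on any particular vendor’s product; the label groupings, threshold and scoring method are all assumptions made purely for illustration.

```python
# Hypothetical sketch: turning per-frame emotion labels into a 'disengaged' flag.
# Label groupings, threshold and scoring are illustrative assumptions only.

from collections import Counter

# Assumed grouping of classifier output labels treated as 'engaged'
ENGAGED_LABELS = {"happiness", "surprise", "neutral"}

def engagement_score(frame_labels):
    """Fraction of sampled video frames labelled with an 'engaged' emotion."""
    if not frame_labels:
        return 0.0
    counts = Counter(frame_labels)
    engaged = sum(counts[label] for label in ENGAGED_LABELS)
    return engaged / len(frame_labels)

def flag_as_disengaged(frame_labels, threshold=0.5):
    """Flag a student for extra lessons if their engagement score falls below the threshold."""
    return engagement_score(frame_labels) < threshold

# A student whose neutral expression the classifier reads correctly...
accurate = ["neutral", "neutral", "happiness", "neutral", "surprise", "neutral"]
# ...versus the same student if the classifier systematically mislabels
# their neutral expression as 'sadness'.
mislabelled = ["sadness", "sadness", "happiness", "sadness", "surprise", "sadness"]

print(engagement_score(accurate), flag_as_disengaged(accurate))        # 1.0 False
print(engagement_score(mislabelled), flag_as_disengaged(mislabelled))  # 0.33... True
```

The point of the sketch is that the flag inherits whatever bias sits in the underlying classifier: if the model mislabels one student’s neutral expression more often than others’, that student is flagged lesson after lesson, regardless of how engaged a teacher observing the class would say they are.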
But despite the potential benefits of these systems, serious concerns have been raised about their use around the world. This was demonstrated back in 2018, when Hangzhou No.11 High School in China received backlash from students and parents. The school had originally used facial recognition to track when students arrived at and left the campus, picked up lunch and borrowed books. It then introduced the technology into the classroom to track students’ behaviour and read their facial expressions, grouping each face into one of seven emotions: anger, fear, disgust, surprise, happiness, sadness and ‘neutral’. Students disliked the constant monitoring and learned how to ‘game the system’, complaints followed, and parents raised concerns over privacy. This reaction led to the system being paused, and in 2019 the Chinese government stated that it planned to ‘curb and regulate’ the use of facial recognition technology and apps in schools.
Elsewhere, the EU has prohibited AI systems that infer the emotions of individuals in education institutions (except where the AI system is intended to be put in place for medical or safety reasons) under the EU AI Act. But with vendors such as edmotions.ai framing these AI systems as innovative solutions that support students, teachers and leadership teams, there is a risk that schools adopt them to appear technologically advanced, save time and keep pace with perceived innovation trends, without ensuring they are used ethically, securely and compliantly.
With the opportunities and risks that AI-assisted emotional recognition can introduce to schools, let’s take a closer look at the Issues, Risks and Safeguards that Scenario 8 presents.
Turing Trials currently has fifteen Issues cards, and it is the role of the group playing to discuss what they think the top three Issues associated with this Scenario are. Ultimately, it is the role of The Investigator to select the final three that are played in the game. There is no ‘right’ answer in Turing Trials, but it is important for the group to discuss and justify which Issues they think this Scenario presents and why. Some of the Issues that might be highlighted as part of this Scenario are:
Turing Trials also has Safeguards cards, and it is also the role of the group to discuss which three Safeguards they want to put in place to respond to the Issues which The Investigator has highlighted. It is ultimately the role of The Guardian to select the final three that are played in the game. There is no ‘right’ answer, but it is important for the group to discuss which Safeguards they think are the most important to put in place for this Scenario.
The Safeguards cards are deliberately designed to each mitigate at least one of the Issues cards, but as there is no ‘right’ answer, The Guardian does not have to select the three Safeguards which match the Issues selected by The Investigator. Some of the Safeguards that might be highlighted as part of this Scenario are:
Because there are no right answers in Turing Trials, these don’t have to be the Issues and Safeguards that you choose; you may also have chosen:
As the game unfolds, at different points it is the role of The Risk Analyst to assess the level of risk that the Scenario presents based on the Issues and Safeguards that have been selected, deciding whether it presents a high, medium or low risk to the school. Turing Trials deliberately does not specify what defines each level of risk, as this will differ between schools and the groups that are playing, but you may want to consider what would influence your Risk Level decisions. Does it make a difference that only one student so far has potentially been classified incorrectly? Does it make a difference that this system is being used for emotional recognition rather than just attendance monitoring? At the end of the game, The Narrator and Decision Maker will need to decide, on behalf of the school, whether they would accept the Risk Level of this Scenario with the Issues highlighted and the Safeguards put in place. What decision do you think you would make, and why?
We’ve discussed a lot in this Scenario that schools need to consider when using AI, from data protection and privacy concerns to vetting vendors and completing ethics and privacy by design processes. If anything in this Scenario makes you think that your school needs more support with the complexities of using AI safely, securely and compliantly, at 9ine we have a number of solutions which can help you. These include: