Turing Trials Scenario 2: Using Facial Recognition for Attendance Monitoring


Join 9ine for the second of our ‘Turing Trials Walk-throughs’, where we take you through Scenario 2, discussing the risks, issues and safeguards associated with the use of facial recognition to monitor attendance. 

In our previous blog, we announced the launch of ‘Turing Trials Walk-throughs’, where between now and the end of the year we will take you through each of the ten scenarios we currently have for Turing Trials, discussing some of the risks, issues and safeguards you will need to consider for each. If you haven’t already, Turing Trials is available to download for free from our website. In this blog, we will take you through Scenario 2, which concerns the use of AI (and in particular facial recognition) for attendance monitoring in schools.

The Scenario: ‘A school wants to save time on attendance monitoring, so it decides to use facial recognition to monitor when students arrive at school. The school notifies parents when their children do not attend, to which a number of parents complain that they were not aware that this was being done and that they did not provide consent. The school also notices that a high proportion of the individuals marked as absent are from ethnic minorities.’

Facial Recognition for Attendance Monitoring

Attendance monitoring in schools is crucial, from safeguarding students and meeting legal requirements to tracking academic progress and helping to identify early needs for intervention. But for some schools it can be a significant administrative burden, one that takes time to complete and can result in data accuracy errors where paper registers get lost, or where teachers in busy classrooms may not notice late arrivals or early departures.

These issues have led some schools to trial facial recognition to monitor attendance. For example, in India, the Department of School Education and Literacy in Karnataka announced plans to start using mobile phone-based facial recognition to mark attendance and track the beneficiaries of government welfare schemes (such as free midday meals). Facial recognition for attendance has also been trialled in schools in Australia and the US. But these plans have often been met with concerns over data misuse, exploitation and abuse.

Whether or not your school or school board is considering the use of facial recognition for attendance monitoring, it is important to understand the risks associated with it prior to implementation. This is why 9ine have created this Scenario: so that you can have discussions amongst staff and students about the risks and issues associated with its use, and about the safeguards you would need to put in place if you intend to use it (but also about the risks and issues of facial recognition more broadly).

What ‘Issues’ does this Scenario present? 

Turing Trials currently has fifteen Issues cards, and it is the role of the group playing to discuss what they think the top three Issues associated with this Scenario are. Ultimately it is the role of The Investigator to select the final three that are played in the game. There is no ‘right’ answer in Turing Trials, but it is important for the group to discuss and justify which Issues they think this Scenario presents and why. Some of the Issues that might be highlighted as part of this Scenario are:

  • Lack of Transparency: Relevant individuals were not made aware that personal data would be used in this way or that an AI system is being used at all. It is an important legal requirement for schools that they are transparent with individuals about how their personal data is being used. This may also require transparency with parents and guardians, particularly where a child is under a certain age or cannot themselves understand the rights they have over their personal information. Individuals need to be informed what personal data is being collected and why, what it will be used for and who it will be shared with. In this Scenario, it states that ‘a number of parents complain that they were not aware that this was being done’, meaning that schools will need to investigate whether they were transparent about how the children’s personal data was being used for attendance monitoring, both with the individuals and with their parents or guardians.
  • Legal Basis Unclear: There is no clear legal basis which allows the school to process personal data in this way, e.g. the school does not have the appropriate consent. As well as being transparent with individuals about how their personal data will be processed, schools also need to have a legal basis for doing so. Depending on the type of processing and the country your school is in, the legal bases available may vary, and consent may not always be required. What is important is that the school understands the legal basis which allows it to process personal data. In this Scenario, it states that ‘...a number of parents complain that they were not aware that this was being done and that they did not provide consent’, meaning that schools will need to investigate whether consent was required for this type of processing in their country and, if so, whether they captured it. In fact, in many countries consent is required for processing biometric data, which the use of facial recognition relies on. Where required, consent needs to be freely given, specific, informed and unambiguous, and individuals have the option to withdraw it (which can also be done by the parent or guardian on the child’s behalf in certain circumstances). This means that if parents are complaining that they did not provide consent, this is an issue that schools will need to investigate further.
  • Bias and Discrimination: The use of AI may have led to bias or discrimination in relation to an individual or group. Bias is a significant concern with any AI system, as they are trained on existing data which can contain inherent biases. If the training data for the facial recognition system primarily consisted of faces from a specific demographic group which is not representative of the school’s population, then the system may struggle to recognise students from other demographics. For example, when New York’s Lockport City School District used a facial recognition system from third-party vendor SN Technologies, it was found that SN Technologies had misled the district about the accuracy of the algorithm it used and downplayed how often it misidentified black faces. In this Scenario, it states that ‘The school also notices that a high proportion of the individuals marked as absent are from ethnic minorities’. This means that (knowing the issues of bias with AI) the school would need to verify whether these absences were correct, and if not, whether this was due to bias (and a lack of accuracy in the AI tool being used); one way of making that check concrete is sketched below.
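To make the bias check above concrete, one approach a school could take is to compare the AI system’s attendance log against a manually verified register and calculate the rate of false absences per demographic group. The sketch below is a minimal, hypothetical illustration of that idea in Python; the record format, group labels and the 10% disparity threshold are all assumptions made for the example, not part of Turing Trials or any particular attendance system.

```python
# Minimal sketch: comparing AI-recorded absences against a verified
# manual register to check for disparate error rates between groups.
# Record format, group labels and threshold are illustrative assumptions.
from collections import defaultdict

# Each record: (student_id, group, ai_marked_absent, actually_absent)
records = [
    ("s001", "group_a", True,  False),  # AI wrongly marked absent
    ("s002", "group_a", False, False),
    ("s003", "group_b", False, False),
    ("s004", "group_b", True,  True),   # correctly recorded absence
    # ...in practice, one row per student per school day
]

false_absences = defaultdict(int)  # wrongly-marked-absent count per group
totals = defaultdict(int)          # records checked per group

for _, group, ai_absent, truly_absent in records:
    totals[group] += 1
    if ai_absent and not truly_absent:
        false_absences[group] += 1

rates = {g: false_absences[g] / totals[g] for g in totals}
print("False-absence rate per group:", rates)

# A large gap between groups (the 10% threshold here is a judgement
# call) suggests the tool may be misidentifying some students.
if max(rates.values()) - min(rates.values()) > 0.10:
    print("Warning: disparity in error rates - investigate for bias.")
```

Even a simple check like this would surface the pattern described in the Scenario and give the school evidence to take to the vendor.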

What Safeguards might a school use for this Scenario?

Similar to the Issues cards, Turing Trials has Safeguards cards, and it is also the role of the group to discuss which three Safeguards they want to put in place to respond to the Issues which The Investigator has highlighted. It is ultimately the role of The Guardian to select the final three that are played in the game. There is no ‘right’ answer, but it is important for the group to discuss which Safeguards they think are the most important to put in place for this Scenario. 

The Safeguards cards are deliberately designed to each mitigate at least one of the Issues cards, but as there is no ‘right’ answer, The Guardian does not have to select the three Safeguards which match the Issues selected by The Investigator. Some of the Safeguards that might be highlighted as part of this Scenario are: 

  • Transparency: The school makes it clear to individuals that AI is being used and how their personal data will be used with it. Once the school has investigated the complaints from parents about the fact that they ‘were not aware that this was being done’, they will likely find that either a) they were not transparent, b) they were transparent, but not transparent enough, or c) they were transparent, and can remind parents of how they did this. If the school has not been transparent, or not transparent enough, then it will need to be transparent with individuals (which may include the student, as well as parents and guardians) about exactly how personal data will be captured, used and shared in relation to attendance monitoring, and document this. This is most likely to be done through a privacy notice communicated to the individuals in a format that is accessible to them. If the school believes that it has been transparent, it will need to respond to the parents who complained, highlighting exactly how and when the school was transparent about this processing. It may also want to communicate with all relevant individuals again about how personal data will be used for attendance monitoring purposes. Whatever the findings, making sure that schools are transparent with individuals about how personal data is being used is a key requirement for all schools when processing personal data and is a safeguard that must be in place.
  • Legal Basis Confirmed: The school identifies the relevant legal basis which allows it to process personal data with the AI system and completes the necessary steps to use this, e.g. capturing the consent of individuals. Once the school has investigated the complaints from parents about the fact that ‘they were not aware that this was being done and that they did not provide consent’, it is likely to find that either a) consent was not captured, b) the consent captured was insufficient, c) consent was captured, or d) consent was not the legal basis being relied on to use the students’ personal data for attendance monitoring. If consent was not captured, or was insufficient, then the school will need to capture it before using personal data in this way, unless consent is not the legal basis the school is relying on. But remember: where consent is required, as it often is for biometric processing, it is optional for the individual and can be withdrawn at any point. If consent can be relied upon and was appropriately captured, then schools will need to evidence this to the parents, who may still choose to withdraw it, particularly given their belief that there was a lack of transparency and their concerns about the use of facial recognition. If they do, the school would no longer be able to use their children’s personal data in this way. If consent is not the legal basis that the school is relying on, then the school will need to confirm that the legal basis it is relying on is correct (which should have been done through a Privacy Impact Assessment, prior to deployment of the facial recognition system) and communicate back to the parents why consent is not required, as well as the legal basis being relied on. Identifying the legal basis which allows schools to process personal data, and making sure all necessary steps to rely on it have been completed, is a key safeguard that schools must have in place to ensure that they are legally compliant and that they are protecting the personal data and privacy of their students.
  • Bias/Discrimination Countering: The school takes steps to reduce bias in the AI system, including any discriminatory effects of using it, e.g. by making training data more diverse or representative. Once the school has investigated noticing ‘that a high proportion of the individuals marked as absent are from ethnic minorities’, it is likely to find either a) that the absences were recorded correctly, or b) that there is an issue with the accuracy of the AI tool and it has been inaccurately marking students as absent. If the absences were recorded correctly then no further action on this particular point will be required; however, if the absences were recorded incorrectly then this is a serious issue. The accuracy of any AI tool should have been verified through vendor vetting and ethics and privacy by design processes prior to use. If your school lacks the time to do this, then a product like 9ine’s Vendor Management module would have checked these things for you, highlighting any issues and ensuring that the appropriate agreements were in place with the vendor to protect the school if the accuracy fault is on the vendor’s side. Incorrect attendance monitoring can lead to serious legal issues (non-compliance with attendance monitoring requirements and data privacy laws) as well as safety issues (for example, if there is an emergency at the school and all students need to be accounted for). This means that ensuring that there is bias/discrimination countering in any AI system being used is a key safeguard that must be in place.

Are there any other Issues and Safeguards that we might have selected? 

Because there are no right answers in Turing Trials, these don’t have to be the Issues and Safeguards that you choose. You may have also chosen: 

  • Issues: Process Not Followed and Safeguards: Repeat or Complete a Process: Do you think that if your school could have been certain that all processes had been followed correctly (e.g. privacy by design, ethics by design, vendor management etc.), it could have been sure that the issues of transparency, consent and bias/discrimination would have already been safeguarded against before beginning to use the AI system?
  • Issues: Lack of Training/Awareness and Safeguards: Training/Awareness Activities: Do you think that if all staff and students had received the appropriate training and awareness on how the school was using AI, and how to use it appropriately, parents would still have made these complaints, and would the AI tool have been introduced without all of the required safeguards in place?
  • Issues: Lack of Human Intervention and Safeguards: Human in the Loop: Do you think that regular checks by humans, both whilst the tool was being trialled and during its ongoing use, would have highlighted any potential accuracy issues and, with them, any issues of bias/discrimination?

Identifying the Risk Level and making a Decision 

As the game unfolds, at different points it is the role of the Risk Analyst to assess the level of risk that the Scenario presents based on the Issues and Safeguards that have been selected, deciding whether it presents a high, medium or low risk to the school. Turing Trials deliberately does not specify what defines each level of risk, as this will differ between schools and the groups that are playing, but you may want to consider what would impact your Risk Level decisions (for example, would it have made a difference if this was a different type of AI tool that was not using biometric information, or if the individuals marked as absent had not been disproportionately from ethnic minorities?). At the end of the game, The Narrator and Decision Maker will need to decide, on behalf of the school, whether they would accept the Risk Level of this Scenario with the Issues highlighted and Safeguards put in place. What decision do you think you would make, and why?

What else do schools need to consider and how else can 9ine help? 

There is no doubt that AI brings many opportunities to schools, but it needs to be implemented safely, securely and compliantly. This is particularly true in the case of facial recognition, but the issues and safeguards we have discussed are applicable to all uses of AI in education. At 9ine we offer a number of solutions that can help schools to ensure the safe, secure and compliant use of AI. These include:

  • Vendor Management: We’ve discussed the importance of vetting vendors for compliance when using AI, and a particular case where it was the vendor that presented risks to the school. This vetting takes time and effort, which is where Vendor Management, 9ine’s centralised system for assessing and monitoring the compliance of all your EdTech vendors, supports you. This intelligence saves schools hundreds of hours of manual review and helps ensure you’re only using EdTech that meets the required standards, or highlights the safeguards and mitigations that schools need to put in place. Vendor Management lets you easily identify risks and take action, whether that means engaging a vendor for improvements, configuring the tool safely, or ultimately deciding that the use of facial recognition is simply too high a risk for the school to accept. Contact us if you would like to find out more.
  • Academy LMS: We’ve highlighted the importance of having the appropriate Training and Awareness Activities on AI at your school. If you think your school needs to improve its AI literacy, 9ine’s on-demand training and certification platform enables schools to enrol individual staff members or entire groups in comprehensive training courses, modules and assessments, featuring in-built quizzes for knowledge checks. Our AI Pathway is your school’s learning partner for AI ethics and governance. With over 20 differentiated course levels, you can enrol all staff in an Introductory course on AI, then enrol those staff with greater responsibility in Intermediate and Advanced courses. Schools can purchase courses on a per-person, per-course basis, and we are currently offering free trials for up to three members of a school’s leadership team, so contact us if you would like to take advantage of this, or if you have any questions about Academy LMS.

Join us for the next instalment of Turing Trials Walk-Throughs, where we will take you through Scenario 3, which looks into the risks, issues and safeguards associated with using AI systems to automatically review applicant CVs at your school.
