
Turing Trials Scenario 3: Why explainability and human oversight matter when AI screens CVs


Join 9ine for the third of our ‘Turing Trials Walk-throughs’, where we take you through Scenario 3, discussing the risks, issues and safeguards associated with the use of AI to automatically review applicant CVs, including the importance of the explainability of AI and having a ‘Human in the Loop’.

Welcome to the next instalment of the ‘Turing Trials Walk-throughs’, where between now and the end of the year, we will take you through each of the ten scenarios we currently have for Turing Trials, discussing some of the risks, issues and safeguards you will need to consider for each. If you haven’t downloaded it already, Turing Trials is available to download for free from our website, including full instructions on how to play the game. In this blog, we will take you through Scenario 3, which concerns the use of AI to automatically review applicant CVs in schools, and why explainability and having a ‘human in the loop’ are so important.

Turing Trials Scenario 3 - Using AI to automatically review applicant CVs

‘A school uses an AI system to review applicant CVs, and automatically rejects certain ones as not appropriate for the school. One applicant asks for an explanation of how the school reached the decision to reject them. The school does not understand how the AI system works and so cannot explain this to the individual and upholds the rejection without a human review.’ 

Using AI to review applicant CVs 

One of the key benefits for schools when it comes to the use of AI is efficiency. AI can be used to reduce workloads, improve accuracy, streamline operations and support better decision-making. Whilst the exact number of applications and CVs schools will receive for a role varies, it can take a significant amount of time to manually sift through them (particularly where they begin to creep into the hundreds per role). When it comes to helping with recruitment in schools, AI can be used to:

  • Automate the screening of large numbers of applications: AI can automatically scan CVs and résumés to identify required qualifications, check for teaching certifications, confirm minimum levels of experience, etc. 
  • Highlight key skills and experience: AI can extract and highlight information such as safeguarding experience, subject knowledge, behaviour management strategies etc. 
  • Flag potential concerns or inconsistencies: AI can identify unexplained employment gaps, missing required qualifications, conflicting dates and incomplete information. 
  • Reduce unconscious bias in early screening: AI can remove identifying details and focus only on skills and experience during initial screening phases to help ensure that shortlisting is fairer and more consistent (a simple illustrative sketch of this kind of screening follows this list). 
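
To make the first and last of these points more concrete, the sketch below shows, in simple Python, how a screening step might redact identifying details before checking a CV against required keywords. It is purely illustrative and not a description of any real product: the keyword list, function names and redaction rules are assumptions made for this example, and a real system would be far more sophisticated (and would still need the safeguards discussed below).

    import re

    # Purely illustrative criteria; a real school would define its own requirements.
    REQUIRED_KEYWORDS = {"qualified teacher status", "safeguarding", "behaviour management"}

    def anonymise(cv_text: str) -> str:
        """Strip obvious identifying details (emails, phone numbers) before screening."""
        cv_text = re.sub(r"\S+@\S+", "[email removed]", cv_text)
        cv_text = re.sub(r"\+?\d[\d\s\-]{7,}\d", "[phone removed]", cv_text)
        return cv_text

    def screen(cv_text: str) -> dict:
        """Record which required keywords are present and which are missing,
        so that any shortlisting suggestion can later be explained."""
        text = anonymise(cv_text).lower()
        found = {kw for kw in REQUIRED_KEYWORDS if kw in text}
        return {"found": sorted(found), "missing": sorted(REQUIRED_KEYWORDS - found)}

Even in this toy example, the output records why a CV was or was not flagged, which is exactly the kind of information a school would need in order to explain a decision later on.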

But, whilst there are a number of benefits to be achieved by using AI for recruitment in schools, it has to be used safely, securely and compliantly to realise them. Let’s take a closer look at Scenario 3 and the risks, issues and safeguards that schools need to consider when looking to use an AI system to review applicant CVs. 

What ‘Issues’ does this Scenario present? 

Turing Trials currently has fifteen Issues cards, and it is the role of the group playing to discuss what they think the top three Issues associated with this Scenario are. Ultimately it is the role of The Investigator to select the final three that are played in the game. There is no ‘right’ answer in Turing Trials, but it is important for the group to discuss and justify which Issues they think that this Scenario presents and why. Some of the Issues that might be highlighted as part of this Scenario are: 

  • Lack of Explainability: It is not possible to explain how the AI system works or why it made a certain decision. AI systems are inherently flawed, because they rely on imperfect data and incomplete assumptions, meaning that they can never be completely accurate, neutral or objective, and cannot grasp nuance, emotion or context. They are also often described as ‘black boxes’, because whilst it is often possible to describe the data that is fed into an AI, and the decisions or predictions it has made, it is usually very difficult to understand exactly how the AI arrived at them. In this Scenario, it states that ‘The school does not understand how the AI system works and so cannot explain this to the individual’. Lack of explainability creates practical, ethical and legal issues. It can erode trust, hide bias, reduce accountability, limit error detection, and hinder informed human oversight. It can also create legal issues when personal data is involved, as in various countries it is a legal requirement to be able to explain to individuals how their data is being used. In some jurisdictions (such as the EU and UK) it is also a specific requirement to be able to provide an individual with meaningful information about the logic involved in (and the consequences of) using their personal data for automated decision making. This is especially important where there might be legal or similarly significant consequences, e.g. where the decision might ultimately affect their employment opportunities. As the applicant in this Scenario has asked ‘for an explanation of how the school reached the decision to reject them’ and the school is unable to provide this, this is an Issue that will need to be investigated. 
  • Lack of Transparency: Relevant individuals were not made aware that personal data would be used in this way or that an AI system was being used at all. It is an important legal requirement for schools to be transparent with individuals about how their personal data is being used. Before the school begins processing their personal data, individuals need to be informed about what personal data is being collected and why, what it will be used for, and who it will be shared with. In some countries schools may also need to provide them with meaningful information about the logic involved in (and the consequences of) using their personal data for any automated decision making up front (and not just when the individual asks for it). In this Scenario, because the school cannot explain how the AI system was processing the individual’s personal data when asked, it is unlikely that they were able to be transparent with them about this when they collected the applicant’s personal data, although they will need to investigate this Issue. 
  • Lack of Human Intervention: There has been a lack of human intervention in the decisions made by AI about an individual. Human intervention is where a human steps in after an AI system has been used to make a decision, typically to check, override, or appeal the outcome. Because of the inherent flaws of AI, human intervention is often needed, and can be a key ethical and legal requirement when using AI to automatically make decisions about individuals that could result in serious consequences for them. For example, in the UK and the EU, individuals cannot be subject to a decision based solely on automated decision making which affects their employment opportunities, unless certain circumstances apply and safeguards are in place. In this Scenario, the school ‘upholds the rejection without a human review’, meaning that lack of human intervention is clearly an Issue here. 

What Safeguards might a school use for this Scenario?

Turing Trials also has Safeguards cards, and it is also the role of the group to discuss which three Safeguards they want to put in place to respond to the Issues which The Investigator has highlighted. It is ultimately the role of The Guardian to select the final three that are played in the game. There is no ‘right’ answer, but it is important for the group to discuss which Safeguards they think are the most important to put in place for this Scenario. 

The Safeguards cards are deliberately designed to each mitigate at least one of the Issues cards, but as there is no ‘right’ answer, The Guardian does not have to select the three Safeguards which match the Issues selected by The Investigator. Some of the Safeguards that might be highlighted as part of this Scenario are: 

  • Explainable AI: The school makes sure that the AI system can provide clear and understandable explanations for its operations, decisions and outputs, so that individuals can comprehend and explain how the AI system reached a particular conclusion. Because of the practical, ethical and legal issues which can be caused by a lack of explainability, it is important that schools ensure they can explain how their AI systems work and how any decisions have been reached. Explainability relies both on the functionality of the AI system itself and on the expertise of the individuals that will be required to interpret the explanations it provides. The AI system needs to have features which explain how its decisions were made, for example, by highlighting which specific part(s) of the applicant’s CV in this Scenario caused it to be rejected. As most AI tools that schools use are provided by third-party EdTech vendors, it will be an important part of the vendor vetting process to ensure that an AI system provides the level of explainability that the school requires. The individuals that need to interpret and explain the outputs of the system will, in turn, need the level of AI literacy and knowledge required to do this. In this Scenario, had this Safeguard been in place, ensuring both that the AI system could provide information on how it reached its decision and that the school could interpret it, the school would not have found itself unable to explain the decision to the individual. 
  • Transparency: The school makes it clear to individuals that AI is being used and how their personal data will be used with it. It is important that schools are transparent with individuals about how their personal data is being used with AI systems, both so that individuals trust them with how their personal data will be handled, and so that they meet their legal obligations. If the school had been transparent with the applicant about how their personal data was intended to be used with the AI system, the logic involved and the potential consequences for them of their CV automatically being reviewed, then the individual may not have had to ask for an explanation at a later date. Providing this level of transparency would also have required the school to make sure that the AI system it was using was explainable (another Safeguard). Having this Safeguard in place would have made sure that the school had met its legal obligations for transparency and made it clear to the applicant that an AI system would be used to automatically review their CV.
  • Human in the Loop: The school makes sure that a human reviews a decision that the AI system has made. Again, because of AI’s inherent flaws, ‘human in the loop’ is a key safeguard when it comes to the safe and responsible use of AI. It means that people need to stay involved at key steps of an AI’s decision-making process, rather than letting the AI act entirely on its own. The school should have had a process in place for reviewing the decisions made by the AI system, especially where they could result in serious consequences for the individual, like an impact on their employment opportunities. Having a ‘human in the loop’ could involve having a staff member review all of the decisions made by the AI which resulted in CVs being ‘rejected’ to check why before upholding the decision (which would be a legal requirement in some countries), but it should have at the very least involved a human review of the decision to reject the applicant in this particular Scenario, where they were asking for an explanation of the decision. A simple illustrative sketch of how explainable outputs and a human review step might fit together follows this list. 
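
As a rough illustration of how the Explainable AI and Human in the Loop Safeguards might work together in practice, the short Python sketch below uses entirely hypothetical names and structures: the screening result carries the reasons behind its recommendation, and the routing step never upholds a rejection without queuing it for a human reviewer.

    from dataclasses import dataclass, field

    @dataclass
    class ScreeningResult:
        """Hypothetical output of an AI screening step, carrying its reasons (explainability)."""
        applicant_id: str
        recommendation: str                                # e.g. "shortlist" or "reject"
        reasons: list[str] = field(default_factory=list)   # why the AI recommended this

    def route(result: ScreeningResult, review_queue: list) -> str:
        """Never uphold a rejection automatically: queue it for a human reviewer instead."""
        if result.recommendation == "reject":
            review_queue.append(result)   # a staff member checks the reasons and makes the final call
            return "pending human review"
        return "forwarded for human shortlisting"

However a school’s actual system works, the principle is the same: the AI’s output should carry enough information to be explained, and no rejection should be upheld without a person reviewing it.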

Are there any other Issues and Safeguards that we might have selected? 

Because there are no right answers in Turing Trials, these don’t have to be the Issues and Safeguards that you choose; you may instead have chosen: 

  • Issues: Process Not Followed and Safeguards: Repeat or Complete a Process: If there had been a clear process for using the specific AI system in this Scenario, including how (and when) to review the decisions made by it, might this have changed this Scenario? Would the creation of this process have surfaced any issues with the school being unable to understand how the AI system works and why it made this decision? Would following a robust vendor vetting and vendor management process have ensured that the AI system had the functionality to make its decision explainable? 
  • Issues: Lack of Training/Awareness and Safeguards: Training/Awareness Activities: If staff had received the appropriate training and awareness about the AI system would they have been able to explain the decision made by it? Would they have responded to this request for an explanation from the applicant in a different way? 

Identifying the Risk Level and making a Decision 

As the game unfolds, at different points it is the role of the Risk Analyst to assess the level of risk that the Scenario presents based on the Issues and Safeguards that have been selected, deciding whether it poses a high, medium or low risk to the school. Turing Trials deliberately does not specify what defines each level of risk, as this will differ between schools and the groups that are playing, but you may want to consider what would impact your Risk Level decisions (does it matter that only one individual has asked for an explanation that the school could not provide, or does the type of personal data that might have been included on the CV change things?). At the end of the game, The Narrator and Decision Maker will need to decide, on behalf of the school, whether they would accept the Risk Level of this Scenario with the Issues highlighted and Safeguards put in place. What decision do you think you would make and why? 

What else do schools need to consider and how else can 9ine help?

AI can bring many opportunities to schools, including saving them time on routine tasks and reducing the amount of data and information that needs to be manually reviewed, so that they can focus their time on other important tasks. But, to realise these opportunities, schools need to ensure that they have the appropriate safeguards in place and make sure that AI is used safely, securely and compliantly. At 9ine we have a number of solutions that can support you; these include: 

  • Vendor Management: We’ve discussed the importance of vetting vendors for compliance when using AI, particularly to make sure that the AI system has the functionality to make the decisions it takes explainable. This vetting takes time and effort, which is where Vendor Management, 9ine’s centralised system to assess and monitor the compliance of all your EdTech vendors, supports you. This intelligence saves schools hundreds of hours of manual review and helps ensure that you’re only using EdTech that meets required standards, or that the safeguards and mitigations schools need to put in place are highlighted. Vendor Management lets you easily identify risks and take action, whether that means engaging a vendor for improvements (including making sure that its workings can be explained) or configuring the tool for safety. Contact us if you would like to find out more. 
  • Academy LMS: We’ve highlighted the importance of having the appropriate Training and Awareness Activities on AI at your school, so that staff members can explain the decisions that AI systems make and respond to requests for explanations. If you think your school needs to improve its AI literacy, 9ine’s on-demand training and certification platform enables schools to enrol individual staff members or entire groups in comprehensive training courses, modules, and assessments, featuring in-built quizzes for knowledge checks. Our AI Pathway is your school's learning partner for AI ethics and governance. With over 20 differentiated course levels you can enrol all staff in an introductory course to AI, then, for those staff with greater responsibility, enrol them in Intermediate and Advanced courses. Schools can purchase courses on a per-person, per-course basis, and we are currently offering free trials for up to three members of a school’s leadership team, so contact us if you would like to take advantage of this, or have any questions on Academy LMS. 