Outlook: AI in Safeguarding – What to Expect in KCSIE 2025

KCSIE (Keeping Children Safe in Education) is due to be published soon and, according to sources, is expected to undergo a significant update, potentially even a complete rewrite. In this article, we explore the changes we forecast for KCSIE 2025 in relation to artificial intelligence. This forecast is just that: informed speculation, drawing on recent updates from the Department for Education (DfE) and Ofsted to guide our analysis.

  • Explicit Reference to AI in Online Safety Sections: KCSIE 2025 may, for the first time, mention “artificial intelligence” or “generative AI” by name as an aspect of online safety. For example, in the section discussing the “breadth of issues within online safety”, we might see AI-generated content or deepfakes listed alongside existing concerns like fake news, sexual content and cyberbullying. This would formally put AI on the radar for all schools, ensuring no one overlooks it, and would echo the DfE’s advice that safeguarding policies should consider AI. A line such as the following could be added: “Schools should ensure their child protection and online safety policies reflect any use of emerging technologies (for instance, generative AI) by staff or pupils, with appropriate risk assessments and controls.”
  • Updated Guidance on Acceptable Use and Supervision: We expect KCSIE 2025 to incorporate guidance similar to the DfE’s checklist for AI use. It may state that if schools allow students to use AI for learning, this use must be safe and supervised. We might see an extension of the advice on mobile phones and filtering to AI – for instance, a recommendation that “schools should consider age-appropriate restrictions or supervision for the use of AI chatbots or content generators, similar to other online tools”. KCSIE might advise that pupils only use AI in school under staff oversight and on school-approved platforms (paralleling the rules many schools already have for internet use in general). It could also prompt schools to explicitly teach and enforce that certain uses of AI (such as cheating or malicious acts) are unacceptable, perhaps requiring these to be covered in behaviour policies or IT agreements. Essentially, new text could formalise that use of AI falls under the school’s acceptable use policy and misconduct procedures, so misuse can be sanctioned like any other misuse of technology. 9ine’s Vendor Management platform gives schools a traffic-light view of AI risks and issues, helping them determine the safeguarding risks of the AI tools they use.
  • Stronger Emphasis on Filtering and Monitoring for AI: Building on the 2023 clarification of filtering duties, KCSIE 2025 may specify that filtering systems should account for AI content. For example, it might instruct that school web filters should block access to AI tools that lack appropriate safety controls or that violate age restrictions. It might also mention the need to adjust monitoring software to detect whether pupils are, say, copying large blocks of AI-generated text or visiting known AI content sites during school time (much as such systems flag certain keywords now); a minimal sketch of this kind of rule follows after this list.
  • Safeguarding Training to Include AI Awareness: We might see KCSIE advise that designated safeguarding leads (DSLs) and other staff be trained to understand AI-related risks as part of their regular safeguarding training. For instance, where KCSIE lists what staff training should include (“…including online safety, which among other things includes understanding filtering and monitoring”), it could add “and emerging online challenges such as those posed by artificial intelligence (e.g., deepfakes, AI-generated harmful content)”. This would ensure schools incorporate AI scenarios into their safeguarding simulations and discussions. DSLs may be expected to stay abreast of AI trends (perhaps through the DfE’s upcoming training package) and to brief their colleagues accordingly. Given that a significant number of teachers feel they lack knowledge about AI, making this a training point will be important to build confidence and competence in handling the issues that arise. 9ine have built a tool, Application Library, specifically for this purpose: a single point of support where teachers can log in to understand the types of AI used in EdTech, along with the safeguarding guidance to follow to keep its use safe. Additionally, 9ine’s Academy LMS Online AI Training Pathway already includes specialist AI training for DSLs. Get free trial access to all 20+ training courses here to determine its suitability for your school.
  • Education and Digital Literacy: KCSIE might strengthen wording on the curriculum side of safeguarding. Currently it says children should be taught about safeguarding, including online safety. In 2025, it could explicitly mention AI as a topic for digital literacy classes or pastoral sessions. For example: “Schools should equip pupils with the knowledge to critically evaluate online content and tools, including awareness of AI-generated content and its potential risks.” KCSIE might also encourage discussions with pupils about responsible use of AI and about not believing everything an AI produces – in other words, building the critical thinking to recognise AI hallucinations or deepfake media as part of safeguarding one’s own mind against manipulation.
  • Data Protection and Children’s Privacy in AI: While KCSIE is not primarily a data protection document, it intersects with privacy where safeguarding is concerned (e.g., the handling of child protection files and information-sharing principles). KCSIE 2025 might nod to the importance of protecting children’s data in the context of AI. It could state that any AI tools used by schools should comply with data protection standards and not put personal information at risk, reinforcing the recent DfE guidance on not feeding personal data into AI without safeguards. This might appear in the section on safeguarding policies or in the responsibilities of governors (who oversee GDPR compliance too). By highlighting this, KCSIE would alert schools that letting an AI process sensitive student data could itself be a safeguarding concern if mishandled. Without a tool like 9ine’s Vendor Management, schools may struggle to understand the privacy and AI risks associated with the EdTech they are currently using, or planning to use.
  • New Annex or Examples for AI-Related Scenarios: KCSIE often includes or updates Annex B (“Further information”) with specific issues. We might see new content there about “online abuse and emerging technologies”, or even a dedicated entry on “artificial intelligence”. Such an entry could mention risks like deepfake material (which could be used for child-on-child abuse or by offenders), AI grooming bots (the possibility that perpetrators might use AI to pose as children online, or even that an AI chatbot itself might produce grooming-like interactions), or fraudulent schemes targeting schools and students via AI. It would likely be a cautionary paragraph noting that while AI can be beneficial, it can also be leveraged for harm, so vigilance is required. The Ofsted inspection guidance already states succinctly that “AI can pose new and unique safeguarding risks”, so translating that into KCSIE language would not be surprising.
  • Governance and Leadership Accountability: KCSIE places ultimate responsibility for safeguarding on school governing bodies and proprietors. In 2025, those bodies might be explicitly tasked with oversight of AI use within their schools. For example, KCSIE might instruct governors to ask how AI is being used and what safeguards are in place. This aligns with Ofsted’s expectation that leaders justify their decisions around AI. So we could see KCSIE suggest that, in annual safeguarding reports to governors, the DSL or headteacher include updates on new technologies such as AI and how the school is managing them. This way, AI governance is built into the school’s accountability structure.
  • Maintaining Human Oversight and Professional Judgement: A theme from both Ofsted’s and the DfE’s recent guidance is that AI should not replace human professionals or the relationships and judgement they bring. KCSIE 2025 might echo this by cautioning that safeguarding decisions and teaching should not be left solely to automated processes. For instance, if a school uses an AI system to flag potentially at-risk students (through attendance or behaviour data), KCSIE could stress that these flags must be reviewed by qualified staff, who then act in accordance with safeguarding protocols. Essentially, any AI alerts or suggestions would be an aid to action, not a determinant of it. This principle maintains that technology supplements, but never substitutes for, human safeguarding responsibility. We saw in Ofsted’s provider guidance an expectation that staff can “overrule AI suggestions – decisions should be made by the user of AI, not the technology”. Embedding such a principle in KCSIE would reinforce, for example, that a teacher should always verify AI-produced content before using it in class, and that a DSL should always investigate an AI-generated report before deciding there is a real concern.
  • Collaboration with Parents in the AI Context: Building on existing points about parent engagement, KCSIE 2025 could suggest that schools communicate with parents specifically about AI. Many parents may be less informed about generative AI than their children or their children’s teachers. Schools might be advised to include guidance in parent newsletters about home use of AI (such as encouraging child-safe AI platforms where they exist, supervising children’s use of voice assistants or chatbots, and discussing the credibility of information found via AI). By formalising this, KCSIE would prompt schools to proactively bring parents on board with safe AI practices, creating a consistent message between school and home.
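
To make the filtering and monitoring point above concrete, below is a minimal sketch of the kind of rule a monitoring system might apply: flagging visits to generative-AI services for a member of staff to review, in keeping with the human-oversight principle discussed above. The domain list and the “time user domain” log format are illustrative assumptions for this sketch, not any real product’s configuration; commercial filtering and monitoring tools work through vendor-managed category lists rather than hand-rolled scripts.

```python
# Minimal sketch: flag visits to known generative-AI domains in a web-proxy log.
# The domain list and log format are illustrative assumptions, not a real product's API.

AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "character.ai"}  # example denylist

def flag_ai_visits(log_lines):
    """Yield (timestamp, user, domain) for each visit to a listed AI service."""
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        timestamp, user, domain = parts[:3]  # assumed "time user domain" format
        if domain in AI_DOMAINS:
            yield timestamp, user, domain

# Flags are an aid to action, not a determinant of it: a member of staff
# reviews each one before any safeguarding response is decided.
sample_log = [
    "09:14 pupil42 chat.openai.com",
    "09:15 pupil42 bbc.co.uk",
]
for visit in flag_ai_visits(sample_log):
    print("Review with DSL:", visit)
```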

In forecasting these changes, we believe the aim will be to integrate AI considerations into the existing safeguarding framework rather than to craft an entirely new one. The fundamental safeguarding principles remain the same: ensuring children’s well-being, preventing harm, and responding swiftly and appropriately to concerns. AI is essentially a new context or tool in which those principles must be applied. KCSIE 2025 will likely underscore that schools should neither ban AI outright nor embrace it naively, but adopt a balanced approach: leveraging AI’s benefits for education while exercising rigorous oversight and caution to manage its risks. Schools will be expected to evidence that balance.

In this article, I’ve referenced several 9ine products: Vendor Management, Application Library, and the Academy LMS AI Training Pathway. These tools are designed to help schools understand not only the AI risks associated with EdTech, but also broader risks related to privacy, cybersecurity, and safeguarding. They already save schools hundreds of hours each year, offering everything from independent assessments of EdTech and template DPIAs to an online digital coach within the Application Library that supports teachers directly. Should KCSIE 2025 adopt the changes we’ve forecast, schools using our tools will have a significant head start in meeting their new responsibilities.

To find out more about how we can help prepare your school for KCSIE 2025, get in touch.
