KCSIE 2025: What We Got Right (and What We Didn’t) About AI and Safeguarding

When we published our forecast on how KCSIE 2025 might address Artificial Intelligence, we speculated that the Department for Education was poised to make explicit references to AI within the statutory safeguarding framework for the first time. With the draft 2025 guidance now published, how close were we?

This article breaks down what we predicted, what actually made it into KCSIE 2025, and how we interpret what it means in practice. You can also join, or watch on demand, our webinar at 10.30am (BST) on 14th July 2025, where we discuss in more detail what was published and how schools can meet those requirements.

For those outside the UK: Keeping Children Safe in Education (KCSIE) is statutory guidance that schools in England must follow, with legal consequences for non-compliance. While not legally binding internationally, its relevance extends beyond the UK because of the influence it holds over how accreditors, inspectors, and regulators in other countries may begin to view AI in the context of safeguarding and child protection. The UK’s approach is likely to be replicated or referenced in other jurisdictions, so it is worth noting even where it does not directly apply.

Prediction: AI Would Be Explicitly Named in the Context of Online Safety

Result: Accurate ✅

We anticipated that AI and generative AI would be explicitly mentioned in the “online safety” framework. In the published version, paragraph 143 makes this clear:

“The Department has published Generative AI: product safety expectations to support schools to use generative artificial intelligence safely, and explains how filtering and monitoring requirements apply to the use of generative AI in education.”

This inclusion is significant: it signals that AI-generated content and tools now fall clearly within the safeguarding scope for online safety. Specifically, governing bodies and proprietors are expected to review the Department’s Generative AI: product safety expectations, understand what is required in terms of filtering and monitoring within generative AI products, systems and tools, and discuss with IT leadership how their school will implement the filtering and monitoring functionality that the Department expects to see in educational technology.

In effect, KCSIE is telling schools: ensure you understand the expectations we have for EdTech products; ensure you can identify and apply the filtering and monitoring capabilities we require; and integrate insights from those tools into your internal safeguarding systems, processes, and procedures to create a safer EdTech ecosystem and enable early intervention.

However, and this is a key point, KCSIE stops short of placing full responsibility on schools to guarantee that EdTech products with AI include all the expected features. This is because other guidance from the Department already outlines the responsibility to understand and assess the risks of AI. KCSIE isn’t duplicating that requirement, but building upon it. It assumes that schools have already assessed the safeguarding risks of their EdTech in terms of filtering and monitoring capability, based on previously issued Departmental guidance, complemented by guidance from the ICO and Ofsted. It then expects schools to act accordingly, using products that meet those standards.

Prediction: Stronger Emphasis on Filtering and Monitoring for AI Tools

Result: Accurate ✅

We predicted that schools would be required to consider how their filtering and monitoring systems manage AI tools. This is now clarified across Paragraphs 140–143, which direct schools to factor AI into their risk assessments. Viewed specifically through the lens of AI, these paragraphs essentially advise schools to:

  • Understand the relationship between generative AI and filtering and monitoring, as defined in the Department’s Generative AI: product safety expectations;
  • Assess the requirement for filtering and monitoring of AI tools in their school through a risk assessment process, part of which stems from the Prevent duty;
  • Evaluate the need for filtering and monitoring of AI as a consequence of the Department’s Filtering and Monitoring Standards.

Now, to digest the implications of this, allow us a brief moment of product placement. Our Vendor Management platform gives schools a traffic light status for EdTech apps and platforms, offering independent assessments of data protection, cybersecurity, AI, and safeguarding risks, as well as compliance requirements. Subscribing is one of the most effective ways to meet KCSIE’s expectations: it gives you the intelligence to understand not only the risks of AI-related harm, but also known safeguarding, privacy, and cyber risks, and it saves you hundreds of hours of work. Start a trial and see how we break down AI risks, from a clear traffic light summary to the detailed, independent assessments that underpin each rating.

And if Ofsted ever asks how your school understands and manages AI risks? Just show them the Vendor Dashboard within your 9ine platform: it provides complete transparency and clear evidence of accountability. Book a meeting with one of the team to learn more.

Prediction: New Training Requirements for DSLs to Understand AI Risks

Result: Not Yet ❌

We anticipated that safeguarding training would formally include AI awareness for Designated Safeguarding Leads (DSLs) and general staff. While AI is referenced in relation to filtering and monitoring, there is currently no specific requirement for DSLs to be trained in AI-related harms such as deepfakes or AI-generated grooming. However, the way AI is mentioned suggests that when schools determine appropriate training for all staff, as required by KCSIE and the Department’s Filtering and Monitoring Standards, the safeguarding risks posed by AI should be incorporated into that training as standard practice.

When considering how to approach this in your school, you might choose to use 9ine’s Academy LMS – AI Pathway, which includes over twenty courses on integrating AI into education, along with specialist modules on AI and safeguarding. You can trial all the courses for free to assess whether they’re suitable for your school, and to work out whether our solution closes a safeguarding gap your school may have. To learn more, simply book a meeting with us.

Prediction: Behaviour and Acceptable Use Policies Would Be Required to Include AI

Result: Not Mentioned Directly ❌

We predicted KCSIE would require schools to update acceptable use and behaviour policies to address AI use. While there’s extensive mention of online safety and filtering, there’s no explicit reference to AI in the context of acceptable use, behaviour, or misconduct policies.

This means many schools may not yet be prompted to update student or staff acceptable use policies in response to AI features embedded in EdTech. If your school did face an issue in this area, you may be able to fall back on your existing policies, or rely on your privacy policy.

Prediction: New Annex or Dedicated Section on AI-Related Scenarios

Result: Not Included ❌ 

We hoped to see a new section, possibly in Annex B, covering AI-specific safeguarding scenarios such as deepfakes, grooming bots, or AI-generated manipulation. This did not materialise in the current draft. The most notable reference remains the inclusion of AI in filtering/monitoring systems, not in wider safeguarding case types.

Prediction: Digital Literacy and AI Awareness in the Curriculum

Result: Partially Confirmed ✅

KCSIE 2025 (para 128) mentions the upcoming publication of updated RSHE guidance, expected to include AI literacy. While AI isn’t yet woven into the existing safeguarding curriculum requirements, the direction of travel is clear—schools will be expected to equip students to understand and critically assess online information, including AI-generated content.

Note: we plan to adapt our Academy LMS training to be suitable for pupils. There’s also the free Turing Trials game, available for download, which can be played with pupils. Book a meeting on any of our products or services and we will send you the limited edition physical card deck game.

Prediction: Formal Responsibilities for Governing Bodies Around AI Governance

Result: Jury’s Out ✅/❌

Despite Ofsted’s expectations that school leaders should be able to justify the use of AI, KCSIE 2025 does not impose specific duties on governors or proprietors to directly oversee AI tools in use. However, an inference can be drawn from the statement in Paragraph 143:

“Governing bodies and proprietors should review the [filtering and monitoring] standards and discuss with IT staff and service providers what more needs to be done to support schools and colleges in meeting this standard.”

These filtering and monitoring standards are directly linked to the Generative AI: product safety expectations, which state that understanding those expectations will help schools comply with KCSIE and the Department’s filtering and monitoring requirements. So, in a roundabout way, there is an expectation that schools can demonstrate to governors that all AI risks, specifically those related to filtering and monitoring, have been assessed and reported on.

What This Means for Schools

While the most pressing infrastructure-related AI risks, such as access to generative tools, have made their way into KCSIE, broader safeguarding, curriculum, governance, and training frameworks are still catching up.

Schools using 9ine’s Application Library, Vendor Management, and Academy LMS AI Pathway already benefit from an ecosystem built for precisely these emerging gaps. Whether it's understanding the AI functions embedded in EdTech, ensuring vendors are compliant with data protection laws, or delivering AI literacy training to DSLs and staff, our tools are designed to help schools stay ahead of the curve.

And as KCSIE inevitably evolves to keep pace with AI’s role in education, we’ll continue to ensure your safeguarding, compliance, and governance frameworks do too. For schools using the 9ine Platform, that is…

Want to learn more?

Book a demo of the 9ine platform or get in touch with the 9ine team today.
