KCSIE 2025: What We Got Right (and What We Didn’t) About AI and Safeguarding
9ine
February 23, 2026
KCSIE has been edging towards "technology-first safeguarding" for years, but the KCSIE 2026 draft for consultation (12 February 2026) makes the direction of travel unmistakable: AI is no longer a niche add-on to online safety; it is being written directly into the safeguarding vocabulary and into the expectations for policy and practice.
If KCSIE 2025 was the moment generative AI officially entered the conversation (via the DfE’s Generative AI: Product Safety Expectations and the link to filtering and monitoring), KCSIE 2026 is the moment the guidance starts describing AI-enabled harms as a mainstream safeguarding reality.
Below I’ll break down what’s changed in KCSIE 2026 (draft) in relation to AI and technology, the practical impact on schools, and what you should be doing now to be ready for September 2026.
KCSIE 2026 (draft) re-frames language around image-based abuse. In the summary of changes, the DfE states it has replaced prior wording (e.g., “indecent”, “nude, semi-nude” and related phrasing) with a clearer framing: “self-generated intimate images and/or videos including those generated using AI e.g. deepfakes”.
This matters because it closes a grey area that many schools have been stuck in:
KCSIE 2026’s draft language doesn’t dance around it. Deepfakes and AI-generated intimate imagery are treated as part of the online safeguarding landscape, including in the examples used to describe online conduct risks.
You should assume that, in inspection and in serious incident reviews, schools will increasingly be expected to evidence that they can:
KCSIE 2026 keeps the familiar “4Cs” framing (content, contact, conduct, commerce), but the draft now explicitly includes “generative AI applications that simulate [harmful online interaction]” under contact risk.
That one line is deceptively important. It signals that:
Filtering and monitoring still matter (a lot). But your risk assessment and controls now need to address AI as an interactive environment, not just a website.
That tends to drive three practical changes in schools:
KCSIE 2025 already pointed schools to the DfE’s AI product safety expectations as part of filtering and monitoring considerations.
KCSIE 2026 (draft) keeps that, but it also adds additional AI-specific signposting:
This is the DfE effectively saying: "If you're using AI, you're expected to understand the safeguarding and legal implications, and you're expected to train your staff accordingly."
This aligns with the direction I highlighted in my KCSIE 2025 analysis: compliance is not just a technical checkbox; it's people, policies and processes in an AI-powered world.
KCSIE 2025 strengthened expectations for appropriate filtering and monitoring, referencing the DfE standards, roles and responsibilities, and annual review.
KCSIE 2026 (draft) reinforces and tightens the operational tone in a few ways:
Expect increasing scrutiny on whether your school can evidence:
In other words: your safeguarding system and your IT controls need to look joined up, not parallel.
This is not “AI” in isolation, but it’s absolutely part of the technology safeguarding landscape. KCSIE 2026 (draft) states that schools should be mobile phone-free environments by default, and anything else should be the exception.
If your mobile phone approach is informal, inconsistent, or "depends on the year group and the teacher", it's a risk. A clear, consistently enforced approach reduces:
KCSIE 2026 is still in draft, but the trend is clear enough that a dilution of the draft requirements seems unlikely. Here’s the practical prep plan.
Add explicit scenarios into your safeguarding/online safety risk assessment, including:
At minimum, you should be checking alignment across:
This is exactly the kind of “human in the loop” operational approach schools were already being pushed towards in 2025.
KCSIE is increasingly signalling that “we do filtering” is not enough. Build a simple evidence pack:
KCSIE 2026 (draft) points explicitly to AI guidance and training resources. So your staff programme should include:
As I said in the 2025 analysis: schools can't "ban AI" their way to compliance; the realistic approach is to standardise, evidence, and operationalise.
If you want to make KCSIE 2026 readiness a manageable programme rather than a last-minute policy scramble, the winning pattern is:
Your call to action today: book a meeting with one of our team, and we'll show you how the tools we have developed over many years enable you to meet the obligations of current and near-future KCSIE requirements quickly, efficiently and cost-effectively.