
KCSIE 2026 (Draft) Has Been Published – New AI Safeguarding Obligations Look Likely.


KCSIE has been edging towards “technology-first safeguarding” for years, but the KCSIE 2026 draft for consultation (12 February 2026) makes the direction of travel unmistakable: AI is no longer a niche add-on to online safety; it is being written directly into the safeguarding vocabulary and into the expectations for policy and practice.

If KCSIE 2025 was the moment generative AI officially entered the conversation (via the DfE’s Generative AI: Product Safety Expectations and the link to filtering and monitoring), KCSIE 2026 is the moment the guidance starts describing AI-enabled harms as a mainstream safeguarding reality.

Below I’ll break down what’s changed in KCSIE 2026 (draft) in relation to AI and technology, the practical impact on schools, and what you should be doing now to be ready for September 2026.

The biggest shift: AI-generated sexual imagery is now explicitly in scope (deepfakes, “self-generated” content)

KCSIE 2026 (draft) re-frames language around image-based abuse. In the summary of changes, the DfE states it has replaced prior wording (e.g., “indecent”, “nude, semi-nude” and related phrasing) with a clearer framing: “self-generated intimate images and/or videos including those generated using AI e.g. deepfakes”.

This matters because it closes a grey area that many schools have been stuck in:

  • “If it’s AI-generated, is it still a safeguarding issue?”
  • “If it’s ‘fake’, is it harmful?”
  • “If a pupil didn’t take the original photo, does it count as ‘self-generated’?”

KCSIE 2026’s draft language doesn’t dance around it. Deepfakes and AI-generated intimate imagery are treated as part of the online safeguarding landscape, including in the examples used to describe online conduct risks.

Impact on schools

You should assume that, in inspection and in serious incident reviews, schools will increasingly be expected to evidence that they can:

  • recognise AI-enabled sexual harassment/violence behaviours early,
  • respond consistently (including logging, triage, escalation, and referral),
  • support victims effectively,
  • and manage “peer-on-peer” dynamics where AI tools are used to humiliate, coerce, or blackmail.

Online safety’s “4Cs” now directly reference generative AI interactions

KCSIE 2026 keeps the familiar “4Cs” framing (content, contact, conduct, commerce), but the draft now explicitly includes “generative AI applications that simulate [harmful online interaction]” under contact risk.

That one line is deceptively important. It signals that:

  • AI chat experiences (including “companions”, roleplay bots, anonymous chat-style tools, and AI that imitates a person) are being treated as a contact safeguarding risk category — not just “content moderation”.
  • schools need to think beyond blocked keywords and websites, towards interaction risk, grooming dynamics, and manipulation.

Impact on schools

Filtering and monitoring still matter (a lot). But your risk assessment and controls now need to address AI as an interactive environment, not just a website.

That tends to drive three practical changes in schools:

  1. Risk assessments become app-specific and feature-specific (e.g., “Does this tool allow open chat? Does it store conversations? Can students share images? Can it be used anonymously?”).
  2. Monitoring becomes behaviour-led, not only category-led (patterns of use, escalation routes, safeguarding signals).
  3. Staff training moves from “awareness” to “recognition + response”, because AI-related harm often presents as changes in behaviour, social dynamics, coercion, or reputational harm rather than a single “blocked page” event.

KCSIE 2026 adds clearer signposting to “Generative AI in education” guidance and staff training resources

KCSIE 2025 already pointed schools to the DfE’s AI product safety expectations as part of filtering and monitoring considerations.

KCSIE 2026 (draft) keeps that, but it also adds additional AI-specific signposting:

  • Guidance on safety considerations and legal responsibilities if schools choose to use Generative AI (teacher-facing or pupil-facing) is referenced directly.
  • The draft also references DfE-partnered resources to help staff use AI safely and effectively, including a module that covers safeguarding, ethics, data protection, and IP risk.

Impact on schools

This is the DfE effectively saying “if you’re using AI, you’re expected to understand the safeguarding and legal implications, and you’re expected to train your staff accordingly.”

This aligns with the direction I highlighted in my KCSIE 2025 analysis: compliance is not just a technical checkbox; it’s people, policies, and processes in an AI-powered world.

Filtering, monitoring, and cyber security: more explicit review, assurance, and record-keeping expectations

KCSIE 2025 strengthened expectations for appropriate filtering and monitoring, referencing the DfE standards, roles/responsibilities, and annual review.

KCSIE 2026 (draft) reinforces and tightens the operational tone in a few ways:

  • It explicitly expects reviews at least once every academic year, including checks across devices/locations, and states a record should be kept of these checks.
  • It maintains the link to Generative AI: product safety expectations in the filtering/monitoring section.
  • It emphasises cyber security measures aligned to the Cyber security standards for schools and colleges, framing them as part of resilience, breach prevention, and safeguarding risk mitigation.

Impact on schools

Expect increasing scrutiny on whether your school can evidence:

  • named roles and governance for filtering/monitoring,
  • the risk assessment rationale (including Prevent-linked risk assessment),
  • review cadence and documented checks,
  • how alerts are triaged into safeguarding systems (and by whom),
  • and how cyber security controls reduce safeguarding risk (not just IT risk).

In other words: your safeguarding system and your IT controls need to look joined up, not parallel.

Mobile phones: the draft pushes towards “phone-free by default”

This is not “AI” in isolation, but it’s absolutely part of the technology safeguarding landscape. KCSIE 2026 (draft) states that schools should be mobile phone-free environments by default, and anything else should be the exception.

Impact on schools

If your mobile phone approach is informal, inconsistent, or “depends on the year group and the teacher”, it’s a risk. A clear, enforced approach reduces:

  • unmanaged access to AI tools during the day,
  • image creation/sharing,
  • anonymous messaging,
  • harassment dynamics,
  • and “evidence gaps” when incidents occur.

What schools should be doing now to prepare for KCSIE 2026

KCSIE 2026 is still in draft, but the trend is clear enough that a dilution of the draft requirements seems unlikely. Here’s the practical prep plan.

Update your safeguarding risk model for AI-enabled harms

Add explicit scenarios into your safeguarding/online safety risk assessment, including:

  • AI-generated sexual imagery (deepfakes, coercion, extortion),
  • AI “contact” risks (simulated relationships, manipulation, grooming dynamics),
  • misinformation/disinformation impacts on pupils and communities (now explicitly listed in online safety content risks).

Review (and tighten) your policies, not just “online safety”

At minimum, you should be checking alignment across:

  • child protection policy (online safety, filtering/monitoring, escalation),
  • behaviour and anti-bullying (AI-enabled harassment and image-based abuse),
  • and acceptable use (staff and pupils).

Treat AI tools like safeguarding-relevant systems (because KCSIE now does)

For every AI-enabled tool in your ecosystem (teaching, admin, wellbeing, SEND, analytics), document:

  • whether it is teacher-facing, pupil-facing, or both,
  • key features (chat, image generation, file upload, sharing, memory/logging),
  • safeguarding controls (moderation, age gates, admin controls, reporting),
  • monitoring/logging access (who can review, how often, what triggers escalation),
  • and your mitigations where the vendor controls are weak.
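If your tool register lives in a spreadsheet or asset system, the same logic can be expressed programmatically. The sketch below is purely illustrative: the record fields, feature labels, and the mapping of risky features to expected controls are my own assumptions, not DfE or KCSIE terminology, and the tool name is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One illustrative entry in a school's AI tool register."""
    name: str
    audience: str                                     # "teacher", "pupil", or "both"
    features: set[str] = field(default_factory=set)   # e.g. {"chat", "image_generation"}
    controls: set[str] = field(default_factory=set)   # vendor controls, e.g. {"moderation"}
    mitigations: list[str] = field(default_factory=list)  # local mitigations where controls are weak

# Assumed mapping of risky features to the control we would expect to see.
RISKY_FEATURES = {
    "chat": "moderation",
    "image_generation": "moderation",
    "anonymous_use": "age_gate",
}

def gaps(tool: AIToolRecord) -> list[str]:
    """Return risky features that lack both the expected control and any local mitigation."""
    return [
        feature
        for feature, required in RISKY_FEATURES.items()
        if feature in tool.features
        and required not in tool.controls
        and not tool.mitigations
    ]

tool = AIToolRecord(
    name="ExampleChatTutor",   # hypothetical pupil-facing tool
    audience="pupil",
    features={"chat", "image_generation"},
    controls={"age_gate"},
)
print(gaps(tool))  # → ['chat', 'image_generation']
```

A simple check like this turns the register from a static document into something that flags tools needing attention whenever a feature or control changes.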

This is exactly the kind of “human in the loop” operational approach schools were already being pushed towards in 2025.

Evidence your filtering/monitoring reviews properly

KCSIE is increasingly signalling that “we do filtering” is not enough. Build a simple evidence pack:

  • annual review date, scope, and outcomes,
  • device/location test checks,
  • changes made (and why),
  • named roles/responsibilities,
  • escalation routes from alerts into safeguarding,
  • and any supplier discussions/actions taken.
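One way to keep that evidence pack honest is a completeness check on each review record. This is a minimal sketch under my own assumptions: the field names and the sample record are illustrative, not a DfE-mandated schema.

```python
from datetime import date

# Assumed set of fields a review record should carry to count as evidenced.
REQUIRED_FIELDS = {
    "review_date", "scope", "outcomes", "device_checks",
    "changes_made", "named_roles", "escalation_routes",
}

def missing_evidence(record: dict) -> set[str]:
    """Return required fields that are absent or empty in a review record."""
    return {f for f in REQUIRED_FIELDS if not record.get(f)}

review = {
    "review_date": date(2026, 9, 1).isoformat(),
    "scope": "All managed devices, classroom and library locations",
    "outcomes": "Two category gaps found and closed",
    "device_checks": ["staff laptop", "pupil iPad", "library desktop"],
    "named_roles": {"DSL": "…", "IT lead": "…"},  # placeholder names
}
print(sorted(missing_evidence(review)))  # → ['changes_made', 'escalation_routes']
```

Run against each annual review before it is filed, a check like this surfaces the gaps an inspector would spot later.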

Train staff for recognition and response (not just awareness)

KCSIE 2026 (draft) points explicitly to AI guidance and training resources. So your staff programme should include:

  • what AI-enabled safeguarding harm looks like in practice,
  • how to respond (reporting routes, confidentiality, evidence preservation),
  • how to teach safe and ethical use (age-appropriate),
  • and how to manage incidents involving self-generated intimate imagery, including AI-generated imagery.

Where 9ine fits (if you want the simplest operational route)

As I said in the 2025 analysis: schools can’t “ban AI” their way to compliance; the realistic approach is to standardise, evidence, and operationalise.

If you want to make KCSIE 2026 readiness a manageable programme rather than a last-minute policy scramble, book a meeting with one of our team. We can show you how the tools we have developed over many years enable you, quickly, efficiently, and cost-effectively, to meet the obligations of the current and the near-future KCSIE requirements.
