As artificial intelligence (AI) is rapidly integrated into the operations and classrooms of schools across England, leaders face a unique convergence of safeguarding and data protection challenges. The 2025 updates to Keeping Children Safe in Education (KCSIE) explicitly address AI, signalling that generative AI and related technologies now fall within the scope of statutory online safety. At the same time, the new Data (Use and Access) Act (DUAA) has strengthened obligations around the management of children’s data and the use of online services, placing additional responsibilities on schools, their leadership, and data protection officers. The imperative is clear: schools must update people, policies, and processes to meet new standards for AI safety, vendor due diligence, and robust data privacy, ensuring pupil safety and institutional compliance in a fast-evolving landscape.
The regulatory landscape schools in England must navigate has shifted significantly. KCSIE 2025 makes explicit, for the first time, the safeguarding expectations relating to AI in education, referencing the Department for Education’s “Generative AI: Product Safety Expectations”. Schools are expected not only to apply appropriate filtering and monitoring to AI tools, but also to ensure those systems feed into broader safeguarding policies to support early intervention. Risk assessments of AI tools, such as chatbots and content generators, are now essential, and schools must engage with IT staff and governing bodies to ensure these safeguards are in place.
Simultaneously, the DUAA updates data protection law with a particular emphasis on children’s data security and responsible technology use. This includes direct references to the Age Appropriate Design Code (Children’s Code), higher protection standards for children, and new safeguards, including human oversight, wherever significant decisions about individuals are made solely through automated processing such as AI. Schools must ensure that both EdTech vendors and internal processes account for these heightened protections, from the legal basis for processing to the transparency of AI-driven decisions.
While these regulations set out the expectations, practical challenges abound for schools:
EdTech Vendor Management: The adoption of AI-powered educational tools, often sourced from international vendors, introduces complexity in ensuring compliance with the DUAA, the Age Appropriate Design Code, and KCSIE. Not all EdTech products meet the standards set out by the DfE and the ICO, and maintaining consolidated oversight of multiple vendors remains a significant challenge. Schools must scrutinise their EdTech landscape thoroughly, ensuring that risk assessments, contractual safeguards, and proactive monitoring are embedded in procurement and ongoing management.
International Data Transfers and Legal Risks: Many EdTech platforms transfer or process pupil data outside the UK. This cross-border activity raises the risk profile and requires a clear understanding of contractual agreements, data storage locations, and vendor compliance. The DUAA offers some relief in administering subject access requests (SARs), but places greater emphasis on schools’ responsibility for vendor and third-party oversight.
AI Transparency and Human Oversight: New requirements make clear that meaningful human involvement must be present in decisions that significantly affect pupils’ or staff’s rights or opportunities. In practice, schools cannot sideline trained staff in favour of algorithmic “black box” outcomes; every AI-driven tool must be accompanied by processes for review, intervention, and appeal.
Building a Culture of Awareness: Perhaps most critically, the regulations now recognise that safeguarding in the context of AI isn’t achieved by policies alone. Training staff to understand AI risks such as bias, misinformation, and misuse is essential. KCSIE places clear expectations on schools to build the digital literacy of both staff and pupils, empowering them to act as “the human in the loop”, able to challenge, review, and appropriately supervise AI use.
The cumulative effect of these regulatory changes is clear: compliance and safeguarding are not one-off milestones but ongoing, active processes deeply integrated into the daily practices of schools in England. Schools now have a legal and ethical obligation to:
Systematically assess and monitor the AI features within their EdTech landscape, using robust frameworks that reflect the latest requirements in KCSIE and DUAA.
Actively manage contracts, due diligence, and vendor risk, placing data privacy and child protection at the heart of procurement and review.
Foster a culture of continual professional development where the responsibilities of digital safety, privacy, and AI oversight are shared across the entire school community, not just confined to specialists.
Delivering on these responsibilities requires more than guidance documents and checklists; it demands targeted, practical training for staff at all levels. The AI & Privacy Academy is designed to meet these needs, offering scenario-based learning and strategic insight tailored to the unique context of English schools. By developing the capacity to interpret regulatory changes, conduct real-world risk assessments, and manage vendor and data transfer challenges efficiently, participants move beyond reactive compliance to become active stewards of AI safety and data protection. This professional development is vital to creating a resilient, privacy-first school culture—empowering educators to safely leverage the benefits of AI while meeting their statutory obligations under KCSIE and the DUAA.
Prepare your school for the evolving landscape of AI and data protection. The next intake for the AI & Privacy Academy is now open. Equip your team to lead with confidence, support your pupils, and stay one step ahead of change.