In 2025, schools in England face a convergence of new safeguarding requirements and data protection laws as artificial intelligence (AI) becomes increasingly embedded in education. The latest update to Keeping Children Safe in Education (KCSIE 2025) explicitly addresses generative AI, signalling that AI tools and content now fall under the umbrella of online safety in schools. At the same time, the UK’s new Data (Use and Access) Act 2025 strengthens obligations around children’s data and online services.
For school leaders, safeguarding leads, IT directors, and data protection officers, the message is clear: you must update your people, policies, and processes to keep pupils safe and comply with the law in an AI-powered world. This blog post unpacks the implications of these developments and explains how 9ine’s comprehensive platform can help schools meet their statutory and legal obligations.
KCSIE 2025 (statutory guidance for schools in England) has, for the first time, made explicit reference to AI in the context of child safeguarding. In particular, paragraph 143 now directs schools to the Department for Education’s “Generative AI: Product Safety Expectations” – guidance which explains how filtering and monitoring requirements apply to the use of generative AI in education. In effect, KCSIE is telling schools to understand and implement the DfE’s expected safeguards for AI tools, ensuring that any AI-based educational technology (EdTech) used is appropriately filtered, monitored, and integrated into the school’s broader safeguarding systems.
This inclusion is significant. It means that AI-generated content and tools are firmly within scope of online safety duties. Governing bodies and proprietors are expected to review the DfE’s AI product safety guidelines, discuss with IT staff how to apply them, and make sure that insights from AI tools (e.g. flags from AI content filters) are fed into the school’s safeguarding processes for early intervention. Schools should perform risk assessments to determine what filtering and monitoring of AI is necessary (as part of their Prevent duty and existing filtering standards). In practice, this might involve ensuring that any AI chatbots or generative tools pupils use have content moderation enabled, that logs of AI interactions are reviewed for any signs of risk, and that staff remain “the human in the loop” to supervise AI use.
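For IT teams wondering what reviewing logs of AI interactions might look like day to day, the sketch below is a minimal, hypothetical illustration in Python. The log format, column names, and watchlist are invented for this example, and a dedicated filtering and monitoring product would do far more; the point is simply that AI interaction data should flow to a named human who decides what to escalate.

```python
# Hypothetical sketch: flag exported AI chat-log entries for human review.
# The log format, column names, and watchlist below are invented for
# illustration; a real filtering/monitoring system is far more capable.
import csv
from pathlib import Path

# Terms a school might ask staff to review; in practice this list would be
# maintained by the safeguarding lead, not hard-coded.
WATCHLIST = {"self-harm", "suicide", "meet up", "home address", "send a photo"}

def flag_entries(log_path: Path) -> list[dict]:
    """Return log rows whose prompt text contains a watchlist term."""
    flagged = []
    with log_path.open(newline="", encoding="utf-8") as f:
        # Expected columns (our assumption): timestamp, pupil_id, prompt
        for row in csv.DictReader(f):
            text = row.get("prompt", "").lower()
            if any(term in text for term in WATCHLIST):
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    for row in flag_entries(Path("ai_chat_log.csv")):
        # In practice this would feed the safeguarding team's case system,
        # and a member of staff would always make the final judgement.
        print(f"Review: pupil {row['pupil_id']} at {row['timestamp']}")
```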
The challenge? Many EdTech products with AI capabilities do not yet meet all of the DfE’s product safety expectations. There is also no easy way for a school to centrally consolidate the filtering and monitoring information from multiple AI tools and platforms. However, schools cannot ignore these obligations or ban AI altogether; the benefits of AI (from personalised learning to administrative efficiency) are real, but they must be balanced with robust safeguards. KCSIE stops short of making schools solely responsible for fixing gaps in vendors’ AI safety features, but other DfE and ICO guidance already requires schools to assess AI risks proactively. In other words, KCSIE assumes schools have begun evaluating the risks of AI in their EdTech and taken steps accordingly. The onus is now on each school to integrate AI risk management into its existing safeguarding framework, bridging any gaps between what AI tools provide and what pupils need for protection.
Digital literacy and AI awareness are also emerging as key themes. KCSIE 2025 notes that children should be taught how to keep themselves and others safe, including online and in the context of AI. Before schools can effectively teach pupils about safe and ethical AI use, they must first train their staff. Teachers and support staff need to understand both the general principles of using AI safely and the specific AI tools being adopted at their school. This includes awareness of risks like AI-generated misinformation, bias, or even AI-facilitated grooming attempts, so that educators can confidently guide pupils and remain vigilant. In short, meeting KCSIE’s expectations means building an informed human safeguard around the technology – your staff are that critical human layer.
Hand-in-hand with KCSIE’s AI guidance, the Data (Use and Access) Act 2025 (DUAA) updates UK data protection law (the UK GDPR and DPA 2018) with a strong emphasis on protecting children online. For any online service likely to be accessed by children (which includes most educational apps and platforms), the law now requires providers to account for children’s needs and vulnerabilities when handling personal data. This means EdTech vendors, and by extension the schools using such services, must ensure that their data collection and usage practices consider the best interests of children at different ages. For example, providers should think about how to protect and support child users, recognise that children have less understanding of data risks, and avoid one-size-fits-all approaches in favour of age-appropriate experiences.
The UK’s Age Appropriate Design Code (AADC, also known as the Children’s Code) is explicitly linked to the DUAA through an amendment to Article 25 of the UK GDPR. This amendment requires that “the controller must take into account the children’s higher protection matters” when providing information society services (such as EdTech). These higher protection matters are defined as “how children can best be protected and supported when using the services”.
The ICO, the UK GDPR regulator, has developed a statutory code, the Age Appropriate Design Code, on the basis that children require enhanced protection. The connection broadly works as follows (noting that I am not a lawyer, so some technicalities may be missed): because Article 25 of the UK GDPR has been updated to reflect the need for higher protection for children, and the regulator has already issued the AADC, the ICO can use the AADC as the benchmark for determining what constitutes “higher protection matters”. As a result, non-compliance with the AADC could amount to an infringement of the UK GDPR.
The DUAA also introduces or reinforces specific measures around AI and automated decisions. Notably, it requires “meaningful human involvement” in significant decisions made by AI. Any significant decision about an individual that is made solely by algorithms (for example, an AI system automatically grading exams or flagging at-risk pupils) now comes with stricter rules. Such automated decisions cannot use sensitive personal data (like health, ethnicity, or religion) unless one of a few narrow conditions is met. Even when automated decisions are allowed, schools and EdTech providers must implement safeguards if those decisions have a legal or similarly significant effect on a person’s life. These safeguards include informing individuals that AI was used to make a decision about them, allowing them to challenge or express concerns about that decision, providing an option to have a human review the outcome, and ensuring there’s a way to appeal or contest it. In practice, a school deploying AI (say, an AI that analyses pupil performance to recommend interventions) should be prepared to override or adjust any AI-driven decision if a pupil, parent, or staff member raises an issue; the final say cannot rest with the algorithm alone.
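To make the shape of those safeguards concrete, here is a small, hypothetical sketch in Python of the record a school might keep for each significant AI-assisted decision. Every class, field, and example value is our own invention for illustration; the DUAA does not prescribe any particular implementation, and a real system would sit inside your MIS or case-management tools.

```python
# Hypothetical sketch of DUAA-style safeguards around a significant
# automated decision: notify the individual, accept a challenge, and
# require a human review before the outcome is final. All names invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AutomatedDecision:
    subject: str                      # e.g. a pupil identifier
    ai_outcome: str                   # what the AI recommended or decided
    notified: bool = False            # was the individual told AI was involved?
    challenge: Optional[str] = None   # concern raised by a pupil, parent, or staff member
    human_reviewer: Optional[str] = None
    final_outcome: Optional[str] = None

    def notify(self) -> None:
        # Safeguard 1: tell the individual an automated decision was made about them.
        self.notified = True

    def raise_challenge(self, concern: str) -> None:
        # Safeguard 2: let them challenge or express concerns about the decision.
        self.challenge = concern

    def human_review(self, reviewer: str, decision: str) -> None:
        # Safeguards 3 and 4: a named person reviews, and can override, the AI;
        # the final say never rests with the algorithm alone.
        self.human_reviewer = reviewer
        self.final_outcome = decision

# Example: an AI flags a pupil for intervention, a parent objects,
# and a member of staff makes the final call.
record = AutomatedDecision(subject="pupil-123", ai_outcome="recommend intervention")
record.notify()
record.raise_challenge("Parent disputes the attendance data the model used")
record.human_review(reviewer="Head of Year", decision="No intervention; data corrected")
```

However a school records it, a trail of notification, challenge, and human sign-off is the kind of evidence of “meaningful human involvement” that these rules point towards.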
For schools, a crucial implication of the DUAA is the need to have staff empowered as the “humans in the loop.” It’s not enough to have technical safeguards; your teachers, administrators, and safeguarding personnel need the knowledge and confidence to interpret and, if necessary, intervene in AI-driven processes. This means investing in training around AI, data privacy, and cybersecurity so that staff can effectively oversee automated systems and respond to any challenges or objections. The DUAA explicitly ties into this by highlighting the importance of human involvement, a reminder that compliance is not just a technical checkbox, but also an educational effort for those operating the systems.
Alongside new responsibilities, the DUAA does offer some relief in administrative burden for schools. For example, it clarifies that when responding to Subject Access Requests (SARs), where individuals request all the data held on them, schools are only expected to conduct “reasonable and proportionate” searches. If information is stored in inaccessible archives or would require disproportionate effort to retrieve, schools may be justified in narrowing or even refusing the request, a practical recognition of the resource constraints many schools face. If you’re dealing with a SAR and want to reduce the time and effort usually required, our consultancy team can help.
The DUAA also expands the scenarios in which schools can rely on legitimate interests as the legal basis for processing personal data without consent. It specifically names safeguarding children as a legitimate interest that does not require the usual balancing test between the school’s aims and an individual’s privacy risk. In other words, if you need to process pupil data to protect a child’s welfare or prevent harm, the law explicitly acknowledges that interest, giving schools more confidence to act swiftly to protect pupils’ safety. (Other helpful examples include allowing data sharing with law enforcement or using data in emergencies without unnecessary delay.)
Bottom line: between KCSIE 2025 and the DUAA (which updates the UK GDPR), schools are expected to raise their game on both safeguarding and data privacy in the context of AI. You must ensure the EdTech tools in use not only deliver educational value but also have the necessary protections for children’s data and wellbeing. This means vetting vendors for compliance (do they adhere to the Age Appropriate Design Code and the DfE’s AI safety standards?), updating policies (e.g. incorporating AI usage into your safeguarding and acceptable use policies), and, most importantly, equipping your staff and pupils with the understanding to navigate AI safely. It’s a lot to manage, especially when you likely use dozens of apps and platforms across teaching, learning, and administration. This is where the right technology platform can make a critical difference. Put simply, your school needs the 9ine platform to navigate today’s complex world of compliance. Built with schools, for schools. Read on, then book a demo or trial!
Given the breadth of new requirements, schools may wonder how to practically track compliance and implement best practices for online safety, AI oversight, and data protection. A comprehensive solution is needed, one that brings together vendor due diligence, staff training, and clear guidance on using technology in the classroom. 9ine’s platform is designed to do exactly this, helping schools respond with confidence to KCSIE 2025, the Data (Use and Access) Act, the DfE’s AI guidance and Generative AI: Product Safety Expectations, and the wider requirements of the UK GDPR. Key components include 9ine’s Vendor Management, Application Library, and Academy LMS.
These three components work in harmony as a unified platform. Together, they cover the critical aspects of governance, operational guidance, and capacity-building that schools need to navigate AI and data protection requirements. As noted in a recent independent review, schools using 9ine’s Vendor Management, Application Library, and Academy LMS are benefitting from an ecosystem built to fill exactly these emerging gaps – from understanding the AI functions embedded in EdTech, to ensuring vendors comply with laws, to delivering AI literacy training for staff.