9ine Insights | Latest news from 9ine

Trojan AI in Schools: The Hidden Threat Inside Your EdTech Apps

Written by Mark Orchison | Jul 1, 2025 8:05:58 AM

School leaders today face an emerging “Trojan AI” problem in educational technology. In this context, Trojan AI refers to AI-powered features quietly integrated into existing EdTech products already used in schools, often without the school’s awareness or any option to disable them. Much like the legendary Trojan horse, these AI functions slip inside the classroom disguised within trusted apps, bringing unintended risks alongside their promised benefits. This phenomenon is poised to exacerbate the challenges schools already face in managing technology, privacy, and safety in education.

What is “Trojan AI” in EdTech?

Many EdTech vendors are adding new AI capabilities to software that schools have used for years. These aren’t brand new products but familiar platforms – learning management systems, classroom apps, assessment tools – suddenly augmented with artificial intelligence. What makes these AI rollouts “Trojan” is that schools might not be explicitly notified or consulted before such features appear, nor given a kill-switch to turn them off. An update over summer break might quietly activate a new AI chatbot in a tool that teachers have used daily. From the outside, the app looks the same, but inside, a powerful AI engine is now processing children’s data and generating content.

The AI Wild West

Few vendors are taking a “compliance-first” approach when rolling out AI features. Many are fast-moving startups or growth-driven companies racing to outpace competitors by launching the latest AI-powered tools, often without pausing to conduct thorough ethical, safety, or compliance audits. It’s a digital gold rush, where innovation runs ahead of regulation and schools are left to fend for themselves. A handful of more responsible EdTech firms (for example Flint) are beginning to seek external validation of their AI practices, such as through independent certification programmes like the 9ine Certified Vendor Programme, which assesses vendors against rigorous privacy, cybersecurity, and AI governance standards. Vendors like Flint stand out as rare examples of transparency and accountability in a largely unregulated landscape. However, such cases are the exception, not the rule. For most schools, this means the burden of due diligence falls on them, often after the AI features are already embedded in the tools they use. Schools are navigating a chaotic frontier where the risks of unvetted AI are very real and the safeguards are still being built.

Why Trojan AI Exacerbates Existing Challenges

Schools were already struggling with EdTech oversight before AI came along. The average school now uses hundreds (in some cases thousands) of different EdTech tools, each of which must be vetted for data privacy, safeguarding, and cybersecurity before use. Trojan AI multiplies that burden: EdTech tools that previously used no AI must now be retrospectively reviewed. When an existing approved app silently gains an AI component, the risk profile of that software changes overnight. New AI-driven features introduce fresh privacy, safeguarding, and cybersecurity concerns for schools. Many of these risks are extensions of familiar EdTech issues, like protecting personal data or filtering content, but AI operates at a far larger scale and speed, which can amplify problems or create entirely new ones.

Without a tool like 9ine’s Vendor Management, it becomes almost impossible to identify where AI is being used across your EdTech ecosystem. As a result, conducting appropriate risk analyses and compliance checks is challenging, making it difficult to ensure AI is used in a safe, secure, and compliant way. It also means that staff and teachers may not receive the necessary training, guidance, and support to use AI tools and features, because you simply don’t know where the AI Trojans are.

Retrospective EdTech Vetting and Assessment

One of the most frustrating aspects of Trojan AI is that schools often discover AI features only after they’ve been activated. This forces a reactive approach, requiring retrospective vetting and assessment. In practical terms, this can mean:

Cross-referencing existing policies: For example, if a maths practice app suddenly introduces an AI tutor that provides students with worked solutions, the school’s academic integrity policy may need to be reviewed. Usage guidelines for teachers and students might also require updates to clarify when and how AI tools should, or should not, be used in classwork.

Conducting impact assessments across multiple domains: The introduction of a new AI feature may necessitate a fresh review of the app’s privacy policy, AI safety and governance documentation, cybersecurity practices, and potential child protection risks. Schools are then faced with the difficult task of determining whether these changes breach existing policies on privacy, cybersecurity, or safeguarding. Performing these impact assessments after the fact, and potentially renegotiating terms with the vendor, is significantly harder once the feature is already live. This problem is mitigated somewhat by using the 9ine Vendor Management platform.

Training implications: Staff must also be made aware of any new controls or risks associated with the AI feature, particularly when using the tool with children. 9ine’s Application Platform helps schools overcome these challenges by enabling faster identification, assessment, and communication of changes in EdTech platforms.

This retrospective approach places significant strain on the already limited capacity of school IT and data protection teams. Reviewing the implications of just one AI feature is time-consuming; multiply that by the number of EdTech platforms a school uses and the task becomes overwhelming. Many schools lack dedicated staff for privacy or cybersecurity, let alone additional resources to assess AI risks. As a result, potentially harmful or non-compliant AI features may go unnoticed simply due to a lack of time, expertise, or capacity to evaluate them in a timely manner.

Top Tips for Managing the Trojan Horse Creep of AI in EdTech

  1. Don’t Assume Familiar Apps Are Safe
    AI features are increasingly being added to apps your school already uses, without notice. Just because an app was approved last year doesn’t mean it’s still compliant today. Reassess regularly, especially after major updates or during school breaks. Schools using 9ine’s Vendor Management platform can submit EdTech tools that need assessment and vetting via our service desk.
  2. Monitor for AI Feature Releases
    Actively track announcements, release notes, and version changes from your EdTech vendors. Subscribe to update notifications and set alerts for AI-related keywords. Tools like 9ine’s Vendor Management platform can streamline this process and are independently updated when AI capabilities are detected.
  3. Demand Transparency from Vendors
    Ask EdTech suppliers directly: “Does your product include AI functionality?” Require clear answers and documentation. If they cannot provide this, treat it as a red flag. Encourage vendors to pursue independent certification, such as the 9ine Certified Vendor Programme, to demonstrate responsible AI practices.
  4. Update Your Risk Assessment Processes
    Add an “AI Watch” element to your EdTech vetting and re-vetting workflows. When AI is introduced into an app, revisit your privacy impact assessments, safeguarding reviews, and cybersecurity checks. These processes must adapt to evolving functionality, not just new products.
  5. Align AI Use with School Policies
    Review and revise your internal policies, such as academic integrity, acceptable use, and safeguarding, to reflect the realities of AI in classrooms. Use a tool such as 9ine’s Application Platform to ensure staff understand when and how AI can be used, and what boundaries must be respected.
  6. Train Staff Early and Often
    Teachers and support staff need practical guidance on how to work safely with AI-enabled tools. Build this into your digital safeguarding and staff training programmes. If staff aren’t aware AI features exist, they can’t supervise their use or mitigate risks. Check out Academy LMS - AI Pathway for over 20 certified online AI courses from 9ine - free to trial.
  7. Prioritise High-Risk Tools
    Triage your EdTech ecosystem by usage and risk. Apps heavily used in the classroom or involving younger children should be prioritised for AI review. Use 9ine’s Application Library and risk indicators to identify which platforms need urgent attention.
  8. Build a Register of AI Usage
    Maintain a central record of which EdTech tools use AI, what kind of AI is involved (e.g. generative, adaptive, predictive), and where that data is processed. 9ine’s Vendor Management syncs directly into 9ine’s Privacy platform, creating end-to-end privacy management from vendor through to records of processing.
  9. Embed AI Oversight into Governance
    Include AI-specific risks in your technology governance structure. Ensure that governors, senior leaders, and safeguarding leads understand the implications of AI features and have oversight of risk assessments, controls, and vendor engagement.
  10. Use External Tools to Regain Control
    Trying to manually track AI across hundreds of apps is unsustainable. Use platforms like 9ine’s Vendor Management and Application Library to automate discovery, streamline assessments, and receive expert-reviewed insights into your EdTech’s privacy, cyber, and AI risks.

In conclusion, “Trojan AI” doesn’t have to catch your school by surprise. With oversight, updated processes, and perhaps a bit of outside help, you can turn this hidden threat into a manageable aspect of your EdTech strategy. The key is to remain informed, proactive, and prepared. By doing so, schools can confidently harness the positive power of AI-enhanced learning tools – without the nasty surprises.