Hey {{ first_name | human }},
It is the Easter hols for many of us (but not everyone, I know). Here’s a bit of an extended briefing on what’s new in the AI Space.
TL;DR: The 60 Second briefing
⚡️Scheming AI: A new report suggests that between October 2025 and March 2026 698 scheming-related AI incidents occurred. A scheming incident involves the AI acting in a way that deliberately goes against a human-provided instruction
🧪AI Conference Rejects AI Papers: Using a technique called ‘Prompt Injection’, organisers of an AI conference were able to tell if reviewers of proposals used AI, something that went against their acceptable use policy. In all, 497 papers were rejected.
🚨AI Librarian Goes Rogue: A secondary school is Manchester, UK used an AI chatbot to curate its library offer by signposting inappropriate books for removal. The school librarian refused and was investigated for concerns around safeguarding.
📚 AI+education news
⚡️ AI in Education Pod > What it is: Craig Barton is back with some great guests and chat around AI in education. He has recently had Daisy Christodoulou, Barbra Oakley and Adam Boxer on to discuss their thoughts around AI. Definitely worth a listen over the Easter break.
🚨AI Librarian Goes Rogue: > What it is: It is claimed that a secondary school in Manchester used an AI Chatbot to audit the books in the school library. Armed with the output, the librarian was instructed to remove the books on the list, refused and resigned her position. The local council upheld the school’s actions which makes it likely that the librarian will not be able to work in schools due to the safeguarding nature of the event.
Why this matters: This goes to show the real world consequences of placing ‘faith’ in the outputs that AI provides (as Matt Goodwin is also finding out). When using AI for any tasks in the workplace, you must keep the human expert in the loop to review outputs and sense-check. This just goes to show the need for strong governance and control over the use of this technology when used in this way.
Do this next: Do not blindly assume that the output is superior to human expertise.
⚡️ IB reveals Chatbot Plans > What it is: I have long predicted that schools will have access to their own AI models that borrow their foundation from one of the major players, but that they will exist on an internal server and trained on the school and Trust level data (think curriculum documents and policies etc). The IB are creating something similar at the curriculum level to allow teachers to converse with its resource centre.
Why this matters: Instead of manually having to find information around the curriculum, you can ask it in natural language and it will surface that content for you. As many will know, hours of time can be spent trying to find the perfect resource that you need. By cutting down that time, it should provide teachers that more time to think deeply about their lessons.
🌍 Wider AI updates
🧪AI Conference Rejects AI Papers: Using a technique called ‘Prompt Injection’, organisers of an AI conference were able to tell if reviewers of proposals used AI, something that went against their acceptable use policy. In all, 497 papers were rejected. Prompt Injection refers to hiding some secret instruction for the AI to follow. The easiest way to do this is to hide it in white text so that when a person copies and pastes it into an LLM they do not see the hidden instruction, but the output still reflects what was in that hidden text.
Do this next: If you suspect pupils are using AI when they should not, I recommend trying this the next time you provide an assignment. It is a far more reliable way to detect the use of AI than a generic AI detector.
🎯Prompt/Tip - Brand Guide
To help get consistent outputs in terms of their look and feel, I have found creating ‘brand guidelines’ invaluable for this as it sets the rules for how a brand should look, sound, and feel. For my Substack, Nuts About Teaching, I created the following and made it into a PDF. I can then upload this into a ‘Project’ and rest easy knowing that visual support and voicing can remain consistent between entries.

You may want to build something like this based on your school or department to help save some time too. Why not give it a try over the Easter break?
‘Till next week.
Mr A 🦾
Help a colleague save time by sharing this newsletter; distributing these ideas helps a friend get home on time and keeps our energy focused on what matters most: great teaching.
Safety & Privacy Notice
The tools and workflows mentioned are intended for professional productivity and educational enhancement. Users must ensure that any AI implementation remains compliant with their local data protection regulations and institutional safeguarding policies.
Data Privacy: Do not enter personally identifiable information (PII), sensitive student records, or confidential institutional data into public AI models.
Verification Required: AI-generated content can be inaccurate, biased, or out of date. Always maintain a "human-in-the-loop" approach by reviewing and fact-checking all outputs before use.
Professional Judgement: These suggestions do not substitute for formal legal, clinical, or safeguarding advice. Final responsibility for accuracy and appropriateness remains with the professional user.
