Hey {{ first_name | human }},
TL;DR: The 60-second briefing
⚡️EHCPs & AI: A council is seeking an AI-assisted solution to tackle its backlog of Education, Health and Care Plans (EHCPs)
🧪Gallup Report: A new report finds that young people worry AI may make learning harder, while their usage appears to have stabilised compared with last year.
🚨GPT-5.5: OpenAI has released GPT-5.5, describing it as its “smartest and most intuitive” model so far.
📚 AI+education news
⚡️ SynthID > What it is: Google for Education has shared guidance to help educators distinguish between original recordings and AI-generated audio.
Why it matters: Most school AI discussions focus on text: essays, homework, lesson plans, emails. But audio is becoming part of the problem too. For schools, this matters in at least three ways:
Media literacy: pupils need to understand that audio evidence is no longer automatically reliable.
Safeguarding and behaviour: schools may need procedures for handling alleged voice recordings, especially where accusations or reputational harm are involved.
Assessment and oracy: if schools use recorded speech, presentations, or audio submissions, they may need clearer expectations about authenticity.
⚡️EHCPs & AI: What it is: Cambridgeshire County Council is inviting suppliers to create an AI-assisted EHCP solution to help tackle long waits for education, health and care plans. The council says it has more than 8,000 active plans requiring maintenance and annual review, plus around 160 new referrals per month for assessment.
Why it matters: This problem is ripe for AI: high volume, repetitive documentation, stretched teams, and a process with real consequences for children and families. However, EHCPs are not ordinary admin documents. They affect legal entitlement, provision, funding, placement decisions, and a child’s support. The key question is not just whether AI can speed up the process, but whether it can do so without compromising accuracy, professional judgement, family involvement, or legal accountability.
🌍 Wider AI updates
🧪 Gallup Report: Young people are worried AI may make learning harder > What it is: The strongest education finding from this report is that 80% of Gen Z say it is very or somewhat likely that using AI tools will make it more difficult for them to learn in the future.
Why it matters: This is the one I’d foreground. It moves the debate away from “Can pupils use AI to complete work?” and towards “What does AI do to the learning process?” From a teaching perspective, the risk is that pupils may outsource the thinking before they have built the underlying knowledge. AI can be helpful when it explains, checks, quizzes, gives feedback, or provides worked examples. But it can be harmful when it removes the struggle that is necessary for encoding, retrieval, and schema-building.
🚨OpenAI Releases GPT-5.5 & Image 2.0 > What it is: OpenAI has released GPT-5.5, describing it as its “smartest and most intuitive” model so far. The main shift is not just better answers, but more capable work completion: planning, using tools, checking outputs, navigating ambiguity, creating documents/spreadsheets, analysing data, researching online, and moving across software to finish a task.
I used GPT-5.5 & Image 2.0 to see how well they would do at creating basic educational diagrams. You can have a look at some of them below.

A fishbone diagram.

A simple timeline.
🎯Prompt:
Create an image of [the water cycle] designed for Y4 students in England. The diagram is simple and minimalist and adheres to principles around effective diagrams that support the encoding of new information.

‘Till next week.
Mr A 🦾
Help a colleague save time by sharing this newsletter; distributing these ideas helps a friend get home on time and keeps our energy focused on what matters most: great teaching.
Safety & Privacy Notice
The tools and workflows mentioned are intended for professional productivity and educational enhancement. Users must ensure that any AI implementation remains compliant with their local data protection regulations and institutional safeguarding policies.
Data Privacy: Do not enter personally identifiable information (PII), sensitive student records, or confidential institutional data into public AI models.
Verification Required: AI-generated content can be inaccurate, biased, or out of date. Always maintain a "human-in-the-loop" approach by reviewing and fact-checking all outputs before use.
Professional Judgement: These suggestions do not substitute for formal legal, clinical, or safeguarding advice. Final responsibility for accuracy and appropriateness remains with the professional user.
