Hey {{ first_name | human }},

This week, we have more evidence that performance is not learning. New data from the OECD shows why "outsourcing" tasks to AI might be hurting your students' long-term retention.

TL;DR: The 60-second briefing

🧪 The OECD Performance Trap: New data shows AI-assisted students score 48% higher on practice tasks but 17% lower on unassisted exams. In other words, they perform well with AI, but that performance does not translate into learning.

⚡️ The £23m DfE Tender: The UK government has officially launched a massive tender to co-create safe AI tutoring tools for 450,000 disadvantaged pupils by 2027.

🚨 The "Anti-Dependency" Law: California’s new rules for "Companion Chatbots" are now live, setting the first global legal guardrails against AI that simulates emotional relationships with minors.

📚 AI+education news

🧪 OECD Digital Education Outlook 2026 > What it is: A landmark report presenting the results of a randomised controlled trial of AI use to support learning. It finds that students use general AI as a "shortcut" rather than a "partner," producing polished work that hides a lack of real understanding.

  • Why this matters: It provides a data set that demonstrates what many of us suspected: AI use can boost performance, but those performance gains do not necessarily transfer into long-term learning gains. In this study of 1,000 pupils in the Turkish secondary education system, those using a general-purpose LLM scored 48% higher than the no-LLM group on practice tasks but 17% lower on exam performance.

  • Do this next: If you are evaluating any EdTech solutions involving AI tutors, ask the company whether they have data on long-term learning and retention, not just proxy metrics like practice time or task performance.

⚡️ DfE AI Tutoring Trial > What it is: A £23m project launched to provide safe, 1-to-1 AI tutoring for 450,000 disadvantaged secondary pupils.

  • Why this matters: This is a move toward "state-approved" AI, reducing the procurement "wild west" for schools. The tools are due by 2027 and are intended to complement what is done in class, not replace teachers.

  • Do this next: If you are an SLT lead, check the EdTech Testbeds criteria this morning. Over 1,000 schools will be recruited to trial this tool.

🌍 Wider AI updates

🚨 California’s Anti-Dependency Act > What it is: Landmark legislation banning AI features that encourage "simulated emotional relationships" for minors.

  • Why it matters: This sets a new safeguarding floor, discouraging products that position AI as a confidential emotional confidant for pupils.

  • Do this next: Audit your school's "unblocked" list for persona-based chatbots and ensure your AUP explicitly bans "emotional" AI use.

🧪 Anthropic’s AI Fluency for Educators > What it is: A free, CC-licensed course introducing the 4D Framework: Delegation, Description, Discernment, and Diligence.

  • Why it matters: It provides a universal language for AI literacy. Instead of "learning to prompt," staff learn when it is ethically right to "delegate" a task.

  • Do this next: Share the "4D" acronym in your next staff briefing. Ask: "For this task, did you use Discernment (evaluating output) or just Delegation (letting it do the work)?"

🎯 Prompt/Tip:

Suspect AI use in a homework task? Take Anthropic's short course to understand why this prompt structure can help you check whether a student truly understands their work, or is just trying to inflate their performance.

Role: You are a [YEAR GROUP] Academic Coach.
Task: A student has provided a draft response for [TOPIC]. You must NOT improve the draft. Instead, provide 3 "Discernment Challenges": ask questions that force the student to prove they understand the vocabulary and logic used in the draft.
Constraint: Questions must be under 15 words.
Safety: No student personal data. Use the "Diligence" standard: the student must vouch for the final facts.
Tone: Encouraging but rigorous.
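If you want to reuse this prompt across different classes and topics, a few lines of script can fill in the [YEAR GROUP] and [TOPIC] placeholders for you. This is just an illustrative sketch (the function and variable names are my own, not from any particular tool):

```python
# Sketch: fill the [YEAR GROUP] and [TOPIC] placeholders of the
# "Discernment Challenges" prompt. Names here are illustrative only.

PROMPT_TEMPLATE = (
    "Role: You are a {year_group} Academic Coach.\n"
    "Task: A student has provided a draft response for {topic}. "
    'You must NOT improve the draft. Instead, provide 3 "Discernment '
    'Challenges": ask questions that force the student to prove they '
    "understand the vocabulary and logic used in the draft.\n"
    "Constraint: Questions must be under 15 words.\n"
    'Safety: No student personal data. Use the "Diligence" standard: '
    "the student must vouch for the final facts.\n"
    "Tone: Encouraging but rigorous."
)

def build_prompt(year_group: str, topic: str) -> str:
    """Return the coaching prompt with both placeholders filled in."""
    return PROMPT_TEMPLATE.format(year_group=year_group, topic=topic)

# Example: a Year 9 science class reviewing photosynthesis drafts.
print(build_prompt("Year 9", "photosynthesis"))
```

You can then paste the resulting text into whichever chatbot your school has approved, keeping the Safety line intact.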

What has been your AI win or bust this week?

’Til next week.

Mr A 🦾

Help a colleague save time and stay up to date by sharing this newsletter; passing these ideas on helps a friend get home on time and keeps our energy focused on what matters most: great teaching.

Safety & Privacy Notice

The tools and workflows mentioned are intended for professional productivity and educational enhancement. Users must ensure that any AI implementation remains compliant with their local data protection regulations and institutional safeguarding policies.

  • Data Privacy: Do not enter personally identifiable information (PII), sensitive student records, or confidential institutional data into public AI models.

  • Verification Required: AI-generated content can be inaccurate, biased, or out of date. Always maintain a "human-in-the-loop" approach by reviewing and fact-checking all outputs before use.

  • Professional Judgement: These suggestions do not substitute for formal legal, clinical, or safeguarding advice. Final responsibility for accuracy and appropriateness remains with the professional user.
