Hey {{ first_name | human }},
Welcome back after the Easter/Spring break. Grab a drink and find out what’s new this week in the world of AI.
TL;DR: The 60-second briefing
⚡️AI not making UK more productive: A new report on organisational use of AI in the private sector finds that more people than ever, nearly one fifth, are using AI at work, but this is not translating into productivity gains.
🧪AI to judge teachers: A study of how LLMs judge instructional quality found that providing additional contextual information about a teacher, such as the subject they taught or their years of experience, substantially shifted how the models rated a transcript of the same lesson.
🚨Claude Design released: Anthropic has released a new model with improvements in coding, plus a new feature called Design that allows users to create expert-looking mock-ups and make specific edits.
📚 AI+education news
⚡️ Can an LLM judge teacher quality? > What it is: The study asks a simple question: if you give an LLM the same classroom transcript, but add irrelevant background information about the teacher, does the score change? The authors tested seven models and seven types of spurious social context, such as experience, credentials, demographic cues, and sycophantic framing. They found that adding these irrelevant details does change how effective an LLM judges a lesson to be.
Why this matters: These irrelevant details should not come into play when judging teacher quality. If, in a hypothetical situation, the same lesson were taught in exactly the same way by two different people, all else being equal apart from their years of experience, you would expect the judgements to be the same.
Why this happens: Taking the example of experience, LLMs seem to be affected by a kind of seniority bias, where there is an assumption that more experience equates to more expertise. Remember, LLMs are trained on human-created data, so all the biases that impact our decision making will be present in LLMs too.
🌍 Wider AI updates
⚡️ AI not making UK more productive: > What it is: Accenture’s new UK report argues that the main AI bottleneck is not model capability alone, but whether organisations can redesign work around it. The report draws on surveys of over 450 UK executives and 1,800 employees. Its central claim is that agentic AI matters because systems are moving beyond answering prompts towards interpreting goals, planning multi-step actions, interacting with systems, and executing tasks within set boundaries.
Why it matters: The report is not about schools specifically, but one of its strongest ideas transfers well: AI does not create much value when it is bolted onto old workflows. It creates value when the workflow itself is redesigned to take account of what AI can do.
🚨Claude Design > What it is: Anthropic has launched Claude Design, a new Labs product that lets users create visual work by chatting with Claude. Anthropic says it can generate designs, prototypes, slides, one-pagers, landing pages, social assets, and interactive mockups, then refine them through conversation, inline comments, direct edits, and adjustable controls. It is powered by Claude Opus 4.7 and is available in research preview for paid plans only. I have to admit, it makes for a very impressive demo.
🎯Prompt/Tip: Source-Based Prompting
When you ask an AI to write from your notes, it can invent details that were never there. A source-bound prompt reduces that risk by telling the model:
where it is allowed to draw from
what it must not invent
how cautious it should be when evidence is thin
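If you build prompts programmatically (for example, in a small script that drafts letters from notes), the three constraints above can be sketched as a reusable template. This is a minimal illustration; the function name and wording are my own, not a standard API:

```python
def build_source_bound_prompt(notes: str, task: str) -> str:
    """Wrap raw notes in a source-bound prompt for an LLM."""
    return (
        f"{task}\n\n"
        # 1. Where the model is allowed to draw from:
        "Use only the information in the notes below.\n"
        # 2. What it must not invent:
        "Do not add examples, dates, statistics, or assurances "
        "unless they are explicitly stated in the notes.\n"
        # 3. How cautious to be when evidence is thin:
        "If anything is unclear or missing, leave a bracketed note "
        "such as [CHECK: missing detail] rather than guessing.\n\n"
        f"--- NOTES ---\n{notes}\n--- END NOTES ---"
    )

prompt = build_source_bound_prompt(
    notes="Trip to the science museum on Friday. Packed lunch needed.",
    task="Draft a short parent letter from these notes.",
)
print(prompt)
```

The same string works in any chat interface; the point is that the constraints travel with every request instead of relying on you remembering to type them.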
So instead of asking:
“Turn these notes into a parent letter.”
You ask something more like:
“Draft a parent letter using only the information in these notes. Do not add examples, dates, statistics, or assurances unless they are explicitly stated. If anything is unclear or missing, leave a bracketed note rather than guessing.”
'Till next week.
Mr A 🦾
Help a colleague save time by sharing this newsletter; distributing these ideas helps a friend get home on time and keeps our energy focused on what matters most: great teaching.
Safety & Privacy Notice
The tools and workflows mentioned are intended for professional productivity and educational enhancement. Users must ensure that any AI implementation remains compliant with their local data protection regulations and institutional safeguarding policies.
Data Privacy: Do not enter personally identifiable information (PII), sensitive student records, or confidential institutional data into public AI models.
Verification Required: AI-generated content can be inaccurate, biased, or out of date. Always maintain a "human-in-the-loop" approach by reviewing and fact-checking all outputs before use.
Professional Judgement: These suggestions do not substitute for formal legal, clinical, or safeguarding advice. Final responsibility for accuracy and appropriateness remains with the professional user.
