Hey {{ first_name | human }},
This week, we have something genuinely new: interaction models. Instead of the familiar ‘serve and return’ approach, these new models will not hesitate to interrupt.
TL;DR: The 60-second briefing
⚡ The AI Toupee Fallacy: We notice bad AI work and assume we can spot AI work. InnerDrive argue this is a problem for schools relying too heavily on detection, because good AI-supported writing may leave few obvious fingerprints.
🧪 Interaction Models: Thinking Machines argue that AI is moving beyond prompt-and-response chat. Future models may work through voice, video, cursor movement and live context, making AI feel less like a tool you ask and more like a collaborator alongside you.
🚨 OpenAI Deployment Company: OpenAI has launched a new deployment-focused company. This suggests the AI race is shifting from “who has the best model?” to “who can make AI work safely and usefully inside real organisations?”
📚 AI+education news
🧪 AI Toupee Fallacy > What it is: InnerDrive argue that we often notice bad AI work and then assume we can spot AI work in general. This is the “AI toupee fallacy”: the visible failures distort our judgement. For schools, the risk is overconfidence in detection, especially when pupils edit, improve or partially use AI rather than submit raw outputs.
Do this next: Take one homework or coursework task and ask: does this make pupil thinking visible, or does it mainly reward a polished final product?
⚡️ Maths Diagrams > What it is: Craig Barton argues that AI can now create much better maths diagrams than it could a few months ago. The important shift is not just cleaner drawings, but the ability to generate carefully varied diagrams for geometry and statistics tasks: angles, triangles, bar charts, pie charts and pictograms.
Do this next: Try this first with a narrow maths objective. Ask for 6–10 examples where only one feature varies at a time, then check every diagram and answer before using it with pupils.
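The “only one feature varies at a time” idea can be sketched in a few lines of code. The example below is a hypothetical illustration, not Craig Barton’s actual method: it generates a small set of triangle “diagrams” as angle triples where only the first angle changes, then checks every example before it could ever reach pupils.

```python
# Hypothetical sketch of "vary one feature at a time" for geometry tasks.
# Only angle a changes between examples; b is held constant and c is
# forced by the angle-sum rule, so each example differs in one feature.

def triangle_specs(start=30, stop=90, step=10):
    """Return (a, b, c) angle triples where only angle a varies."""
    specs = []
    for a in range(start, stop + 1, step):
        b = 60                  # held constant across all examples
        c = 180 - a - b         # forced by the angle-sum rule
        specs.append((a, b, c))
    return specs

specs = triangle_specs()
# Check every generated example before use, as the advice above suggests.
assert all(sum(s) == 180 for s in specs)   # angles sum to 180 degrees
assert all(min(s) > 0 for s in specs)      # every angle is positive
print(specs)
```

The same pattern applies to bar charts or pictograms: hold every value fixed except one, then verify each chart and answer by hand before using it with pupils.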
🌍 Wider AI updates
⚡️ Gemini Video Model > What it is: Early examples suggest Google is testing a new Gemini video model with chat-based generation and editing. If this becomes widely available, creating short videos, visual explanations and scenarios may become much easier. The education question is whether this improves explanation and modelling, not whether it looks impressive.
🚨 Interaction Models > What it is: Thinking Machines argue that AI is moving beyond simple prompt-and-response chat. Newer models may interact through voice, video, cursor movement and live context, making the process of using AI less like ‘serve and return’ and more like an ongoing collaboration with a partner that is happy to interrupt. While the video below is clearly a demo use case, it is not hard to see how such models could prove useful in stopping a poor train of thought or interrupting an incorrect assumption before the thought is complete.
🚨 OpenAI Deployment Company > What it is: OpenAI has launched a deployment-focused company to help organisations put AI into real workflows. This is a useful signal: the AI race is shifting from access to implementation. For schools and other organisations, the hard part is not just choosing a tool, but managing training, data, workload, safeguards and evaluation.
Do this next: Before adopting an AI tool, ask for an implementation plan, not just a feature list.
🎯 Prompt:
Till next week.
Mr A 🦾
Help a colleague save time by sharing this newsletter; distributing these ideas helps a friend get home on time and keeps our energy focused on what matters most: great teaching.
Safety & Privacy Notice
The tools and workflows mentioned are intended for professional productivity and educational enhancement. Users must ensure that any AI implementation remains compliant with their local data protection regulations and institutional safeguarding policies.
Data Privacy: Do not enter personally identifiable information (PII), sensitive student records, or confidential institutional data into public AI models.
Verification Required: AI-generated content can be inaccurate, biased, or out of date. Always maintain a "human-in-the-loop" approach by reviewing and fact-checking all outputs before use.
Professional Judgement: These suggestions do not substitute for formal legal, clinical, or safeguarding advice. Final responsibility for accuracy and appropriateness remains with the professional user.
