Hey {{ first_name | human }},
This is an education-heavy edition, so I'll keep everything else light. You may have noticed I ran a poll last week. Here are the results…
Would you let some form of AI give you feedback on your teaching?
🟩🟩🟩🟩🟩🟩 Yes (25)
🟨🟨🟨⬜️⬜️⬜️ No (13)
38 Votes
TL;DR: The 60-Second Briefing
⚡️Interactive Charts: Both Claude and ChatGPT have had some educational updates to support the in-chat creation of dynamic visual explanations for science and mathematics. It’s a gradual rollout so the feature may not appear on your account yet.
🧪 Evidence Check: A new large-scale RCT (770 students over 5 months) demonstrates that using AI to dynamically adjust problem difficulty based on "meaningful" effort leads to a 0.15 SD increase in unassisted exam scores.
🚨 The 92% Problem: With student AI use hitting record highs, higher ed is pivoting to oral exams and handwritten notes. Are schools next?
📚 AI+education news
🧪 Effective Personalized AI Tutors via Reinforcement Learning > What it is: A breakthrough 5-month field study of 770 high school students in Taipei. It demonstrates that a "proactive" AI—which uses Reinforcement Learning (RL) to adjust problem difficulty based on a student’s "meaningful" effort—increased unassisted exam scores by 0.15 standard deviations (equivalent to 6–9 months of extra schooling).
Why the study is high quality
Gold-Standard Design (RCT): The study used a Large-Scale Randomised Controlled Trial (RCT), the "gold standard" for determining causality. Students were randomly assigned to either the AI-sequenced group or a fixed-sequence control group, minimising bias.
Ecological Validity: Unlike many AI studies conducted in labs or over a few hours, this was a five-month deployment in real high schools. It shows the system works under real-world constraints (distractions, varying motivation, and curriculum deadlines).
Meaningful Outcome Measure: The final assessment was an in-person, handwritten, paper-based exam completed without any digital devices or AI assistance. This proves students actually internalised the knowledge rather than just learning how to "game" the AI.
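For the curious, the core idea of "RL that sequences difficulty based on effort" can be sketched in a few lines. This is purely illustrative and is not the study's actual algorithm: it uses a simple epsilon-greedy bandit (a basic RL technique) where the reward is whether the student's attempt showed "meaningful" effort. All names here are hypothetical.

```python
import random

DIFFICULTIES = ["easy", "medium", "hard"]

class DifficultySequencer:
    """Toy epsilon-greedy bandit that picks the next problem difficulty."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        # Running estimate of reward (meaningful-effort rate) per difficulty.
        self.values = {d: 0.0 for d in DIFFICULTIES}
        self.counts = {d: 0 for d in DIFFICULTIES}

    def next_difficulty(self):
        # Explore occasionally; otherwise pick the best-estimated level.
        if random.random() < self.epsilon:
            return random.choice(DIFFICULTIES)
        return max(DIFFICULTIES, key=lambda d: self.values[d])

    def record_attempt(self, difficulty, meaningful_effort):
        # Incremental mean update: reward is 1 for meaningful effort, else 0.
        reward = 1.0 if meaningful_effort else 0.0
        self.counts[difficulty] += 1
        n = self.counts[difficulty]
        self.values[difficulty] += (reward - self.values[difficulty]) / n

seq = DifficultySequencer()
seq.record_attempt("medium", meaningful_effort=True)
print(seq.next_difficulty())
```

The real system is far richer, but the shape is the same: observe effort, update an estimate, and choose the next difficulty to keep students in a productive zone.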
⚡️ OpenAI Interactive STEM Visuals > What it is: Dynamic, manipulable modules for 70+ core concepts (e.g., Hooke’s Law, Pythagorean theorem). Students move sliders to see real-time changes in graphs and data.
Why it matters: This moves AI from "giving the answer" to "visualising the concept." It’s a powerful tool for building mental models rather than just completing worksheets.
Do this next: During your next starter or plenary, use a visual module on the whiteboard. Ask: "What happens to the area if we double the radius?" and let the slider show them.
Why this works: Variation Principle. By keeping the core concept constant while varying the variables visually, students build a flexible, deep understanding of the underlying math.
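If you want the answer to the starter question ready in advance: area scales with the square of the radius, so doubling the radius quadruples the area. A quick check (illustrative only):

```python
import math

# Circle area scales with the square of the radius, so doubling r
# multiplies the area by 4 -- exactly what the slider demo shows.
def circle_area(r):
    return math.pi * r ** 2

ratio = circle_area(2.0) / circle_area(1.0)
print(ratio)  # 4.0
```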
🚨 The Guardian: Assessment in Crisis > What it is: A deep-dive report showing 92% of students use AI, prompting universities to abandon the "Standard Essay" in favour of oral exams, in-class checks and in-lecture handwritten assignments.
Why it matters: It signals a shift back to "Controlled Assessment" style conditions. The "Product" (the essay) is no longer a reliable proxy for learning.
Do this next: Add a "Transparency Box" to homework. Students must write 2 sentences on how they used AI (or why they didn't).
Why this works: Commitment & Consistency. Requiring an explicit disclosure makes the process visible and increases student accountability for the work submitted.
🎯Prompt:
I find little value in the ‘tech bro’ content that now plagues my Twitter feed. However, every now and then, something useful comes along; a stopped clock is right twice a day, and all that. Here are some prompts I saw in a (probably fake) thread about an Oxford student who was ‘researching’ for some work. The idea here is to use AI to support your thinking, not to do the work for you.
What are the 3 weakest logical jumps in this reasoning? Where would a hostile examiner attack first?
What claims in my argument contradict or oversimplify what these authors actually found?
What would a philosopher of science say is missing from this argument? What assumptions am I making that I haven't defended?

Till next week.
Mr A 🦾
Help a colleague save time by sharing this newsletter; distributing these ideas helps a friend get home on time and keeps our energy focused on what matters most: great teaching.
Safety & Privacy Notice
The tools and workflows mentioned are intended for professional productivity and educational enhancement. Users must ensure that any AI implementation remains compliant with their local data protection regulations and institutional safeguarding policies.
Data Privacy: Do not enter personally identifiable information (PII), sensitive student records, or confidential institutional data into public AI models.
Verification Required: AI-generated content can be inaccurate, biased, or out of date. Always maintain a "human-in-the-loop" approach by reviewing and fact-checking all outputs before use.
Professional Judgement: These suggestions do not substitute for formal legal, clinical, or safeguarding advice. Final responsibility for accuracy and appropriateness remains with the professional user.
