Hey {{ first_name | human }},
Hope you are having a restful break. I am afraid that there are no chocolate eggs here, just a few ideas worth unwrapping.
TL;DR: The 60-second briefing
⚡️Microsoft 365 & Claude: Every Claude plan now lets you connect Claude to Microsoft 365, but there are major concerns if you do.
🧪AI Shifts Political Attitudes: In controlled experiments, short conversations with large language models, including GPT-4o and GPT-4.5, measurably shifted participants’ self-reported attitudes on political issues.
🚨AI Policy: While the number of teachers who use AI is increasing, about half of them still do not have an AI policy in place.
📚 AI+education news
AI Shifts Political Attitudes: What it is: In three large studies involving 76,977 UK adults, participants had a short back-and-forth conversation with an AI system, then rated how strongly they agreed with political statements. Researchers found that these ratings sometimes shifted after the conversation, suggesting that the AI had changed participants’ self-reported views. The most persuasive systems were not simply the biggest or the most personalised but the ones set up to produce lots of relevant facts and arguments quickly. Worryingly, those same systems also tended to be less accurate.
Why it matters: The problem is not just that pupils can get answers quickly. It is that they may trust a fluent answer because it sounds authoritative. An LLM generates the most plausible next response. Sometimes that is useful. And sometimes it is wrong in ways that are highly convincing. Pupils may not yet be able to spot what is misleading, biased or overconfident. So the challenge is protecting pupils from persuasive, unreliable outputs they may be inclined to believe.
Do this next: Stop talking about LLMs as if they are smarter search engines. They are not. A search engine retrieves; an LLM generates. That distinction matters. One gives you links to inspect. The other gives you language that sounds finished. The safer habit is simple: use AI to draft, explain, or summarise, but verify claims somewhere trustworthy before accepting them as true.
🌍 Wider AI updates
⚡️Microsoft 365 & Claude: What it is: Claude can now connect directly to Microsoft 365, which means it can search across SharePoint, OneDrive, Outlook, Teams, and calendar data to answer questions using your existing work context. Anthropic says the connector is read-only, uses delegated permissions, and only accesses data a user could already see inside Microsoft 365.
Why it matters: This integration could make AI more useful for schools, MATs, and central teams. Instead of uploading policies individually, users can request the latest safeguarding procedure, summarise attendance emails, or combine notes from Teams and SharePoint. This reduces searching, copying and pasting, and increases the likelihood of the model working with the organisation’s actual knowledge.
The privacy trade-off: This requires caution. Claude only sees what you can already see, but the risk doesn’t disappear. Many organisations grant access to confidential emails, sensitive pupil information, safeguarding records, HR documents, complaint files, and more. A tool like this simplifies searching and summarising this information, making it powerful but also risky.
🧪Mind the Gap: What it is: A new NBER paper compares AI adoption in the U.S. and Europe and finds a clear gap: 43% of U.S. workers reported using AI for their job in early 2026, compared with 32% across the European countries studied. Adoption also varies widely within Europe.
Why it matters: The paper suggests that AI uptake is shaped not just by worker characteristics, but by organisational choices. Firms with better management practices, access to tools, and active encouragement of AI use see higher adoption. Industries with higher adoption have also seen faster productivity growth so far, though the authors are careful not to claim causation. They do not find clear evidence of job losses at industry level.
🎯Prompt/Tip
Claude is great at creating different types of documents from prompts. However, in my experience, you can quickly run out of your allotted tokens, even on a paid plan. Here are some simple tips to help you make the most of your usage allocation.
Batch your questions into one message. Instead of sending “summarise this”, then “pull out the main points”, then “suggest a title”, ask for all three in one go. That tends to reduce repeated context loading and often improves the output because the model can see the whole task upfront.
Edit the original prompt instead of sending a corrective follow-up. If the response misses the mark, it is tempting to reply with “No, that’s not what I meant”. But that adds another turn to the history and gives the model more to reread next time. Editing the original prompt is usually cleaner and cheaper.
Start a fresh chat before the thread gets bloated. Long conversations become increasingly inefficient because the model keeps rereading old context. A simple rule of thumb is to reset every 15–20 messages. Ask for a summary, paste it into a new chat, and carry on.
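To see why these habits save tokens, here is a rough back-of-envelope sketch (an illustration with made-up numbers, not a measurement of any real model): chat systems typically resend the whole conversation history with each new turn, so the cost of a long thread grows much faster than the text you actually add, while one batched message pays for each piece of text only once.

```python
# Rough illustration: input-token cost of a multi-turn chat vs. one batched
# prompt. Assumes each turn resends the full history (how most chat systems
# work) and uses made-up sizes; real counts depend on the model's tokenizer.

def multi_turn_cost(turn_sizes):
    """Total input tokens when every new turn resends all earlier turns."""
    total, history = 0, 0
    for size in turn_sizes:
        history += size   # this turn joins the conversation history
        total += history  # the whole history is sent as input again
    return total

def batched_cost(turn_sizes):
    """Total input tokens when the same text is sent as one message."""
    return sum(turn_sizes)

# Example: a 500-token document followed by three ~50-token follow-ups.
turns = [500, 50, 50, 50]
print(multi_turn_cost(turns))  # 500 + 550 + 600 + 650 = 2300
print(batched_cost(turns))     # 650
```

The gap widens as the thread grows, which is exactly why batching questions and resetting long chats keeps you inside your allocation.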
None of this is especially glamorous. But that is partly the point. The best token-saving habits are not clever hacks. They are small, repeatable behaviours.
Till next week.
Mr A 🦾
Help a colleague save time by sharing this newsletter; distributing these ideas helps a friend get home on time and keeps our energy focused on what matters most: great teaching.
Safety & Privacy Notice
The tools and workflows mentioned are intended for professional productivity and educational enhancement. Users must ensure that any AI implementation remains compliant with their local data protection regulations and institutional safeguarding policies.
Data Privacy: Do not enter personally identifiable information (PII), sensitive student records, or confidential institutional data into public AI models.
Verification Required: AI-generated content can be inaccurate, biased, or out of date. Always maintain a "human-in-the-loop" approach by reviewing and fact-checking all outputs before use.
Professional Judgement: These suggestions do not substitute for formal legal, clinical, or safeguarding advice. Final responsibility for accuracy and appropriateness remains with the professional user.
