Hey {{ first_name | human }},

I hope you are having a strong start back at work. Here’s what’s new in AI and Education.

📚 AI+education news

  • Craig Barton Super 8s > Long suspected, but Craig’s most recent Substack post confirmed it: his Super 8 resources were created with extensive use of Generative AI. His prompt for ensuring consistency across all topics comes in at 56 pages (🤯). The Super 8s are a great real-world example of how GenAI can speed up the development of resources that help pupils get better.

  • Thinking Deeply About AI for Schools > Following on from last term’s episode (which you can watch here or listen to here), James and I will be back tomorrow, looking back at the predictions made for AI in 2025 and sharing our predictions for AI and schools in 2026.

🌍 Wider AI updates

  • Grok > Over the weekend, it felt like a real watershed moment: the lack of safeguards protecting women and children was finally laid bare. Grok has been capable of altering images while retaining specified details for some time, but many women and others were reporting that photos of themselves or of children were being altered in a sexualised way. Naturally, this carries massive safeguarding risks for both pupils and staff who work in schools, as many schools publish photographs of their community. Anyone could screenshot an image of a member of your school community, put it through Grok to create an inappropriate, deepfaked image, and then share this on social media. It should be said that other GenAI tools are also able to alter specific parts of images while retaining some detail. It is unclear whether the safeguards currently built into these tools are robust (and it is certainly not something you want to test yourself).

    Here is the current legal status of the creation and sharing of these images:

    • If an intimate deepfake of an adult is shared, or there is a threat to share it, that can be a criminal offence. The core offences sit in the Sexual Offences Act 2003.

    • Creating or requesting the creation of an “intimate” deepfake of an adult without consent is now also criminalised under the Data (Use and Access) Act 2025.

    • Any indecent imagery involving under-18s (including synthetic/AI “pseudo-photographs”) is a serious criminal matter. Treat it as a safeguarding concern immediately.

    Schools, if they have not done so already, should update the following policies immediately:

    • Child Protection / Safeguarding policy: include “digitally manipulated / AI-generated intimate imagery (deepfakes, nudification)”.

    • Behaviour policy: treat creation/sharing/threats as serious harm, not “banter”.

    • Acceptable Use / Mobile phone policy: explicitly prohibit generating or requesting sexualised imagery of others, including “nudify” tools.

    Furthermore, schools may want to reconsider their approach to including photographs of students and staff on school websites, given how easy generating deepfakes has become in this AI age.

Till next week.

Mr A 🦾
