
Yesterday the PDF reader on my phone offered to turn my National Express coach ticket into a podcast. Sometimes I wonder if AI has gone too far.
More seriously, AI is also driving up CO2 emissions – as of June 2025, Google's total emissions had reached between 13.3 million and 15.2 million metric tonnes of CO2, roughly a 50% increase since 2019, driven by higher energy usage in the data centres required to train and run AI models.
There are safeguarding risks too, especially for children and young people. The NSPCC has published a detailed report and guidance, which can be accessed here. It lists seven areas of risk:
“While the public discourse has often focused on the threat of AI-generated Child Sexual Abuse Material (CSAM), the research identified evidence of a total of seven safety risks associated with Gen AI. These are:
• sexual grooming
• sexual harassment
• bullying
• sexual extortion
• child sexual abuse/exploitation material (CSAM/CSEM)
• harmful content
• harmful ads and recommendations”
The Department for Education has recently produced some “product safety standards” guidance around Generative AI usage in schools. It is not yet clear whether any current AI models meet these standards, and according to the forum of school leaders I attended today at BETT, AI is not generally considered safe for classroom use.
Finally, it is also quite often wrong about some fairly basic things. Google reminds us at the bottom of each chat entry…
“Gemini can make mistakes, including about people, so double check it.”
This article from ZDNET explains why the situation is only going to get worse (a phenomenon known as GIGO – Garbage In, Garbage Out).


