Ofcom sets out AI strategy to support innovation and protect consumers
Ofcom has unveiled its latest strategy for artificial intelligence, outlining how it supports innovation while safeguarding the public across telecoms, broadcasting, online platforms, and postal services.

On 6 June 2025, the UK’s Office of Communications (Ofcom) published its approach to AI, setting out how it supports safe innovation across the sectors it regulates, from telecoms and broadcasting to postal services. With AI becoming central to operations in these industries, the regulator aims to strike a balance between enabling progress and managing the associated risks.
Natalie Black, Ofcom’s Group Director for Networks and Communications, stated that AI is already transforming key services. From automated content moderation on online platforms to AI-enhanced telecommunications networks and real-time captioning in broadcasting, the use of AI is growing rapidly. ‘Our approach is clear,’ Black said. ‘Support growth, safeguard the public, and lead by example in how we use AI ourselves.’
AI is helping telecom companies improve network security and management, enabling broadcasters to translate and dub content more efficiently, and offering online platforms better tools to moderate harmful material. Postal services could use AI to optimise delivery routes, cutting emissions and improving service reliability. Across all these areas, Ofcom maintains a technology-neutral stance, meaning companies are not required to seek prior approval before deploying AI. This allows for faster innovation while keeping regulatory flexibility.
To further support innovation, Ofcom has launched several initiatives. One is SONIC Labs, a collaborative test environment run with Digital Catapult that enables vendors to explore AI use in mobile networks. Ofcom also shares extensive datasets, such as UK spectrum usage data, to help academia and industry train AI systems. The regulator is also working with other bodies, including the Competition and Markets Authority (CMA), the Information Commissioner’s Office (ICO) and the Financial Conduct Authority (FCA), through the Digital Regulation Cooperation Forum to ensure a coherent cross-sector approach to AI developments.
However, Ofcom recognises the risks AI can pose, particularly for online safety. Harmful applications such as deepfakes are on the rise. According to Ofcom’s 2024 research, two in five UK internet users aged 16+ have encountered a deepfake, with a significant portion reporting exposure to sexual deepfakes, some involving minors or people they personally know.
In response, Ofcom is enforcing the Online Safety Act. New rules on ‘safety by design’ require platforms to remove illegal AI-generated content and to assess the risks of changes to their services. These measures aim to make digital spaces safer, especially for children, without stifling technological advancement.
Internally, Ofcom is also adopting AI to streamline its operations. With over 100 tech experts, including 60 focused on AI, the organisation is running more than a dozen internal trials. These include AI translation tools for handling broadcast complaints, text summarisation systems for public consultation feedback, and AI-driven tools for spectrum planning in dense urban areas. The regulator has committed to expanding these uses over the coming year, taking a cautious, safety-first approach.
Through its dual role as a regulator and AI practitioner, Ofcom aims to ensure that innovation in AI proceeds responsibly, maximising benefits for the public while managing potential harms.