The UK government is considering mandatory safety requirements for cutting-edge AI systems developed in the country. Startups working with AI technology are optimistic about the move, believing it will benefit their businesses.
Currently, UK AI companies can commit voluntarily to safety standards. However, the government acknowledges that in certain cases these voluntary commitments may not be sufficient, so it plans to introduce targeted mandatory requirements for businesses developing highly capable general-purpose AI systems. The aim is to hold AI companies in the UK accountable for the safety of their technologies.
The government’s decision follows a public consultation on AI regulation. It aims to avoid mirroring the approach of the EU, which recently adopted the AI Act, the world’s first comprehensive AI law, imposing fines for non-compliance across various AI use cases.
Instead, the UK intends to rely on existing regulators in sectors such as telecommunications, healthcare, and finance to oversee AI deployment and establish relevant regulations. A new steering committee will be launched in the spring to coordinate the efforts of these regulators in overseeing different AI applications.
‘Careful balance’
British AI startups have welcomed the government’s proposed framework, saying it appears to support innovation rather than stifle it.
“We’re glad to see the UK prioritizing regulator capacity, international cooperation, and reducing research barriers, as these are key concerns expressed by startups nationwide,” says Kir Nuthi, head of tech regulation at the Startup Coalition, an industry association.
Marc Warner, CEO and co-founder of Faculty, expressed relief at the government’s balanced approach, emphasizing the importance of promoting innovation while managing risks. He cautioned that overregulation, particularly of narrow AI applications such as tools that help doctors read mammograms, could hinder innovation.
Emad Mostaque, CEO of British AI unicorn Stability AI, one of the companies that could fall under the binding requirements, declined to comment in detail on potential regulations. However, he highlighted the government’s plan to improve regulators’ skills as significant for ensuring policies align with the goal of making the UK an AI-friendly environment.
Mostaque also noted that the government’s focus on improving access to AI would support grassroots innovation, foster competition, enhance transparency, and bolster safety measures.
Upskilling regulators
The government has allocated £10 million to train regulators to understand both the risks and the opportunities that AI technology presents.
Darren Jones, Labour’s shadow chief secretary to the Treasury, previously said that having regulators with no AI expertise oversee the technology would be like asking his grandmother to regulate it.
By April 30, two major British regulators, Ofcom and the Competition and Markets Authority, will be required to outline their strategies for addressing AI risks within their respective sectors.
Additionally, the Department for Science, Innovation and Technology has announced a £90 million investment to establish nine research hubs dedicated to AI safety across the UK. This initiative, part of a partnership with the US formed in November, aims to mitigate potential societal risks arising from AI. The hubs will primarily focus on exploring the application of AI technology in fields such as healthcare, mathematics, and chemistry.