Sunak’s warnings on frontier AI indicate the direction of future regulation

by Joshua Taggart, Junior Consultant
26/10/2023


This morning, in a rare keynote speech devoted to a single policy area, Prime Minister Rishi Sunak set out the Government’s philosophy on AI regulation in greater detail. He stated his goal of making the UK ‘the world leader in safe AI’, with particular emphasis on the word ‘safe’.

Next week’s AI Safety Summit at Bletchley Park has three core aims: to develop a shared understanding of definitional concepts in AI; to reach a normative consensus on the goals of AI regulation, putting public safety and international security first; and to create the structural and cultural incentives to invest in AI research for the foreseeable future. On this last point, much progress has been made – AI used to be a fringe topic for the nerdiest among us, and is now almost ubiquitous in discussions about politics, society and our everyday lives.

What were the key highlights of the Prime Minister’s speech?

The discussion paper published this morning, ‘Capabilities and risks from frontier AI’, elaborates on the advisory council’s ‘salient’ priorities for regulating AI to ensure public safety. These key areas include:

  • The development of nuclear, chemical or biological weapons, or the creation of cybersecurity risks (Sunak was clear that Government cannot and will not outsource national security to private AI developers)
  • Bias, fairness and representational harms, as AI models could reinforce existing societal prejudices
  • Misinformation (or the ‘degradation of the information environment’) and election influencing
  • Labour market disruption, as the fourth industrial revolution changes the way we work
  • Loss of control, perhaps the most well-known risk of AI: the idea that we develop a superintelligence which cannot be contained and could work against human interests

The risks of AI are becoming increasingly well-known: image, audio or video deepfakes generated for political purposes; ‘bot farms’ used to spread misinformation and influence elections at scale; and the replacement of human labour by AI, with implications ranging from students using it to write their GCSE essays to mass unemployment through automation.

What regulation and support is coming to the AI sector?

Sunak noted the Government is ‘not rushing to regulate’ and that this was a point of principle, mirroring the Government’s earlier statement in the AI White Paper that they would take a ‘light touch’ approach to regulation to incentivise innovation first. 

However, his statement this morning was important in that it crystallised the Government’s priorities: safety first, innovation after. He was also clear that AI companies could not ‘mark their own homework’ when their models have wider implications for society (especially national security). This indicates that the Government will be more inclined to intervene both in general LLMs with advanced computing capability and in narrower models where they could pose specific security threats or societal harms.

Tax, visas and education were also mentioned in the speech as key levers to ensure that the UK continues to lead on AI, providing supply-side incentives to support the industry’s expansion.

The Prime Minister also stressed the necessity of public-private partnership, with AI developers such as Google, DeepMind and OpenAI sharing details of their models with the UK Government, which are in turn shared with the international community to develop common understanding and priorities. The invitation of China to next week’s Summit, Sunak insisted, is the right approach to developing international safety standards, despite the potential security risks posed by China’s development of AI tools.

What does this morning’s announcement mean for the sector? 

Overall, the Prime Minister’s statement is less of a reversal of the AI White Paper – which also noted the ‘challenges’ of AI as well as its potential benefits – and more of an elucidation of the Government’s approach. Research and collaboration between Government and industry will come first, and what is likely to come after is a light-touch approach which prioritises safety and focuses on the biggest players, especially those developing foundation models rather than narrow applications of AI for different sectors.

Firstly, the UK is seeking to become a more attractive place to create and invest in AI businesses, establishing greater regulatory certainty than countries which devote less time and fewer resources to understanding AI.

Secondly, developers of general-application or foundation models (especially those with the resources to be classed as ‘frontier’) should expect the UK Government to get even more involved in their affairs to increase their understanding of AI risks, especially at the ‘tuning’ stage of model development.  

Finally, the Department of Science, Innovation and Technology and the expert council on AI will continue to prioritise research and collaboration with the private sector, providing vitally important opportunities for leading minds and businesses to steer the UK’s regulatory approach.

