What did we learn from this week’s AI Safety Summit?
Joe Watts-Morgan, Client Executive
Rishi Sunak’s AI Summit has drawn to a close with a one-to-one discussion with Elon Musk, owner of X (formerly known as Twitter) and known sceptic of AI, providing some noteworthy headlines. Topics covered during the conversation included the potential for AI to wipe out jobs, London becoming a hub for AI and the need for a “referee” to safeguard the public from AI. The two-day summit delivered some notable successes for the Prime Minister, though there were also areas where it was found lacking.
The summit was in a difficult position even before it began, with the primary concern being who would attend from the guest list. Notable absentees included French President Emmanuel Macron and German Chancellor Olaf Scholz, but Sunak was able to snag some big fish, including US Vice President Kamala Harris, the President of the European Commission Ursula von der Leyen and the aforementioned Musk. Sunak may have been less enthused to see former Leader of the Liberal Democrats and Deputy Prime Minister Nick Clegg at the summit in his capacity as President of Global Affairs for Meta.
Opinions on the future of AI varied dramatically, with King Charles warning that tackling the risks of AI would require international efforts, but that the technology held the same potential as the “discovery of electricity.” Musk struck a more pessimistic tone, boldly declaring that AI is “one of the biggest threats to humanity” and that it was “not clear” we could control AI, while conceding that we should attempt “to guide it in a direction that’s beneficial to humanity.”
The first day proved fruitful in terms of international agreements, with the Bletchley Declaration signed by 28 countries, including the US, China, India and the European Union. As part of this agreement the signatories committed to work together on AI safety research. They also agreed to note the “relevance of cooperation, where appropriate, on approaches such as common principles and codes of conduct” and to “intensify and sustain” cooperation on frontier AI.
The second day also proved noteworthy with tech companies, including Meta, Google DeepMind and OpenAI agreeing to allow regulators to test their latest AI products before releasing them to the public. Also announced was international support for an expert body on AI, partly inspired by the already established Intergovernmental Panel on Climate Change.
Attendees were effusive in their praise of the summit, and of the Prime Minister, with French Finance Minister Bruno Le Maire calling it a “key milestone in the definition of fair and effective regulation of artificial intelligence”. Securing future summits in South Korea and France also ensures that the agreements signed at Bletchley will be a positive first step in regulating the risks posed by AI. Additionally, getting buy-in from the US, China and the EU on a joint communique concerning AI was a coup for Sunak.
Nevertheless, the Bletchley Declaration has its flaws. One major issue is that it is a voluntary agreement; it also fails to provide a framework for how this cooperation would manifest, offering little in the way of technical detail. Sunak himself touched on this in his post-summit press conference, saying that “ultimately, binding requirements will likely be necessary.”
This wasn’t the only problem, with the US stealing some of the Prime Minister’s thunder with their own announcements on AI. On the 30th of October, President Biden signed an executive order requiring companies to share safety test results for AI models impacting national security, allowing the government to establish red-team testing guidelines, releasing guidance on watermarking AI-made content to combat deepfakes, and developing standards for biological synthesis screening to prevent AI's role in bioweapon creation. The timing of the President's announcement certainly did not go unnoticed in Westminster.
Vice President Kamala Harris announced the creation of a US AI Safety Institute, as well as revealing that thirty countries had signed a US-backed pledge to support the creation of “responsible norms” for the use of AI by national militaries, going a touch further than the agreements signed at the summit. Both of these acts will set the future agenda for worldwide AI regulation, ensuring that no matter how much the Prime Minister attempts to guide the way on regulation, the US still leads the way when it comes to AI (much to Sunak’s chagrin).
The summit has proven to be a positive first step when it comes to the regulation of AI, with the agreements signed showing that there is an appetite to cooperate internationally on this subject. The fact that Sunak was able to forge an agreement between the US and China, despite tensions, on the Bletchley Declaration was a big diplomatic success, but the upcoming conferences in South Korea and France will need to reach further, with binding resolutions, to ensure this does not go to waste. At a bare minimum, the Prime Minister can revel in the fact that he has managed to get the international ball rolling on AI, and he may be remembered as a pioneer on this topic.