So after much fanfare, speculation and anticipation, the first ever global AI Safety Summit has now been and gone. Hosted by the UK government at the aptly historic venue of Bletchley Park, the summit was hailed by many in the lead-up as crucial in determining the future of AI.
But what exactly did we learn following two days of discussions on the global future of AI and the potential dangers to national security without coordinated and shared action?
Well, Technology Secretary Michelle Donelan opened the event by highlighting that “AI is already an extraordinary force for good in our society” but that “the risks posed by frontier AI are serious and substantive and it is critical that we work together, both across sectors and countries to recognise these risks.”
Day one also saw the signing of what has been hailed as a landmark declaration of AI’s “catastrophic danger” to humanity by the UK, US, EU and China. Named after the venue hosting the summit, the ‘Bletchley Declaration’ aims to establish a global, shared understanding of both the dangers and opportunities presented by AI. However, early murmurings among the tech startup community in particular have been critical, with voices arguing the declaration is grounded in Silicon Valley and the world of big tech.
Some of the highest-profile individuals in the tech industry, including Elon Musk and Mustafa Suleyman, also spoke on day one, while a recorded video message from King Charles was played to delegates. The King urged the public and private sectors, governments and wider society to come together to tackle the challenges posed by AI. Rumours also circulated early on day one that the US government was reportedly set to hijack the event with its own AI regulation plan.
Equally key to all of the discussions have been the views of the wider technology industry beyond the likes of X, Meta and DeepMind, with many of the concerns focusing on the need for regulatory clarity when it comes to data ownership in the world of AI.
For example, speaking to Verdict, Dave Colwell, VP of AI and machine learning at Tricentis, commented: “Companies need clarity around acceptable use, data ownership and accountability as a priority. Since it’s not possible to interrogate AI or blame it for any of its decisions, accountability laws will be a big part of future governance.”
Day two, meanwhile, was marked by the Prime Minister's involvement, with Rishi Sunak outlining his view on international priorities for addressing the risks of AI and capitalising on opportunities presented by the technology over the next five years. This included holding talks with UN Secretary-General António Guterres and European Commission President Ursula von der Leyen.
Also vocal on day two was Italian Prime Minister Giorgia Meloni, who revealed that the summit would act as a foundation for next year's G7 meeting set to be held in Italy, where she said that AI would be one of the top items for discussion.
Arguably the biggest announcement of the second day was the launch of what the UK government has dubbed the world’s first AI Safety Institute to “carefully test new types of frontier AI before and after they are released to address the potentially harmful capabilities of AI models.”
However, the ‘pièce de résistance’ following the conclusion of the summit itself was billed to be a Q&A between the Prime Minister Rishi Sunak and Elon Musk (who didn’t attend day two – although Number 10 claimed this wasn’t unexpected) live-streamed on X.
Among the discussions on all things AI, the PM and Musk appeared to bond over their enjoyment of the Terminator film series, before the former told Musk there would "probably" be a general election in 2024. Of course, the million-dollar question is: following that, will he be back?