
Last week I attended one of The Register’s lectures, ‘AI turning on us? Let’s talk existential risk’. The main speaker was Adrian Currie from The Centre for the Study of Existential Risk (CSER) at the University of Cambridge, who spoke about existential risk, the challenges of science funding and the public perception of AI.

CSER works to analyse risks to species survival and threats that could destroy our social structures and society. This covers physically large threats like asteroids and small ones such as viruses. Adrian explained that while much of this research is dedicated to unlikely scenarios, the reasoning is simple: why not devote a small amount of research to asking what we would do if that 0.0001% event tried to kill us all?

The part most relevant to PR was how sensational narratives around technology can skew research funding and public interest, damaging our ability to examine future risk. The example given was that scientists researching AI from the perspective of safety and future risk have to contend with the Terminator image every time. Currie argued that this can affect public perception and even whether, and where, research funding is assigned.

There is a broad range of technologies that need to be researched with safety, both present and future, in mind. Yet the narratives in the media and the public eye often tend towards sensationalism, and risk disproportionately focusing people on interesting but unlikely threats like the Terminator rather than less flashy but more likely ones, such as population displacement driven by environmental change.

AI, for example, needs a lot of research into how best to design the ‘boring’ systems we build to do tasks more efficiently and easily. Otherwise we risk badly designed AI (and here you can see how easy it is to be sensationalist), as in a recent game where an industrial AI tasked with making paperclips, with no limits placed on it, turns the entire universe into paperclips!

Another example is the recently developed algorithms that raise the question of ‘can’ versus ‘should’. Facebook, Google and the social media platforms have all built systems that serve the content a user wants, but not always what the user should see. For these companies, serving up content a user wants to see increases engagement, which pushes up the value of their advertising and generates revenue. It is exactly this process that needs in-depth evaluation.

Potentially, governments need to intervene to force the algorithm to be less efficient and instead provide ‘breaks’ from the loop of showing you only content you already approve of. However, you can see how the perception of this research could be shaped by Facebook, which could in turn whip the public into a frenzy about ‘interference with freedom of speech’ and ‘government censorship’. That could then result in a researcher looking into the topic having their funding cut.
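To make that loop a little more concrete, here is a minimal, entirely hypothetical sketch in Python. It is not how any real platform ranks content, and the items and scores are invented; it simply contrasts an engagement-first feed with the kind of ‘break’ such an intervention might impose.

```python
# A toy illustration only: "score" stands in for whatever engagement signal
# a platform might predict (clicks, watch time, shares). All items invented.
catalogue = [
    ("Post you will probably love", 0.9),
    ("Post from your usual circle", 0.8),
    ("Opposing viewpoint, well argued", 0.3),
    ("Long-form policy explainer", 0.2),
]

def engagement_first(items):
    """The loop described above: always serve what the user most wants to see."""
    return sorted(items, key=lambda item: item[1], reverse=True)

def with_breaks(items, every=2):
    """A deliberately 'less efficient' feed: after every couple of
    high-engagement items, surface one the pure ranking would bury."""
    ranked = engagement_first(items)
    favoured, buried = ranked[: len(ranked) // 2], ranked[len(ranked) // 2 :]
    feed = []
    for position, item in enumerate(favoured, start=1):
        feed.append(item)
        if position % every == 0 and buried:
            feed.append(buried.pop())  # the 'break' from the approval loop
    feed.extend(buried)
    return feed

print([title for title, _ in with_breaks(catalogue)])
```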

For PR professionals, we continue doing our jobs: asking questions about our clients, using the best language to describe their products and creating narratives for the media. Yet, as part of that narrative creation, we should also take the time to ask what the implications of our narratives could be. As the ones in charge of the storytelling, we should take responsibility for the narratives we create.

One of the ways we can do this is by creating narratives that foster discussion of all aspects of our clients’ business, which increases engagement. We should also ensure we are creating constructive dialogue rather than focusing solely on reactive PR, where all the messaging is tightly controlled.

Finbarr Begley

Finbarr is an Account Director at Liberty Communications.
