THE INAUGURAL GLOBAL SUMMIT ON ARTIFICIAL INTELLIGENCE: BETWEEN THE LINES

12th June 2023 – by Andrew Dolan

The United Kingdom has offered to host the first-ever global summit on Artificial Intelligence (AI).  It recognises the great potential of AI to help solve some of mankind’s fundamental problems but also acknowledges the significant risks.

The official statement from the United Kingdom government announcing the initiative was measured, but reading between the lines, some key additional factors can be discerned.  The most important of these is the acceptance that the latest developments in AI are potentially very dangerous and, in extremis, depending on whom one listens to, could pose an existential threat.

The recent spectacle of Big Tech companies clamouring for regulatory oversight of AI development stands testimony to these fears, although there seems to be no voluntary move by developers to cease developmental research or to stop creating AI-enabled platforms.  Either this is a problem or it is not; the messages being transmitted to the public are mixed at best, and some real clarity and transparency would help.

The shape of the proposed summit seems clear at this stage.  It will involve the participation of states, Big Tech companies and researchers, as the most influential stakeholders in the current trajectory of AI development.  On balance, this is a sensible decision.  However, would it not be advantageous to also invite responsible and knowledgeable representatives of civil society?

As for the purpose of the summit, the UK statement offers some insight.  It intends to address the need for global AI safety standards and to evaluate and monitor the most significant risks.  Here, I think, some additional insight would have been helpful; reading between the lines can only take you so far.

There is a tacit recognition that AI is, or shortly will be, pregnant with concerns about safety.  Given the elasticity of the concept of safety, more explanation would be beneficial.  Is it the safety of those developing AI, the safety of those using AI-enabled technologies, or that of people at large, should states or individuals choose to use AI for malicious purposes?  Or is it that, in addition to all of the above, certain uses of AI might pose an existential risk?

I think it is fair to presume that the Summit will seek to clarify some of these points, but it will also need – as its second major purpose notes – to create a mechanism for evaluating and prioritising the most significant risks.

This again makes perfectly good sense to me, but some clarity is needed as to how best to address this task.  The answer might lie in suggestions that the issue be tackled in a similar fashion to the work of those combating global pandemics or curbing the spread of nuclear weapons.  The inference must be that the World Health Organisation (WHO) and the International Atomic Energy Agency (IAEA) are suitable templates.

Of course, this raises as many questions as it answers.  Under what authority would such a system or organisation operate?  What would be its mandate?  What would the monitoring and surveillance of risks look like, especially if a state has not signed up to the agreement underpinning the system, or if a non-state actor with malicious intent decides to contravene its core ethos and rules?

Stopping the spread of such AI technology or AI-enabled technologies could be far more difficult than one imagines.  Examples abound, especially in the realm of nuclear weapons programmes, which demonstrate that no matter how far-reaching the regulations or sanctions, proliferation is always possible.

Additionally, unlike pandemics or arms races, monitoring AI brings with it a strong taste of intellectual property rights and patents, which can often be cited as a reason to erect a ‘privacy’ barrier.  Furthermore, how does one establish surety regarding ‘value alignment’ or intent?  Monitoring and evaluation in cases such as these require a very different scale and type of activity.  Can such conflicting requirements realistically be situated under one regulatory roof?

The Summit announcement calls for a solid international framework of cooperation which, while seeking to facilitate safe and reliable AI development, is equally mindful of the negative consequences of some new forms of dual-use technology.  Yet when dealing with so-called ‘frontier systems’, this UK initiative might simply be the first step in a very different kind of approach to AI safety.  It may be, for example, that a regime for ‘safe and reliable’ AI development will in the future sit alongside a quite separate international effort to regulate a new ‘Cold War’ arms race.

Undoubtedly, the question of regulation and compliance in the field of AI has just assumed far greater importance than before.
