This week saw the publication of an excellent report entitled ‘The Malicious Use of Artificial Intelligence’ by a consortium of specialist AI researchers and academics.

The study sought to highlight an aspect of AI that less often catches the limelight but which might nevertheless pose a future threat to society: its exploitation by individuals or groups who harbor malicious intent. Such malicious potential, the authors claim, might be directed against digital, physical or political security.

For the security community, much of the flavor of the new study is not new; indeed, it has been the subject of considerable research, debate and speculation at the tactical, operational and strategic levels. Nevertheless, the study is right to try to raise society’s general awareness of a myriad of novel security threats and challenges, although both the authors and the public would have benefited from some perspective on how the associated risks should be prioritized.

The report also encouraged key stakeholders to embrace further, more focused research on a number of critical issues regarding the malicious use of AI. To the casual observer, the range of research fields seems daunting, although the study rightly observed that many of the potential threats are difficult to quantify. Certainly the list of potential AI-related threats is significant, but new research might allow us to filter out and focus on the more probable threats – a quantitatively, and possibly qualitatively, more manageable set – when seeking to identify and predict emerging risks and to evolve or develop mitigation measures.

For me, the study prompted some consideration of where best to position such cooperative future research.

In the first instance, I would imagine that much more work will have to be done on the ‘intent’ element of the risk assessment. Clearly both state and non-state actors have a vested interest in exploiting AI, but that fact in itself tells us little. Perspectives on resources, for example, are crucial to determining resolve and purpose in acquiring AI capabilities, possibly in the face of robust countermeasures. Knowing the ‘end-user’, so to speak, can also indicate a possible proliferation or acquisition pathway, or even a route to fabrication or development of an AI-supported ‘weapon’. I believe there is, for example, an appreciable difference in scale and sophistication between a state-sanctioned programme for acquiring or manufacturing a durable and repeatable AI-based capability – with all that this might entail in terms of resources and skilled manpower – and a group that simply wishes to exploit whatever it can acquire, possibly in a terrorist scenario, or even an individual technician who wishes to exploit personal knowledge maliciously for personal reasons.

Unquestionably, some of the answers to the above will influence both the nature of any malicious AI capability’s development and the shape of society’s response.

I was also struck by the study’s recommendation that much might be learnt from current methods of restricting or controlling dual-use technologies. The study is to be congratulated for noting this linkage in relation to the future mitigation of malicious AI, but it might have been prudent to add that success in this area has been elusive.

A careful perusal of the many case studies of global counter-proliferation – particularly in relation to nuclear weapons programmes of concern – suggests that the current regimes for export and sensitive trade control have at times been found wanting. An examination of the various United Nations Security Council reports, and especially those produced by the International Atomic Energy Agency (IAEA), regarding the procurement of illegal dual-use technology by Iran and the Democratic People’s Republic of Korea (DPRK) clearly indicates that codes of conduct and licensing are unlikely to be sufficient to change behavior or prevent the illegal acquisition of dual-use technologies. Furthermore, some perspective on the activities of international bodies such as the Australia Group, the Wassenaar Arrangement or the Missile Technology Control Regime could have stimulated a more technical assessment of the efficacy of technology controls.

This problem leads to a closely related consideration: the quandary of how to tackle the inevitable risk of intangible technology transfer. The potential for the transfer of AI research and development through knowledge brokers, individual transfer or simply targeted analysis of open-source technical data is fraught with export control, financial and intellectual property challenges. The academic community will be especially vulnerable to illegal information transfers through the theft or sale of sensitive research or patents, and even more so if this sensitive and commercially valuable information is linked to a government-sponsored research programme.

As I mentioned earlier, the issues I have remarked on are not unfamiliar to those who developed the study, but clearly a more nuanced assessment could have been made had representatives of a number of these specialist control communities been involved in the research. For example, much has been made recently of the emerging threat of AI-supported weaponry such as drones and other Lethal Autonomous Weapon Systems, with the spectre of the creation of atomic weaponry lurking in the background. Yet little coordinated effort – so far as I can see – has been devoted to learning from the policy debates that surrounded the development, testing and deployment of what was then a novel weapon system. Similarly, should we not also treat historical nuclear counter-proliferation in the same light as we look for mitigation strategies for today?

My final consideration regards the malicious use of AI in the ‘life sciences’ field. Here, I believe the nexus between the malicious and the existential is potentially too strong, and as such it requires a different mindset for assessing risk and exploring potential avenues for mitigation. The authors of the report are very familiar with existential risk, but possibly less so with how an AI-generated event linked to deliberate or pernicious research at the so-called ‘tactical level’, including an accidental event, might eventually spiral out of control and produce a cataclysmic effect for which the international community is ill-prepared.

For those of us associated with the Artificial Intelligence Forum Hungary, the study of existential risk has been closely aligned with the philosophy of AI. This new study, however, pushes us to consider risk from a slightly wider perspective and to weigh the implications of AI risk at an earlier stage of product or application development. In fact, the study compels us to consider risk at all stages of development if we are to better appreciate the dual-use aspect of AI applications. This is a challenge for all of us.


‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation’ by the Future of Humanity Institute, University of Oxford; the Centre for the Study of Existential Risk, University of Cambridge; the Center for a New American Security; the Electronic Frontier Foundation; and OpenAI, February 2018.
