Once again, a not insignificant collection of ‘beneficial AI’ and robotics experts has given voice to a campaign to ban Lethal Autonomous Weapon Systems (LAWS). Their lobbying at the United Nations (UN) has not to date resulted in any such ban, but it has served to focus public attention on one specific issue involving AI and, indeed, on the future of AI development in general.

For many interested parties, the LAWS debate encapsulates, or should encapsulate, the holistic nature of AI development: inescapably, the subject cannot be confined to a technological framework but must reflect a nexus of social, political, security, economic, legal and philosophical influences if solutions to AI-related problems are to be found.

The continuing LAWS debate, for example, is frequently clouded in obscurity: not because the interested parties are reluctant to speak their minds, but because much of the technical detail is concealed, arguably rightly so if your priority is commercial confidentiality or military secrecy. Yet some transparency is vital if an informed community is to discuss, debate and make sense of weapon systems that might, in future, operate free of human control. Indeed, one need only scan the popular press and online media, with their screaming headlines of apocalyptic doom that consistently conflate today’s narrow AI with artificial general intelligence (AGI), to see where an ill-informed debate might take us.

The horse has already bolted, so to speak, where LAWS are concerned. Several such weapon systems of varying sophistication have already been deployed, many of them allegedly exploiting AI benefits such as enhanced machine autonomy and neural learning. Naval exercises have been conducted at sea, for example, in which all the platforms were remotely operated and AI featured large. Where, and in what direction, should this debate proceed if it is to reflect concerns about man’s exploitation of AI in the military arena?

Perhaps one point of departure could be an analysis of how the science and technology community of the 1930s and 1940s responded to the challenge of nuclear weapons development. The drivers and motivations of those so-called ‘Wizards of Armageddon’, the gifted community that created the atomic bomb and its more powerful successor, the hydrogen bomb, might offer some insight into how today’s military-industrial complex addresses the moral and ethical by-products of its research and development. If nothing else, such a retrospective study might afford us a common language of risk assessment that the beneficial AI community and the arms manufacturers could borrow to foster debate.

In the same way, even a preliminary reflection on the detailed research and analysis of weapons of mass destruction, for example the academic output of the RAND Corporation in the 1950s and 1960s, might provide another framework, if translated into the LAWS security environment, for debating the long-term implications of AGI and autonomous weaponry. That many leading defence analysts who thought about thermonuclear war later harboured doubts about the implications of such weapon systems should encourage debate now, before LAWS development passes any point of technological no return. Indeed, such horizon-scanning studies can only help us better understand the range of possible futures, or at least better enable us to shape them.

There is also much to commend debate informed by an appreciation of how the international community has managed, or failed, to create realistic arms-control mechanisms. Admittedly, the results of international cooperation are uneven, and those members of the beneficial AI community who demand global bans on certain types of weaponry must factor in such disappointments. Whether the subject is nuclear, chemical or biological weapons, or that most primitive of LAWS, the landmine, international agreement is seldom enough to bring closure to the issue. The UN is forever sanctioning North Korea over its illegal weapons programme, to no avail. How much more difficult will it be to enforce compliance when the ‘proliferator’ is a non-state actor with no qualms about developing or deploying LAWS? The sad fact is that international counter-proliferation structures are rarely sufficient in and of themselves to prevent the spread of illegal strategic weapons. This must be a sobering thought for those proposing a ban on LAWS.

Similarly, judging from the debate to date, there appears to be a reluctance, or perhaps a sense of irrelevance, when one raises the question of ‘Just War’ and the deployment of LAWS. Key issues such as proportionality and the legality of the use of force, as expounded by Aquinas, are as relevant today as they were in medieval Europe, although far too many critics dismiss ‘Just War Theory’ as a box-ticking exercise. Yet who would deny that proportionate, legally authorized force from a LAWS to save lives during a UN humanitarian operation, for example, might be the ‘right solution’ under certain circumstances? Proponents of LAWS might argue that machines programmed with superintelligence could readily distinguish friend from foe and calculate risk in such a way as to deliver an optimal security solution, one that minimizes casualties among ‘friendly’ forces and innocent civilian bystanders. Of course, if such a programme were ever available, it would raise the question of why the machine would need human control at all. Nevertheless, the important point is to have the debate.
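
To make that claim concrete, consider what such a ‘proportionality calculation’ would actually have to look like in code. The sketch below is a deliberately toy illustration, not a description of any real or proposed system: every class name, weight and number in it is invented for the purpose of the example. Its point is that the moment friend-or-foe determination and risk are reduced to a utility function, the weights placed on civilian and friendly harm become explicit value judgements, which is precisely where lawyers, ethicists and philosophers belong in the debate.

```python
# Toy illustration only: a "proportionality check" of the kind the Just War
# tradition demands, reduced to code. All names, weights and thresholds here
# are hypothetical and chosen purely for the example.

from dataclasses import dataclass


@dataclass
class EngagementOption:
    """One candidate course of action for a hypothetical autonomous system."""
    name: str
    expected_hostiles_neutralised: float  # estimated hostile combatants stopped
    expected_civilian_harm: float         # estimated innocent casualties
    expected_friendly_harm: float         # estimated friendly-force casualties


def proportionality_score(option: EngagementOption,
                          civilian_weight: float = 10.0,
                          friendly_weight: float = 5.0) -> float:
    """Crude utility: military benefit minus heavily weighted harms.

    The weights encode a value judgement, not an engineering fact,
    which is exactly why such parameters belong in open debate.
    """
    return (option.expected_hostiles_neutralised
            - civilian_weight * option.expected_civilian_harm
            - friendly_weight * option.expected_friendly_harm)


def choose_action(options: list[EngagementOption]) -> EngagementOption:
    """Pick the highest-scoring option; 'hold fire' is always a candidate."""
    hold_fire = EngagementOption("hold fire", 0.0, 0.0, 0.0)
    return max(options + [hold_fire], key=proportionality_score)


if __name__ == "__main__":
    candidates = [
        EngagementOption("precision strike", 3.0, 0.2, 0.0),
        EngagementOption("area suppression", 5.0, 1.5, 0.1),
    ]
    print(choose_action(candidates).name)  # -> "precision strike"
```

Note that even this crude sketch forces a choice: ‘hold fire’ is always an option, and whether a score of zero should beat a marginally positive strike is an ethical parameter, not a technical one.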

Experience suggests that such a ‘breakout’ of intelligent machines belongs to the future, but that prospect is not unrealistic. Certainly, the contours of ‘post-heroic’ warfare have yet to be fully mapped, but I think we can realistically anticipate significant AI developments that offer not only lethality but, perhaps more importantly, greater situational awareness and speed of calculation and decision-making at economically sustainable cost, free of emotion and anger. Perhaps warfare is on the cusp of becoming more humane.

As alluded to earlier, this is not a debate that should be left to soldiers and politicians; there is simply too much at stake. It is inevitable that dual-use technologies, many of them incubated in Silicon Valley-style enterprises, will find their way into the security realm, including law enforcement. Informed debate about the implications of this force-multiplier technology, however, requires holistic contributions, from robotics specialists to lawyers and from economists to philosophers. It seems a long time since scientists and philosophers occupied the same room as equals, but managing the future of AI requires the realignment of numerous unfamiliar bedfellows, and perhaps even the surprise appearance of neo-metaphysics.

The debate concerns more than the implications of machines that may eventually outthink their human creators; it concerns the future of humanity and what it means to be a person. The current debate about LAWS is but the tip of the AI iceberg.
