5 August 2021
By Andrew Dolan

It's that time of year again, when the international community gets together to discuss the potential impact of lethal autonomous weapon systems (LAWS).

Predictably, a number of representatives of the designated national governmental expert groups have advocated some form of ban on such weapons. This position is routinely supported by the customary NGO community, which abhors the idea of machines being able to make life-or-death decisions absent human operational control.

Although one might argue that such weaponry already exists, a concerted effort is underway to use opprobrium to push for limitations on such weapon systems. By and large, those states most concerned by such trends in weapon development are those least capable of developing their own or of procuring them from others, even if they wished to, which seems unlikely.

As in previous years, such arguments, although not without ethical relevance, seem unable to surmount the obstacles raised by these states, which do not share their opponents' concerns or dystopian vision. President Putin once remarked that the leader in this field will become 'the ruler of the world', and countries such as the USA, UK and France seem to share his view. This comment was more than a nod to those states and technical experts who see in LAWS a high degree of operational efficiency: deploying such weapons reduces the number of soldiers required in combat, is more cost-effective, offers greater accuracy and should lead to a significant reduction in collateral damage. Such weaponry seems more supportive of 'proportionality' in terms of violence and, as such, should be welcomed, or so the argument goes.

Given these opposing values, is some form of compromise possible? Do both arguments not contain important ethical considerations when it comes to respecting human dignity and moderation in modern conflict?

The history of nuclear weapons policy shows that compromise and restraint in relation to novel or non-traditional weapon systems are possible. These compromises are not inconsequential: consider the reduction in the scale of nuclear weapon stockpiles, the elimination of specific weapon types, such as the neutron bomb, or the agreement to introduce robust verification systems.

Critics will, and often do, argue, however, that such weapons, although reduced in number, have retained their lethality through adaptation, modification and enhancement, much of it attributed to developments in artificial intelligence.

The common perception of AI sees it as serving good and benign purposes, but this perspective ignores the wider application of AI in the security sphere.

In essence, however, arguments for banning LAWS have probably missed the boat. Operational efficiency is a powerful driver in the military-industrial complex from which such lethal technologies emerge. Rarely can genies be put back in bottles without a struggle, and this case will be no exception. Critics of LAWS will point to technical miscalculation or rogue applications in weapon systems to question the efficacy and reliability of policies that seek to use such weapons. They will also rightly highlight that AI developments in another form of weaponry – most notably cyber – might complicate the smooth operational efficiency of LAWS deployments. Nevertheless, is this likely and timely enough to really force some form of LAWS ban?

Arguably, the critics of LAWS might even be better served by widening their concerns to look beyond traditional military systems.  Two key areas come to mind.

Firstly, there is the militarization of police forces and the use of LAWS to support policing and law-and-order operations. The increased use of drones, of surveillance systems – especially those incorporating recognition or tracking capabilities – and of machines adapted to support light-weapon engagements is worthy of review. Should we not be concerned about how AI algorithms are contributing to this form of challenge to human dignity?

Secondly, and perhaps more acutely, there is the need to encourage the suppression of another class of LAW: the biological agent. The recent global pandemic has underlined the risks and challenges of trying to protect communities against the effects of biological agents. Should a nexus between criminality and knowledge of the life sciences become acute, AI will undoubtedly play a key role in facilitating the development of new forms of exotic weapon. Fears of such a risk might encourage critics of LAWS to consider lobbying governments to strengthen verification arrangements in bodies such as the Biological Weapons Convention (BWC) and to look again at the role played by AI.

Critics of LAWS are far from misguided, but if they want to influence the development of these weapons, they might be too late. Restrictions or ceilings on weapon types may be possible, but equally, extending their interest to a wider class of LAW in non-traditional settings could open a deeper ethical debate to a wider audience.


*  Readers are encouraged to view the sessions of the 2021 Group of Governmental Experts on emerging technologies in the area of lethal autonomous weapon systems.

