As the United Nations discussions on military Lethal Autonomous Weapon Systems (LAWS) continue apace, we are in danger of ignoring the fact that artificial intelligence is quietly reshaping the face of law enforcement and policing.

That AI has been steadily proving its worth in the law enforcement arena is not news. The use of a police robot, traditionally used for bomb disposal, to kill a rogue gunman in Dallas in July 2016 was much commented on at the time. Some of the commentary focused on the ‘dual-use’ aspect of the robot platform: a passive robotic tool traditionally associated with surveillance, verification and the dismantling of one form of weapon was transformed into another type of weapon. Others questioned the legality of using such a device: was the use of the robot proportionate, or simply another example of technical ‘overkill’? I would suggest that had this occurred in a military conflict setting, many of these questions would not have arisen; but given that this was a high-profile public policing action, some disquiet was to be expected. Arguably a more disturbing consequence is that this development may hasten the deployment of similar or enhanced robots in a wider law enforcement context, although this concern might be countered by those who value the added security such machines would offer to police protection.

Similarly, the use of drone technology is another burgeoning area of AI support to law enforcement. For several years now, drones and UAVs of various types have been utilized by large police forces globally. By and large they are cost-effective in policing terms and unobtrusive, gradually displacing some variants of police helicopters, which are more expensive to procure, fuel and fly. Nor does AI stop at the UAV platform itself. The equipment packages – primarily surveillance and communications – carried by such vehicles are often supported by state-of-the-art ‘Big Data’ analytics, ranging from facial recognition packages to the signal-jamming functions associated with counter-terrorism. Indeed, the value of drones, especially small drones, has been recognized not only as a law enforcement force multiplier but, more worryingly, as a means by which criminal or terrorist groups can conduct their own operations or disrupt those of the police, as in the case last year in the United States when criminals used small commercial drones to thwart an FBI surveillance operation.

Arguably, however, the greatest impact has been in the field of AI-fuelled ‘Big Data’ analytics. Traditional law enforcement and policing activities are regularly and routinely supported by such programmes, most of which are based on commercial applications. Sophisticated algorithms can search for anomalies in activity or information, provide biometric and facial recognition, identify specific crime hotspots, and are pushing towards the ‘Holy Grail’ of policing: predictive intelligence, the so-called ‘Minority Report’ capability. Legal processes in the courts are also being affected by AI, as some courts now use software programmes to inform and influence sentencing decisions.
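To make the hotspot idea concrete, the sketch below clusters incident locations by density, one common way such spatial analytics are built. It is a minimal illustration using scikit-learn’s DBSCAN; the incident coordinates and the eps radius are invented assumptions for this example and do not describe any actual police product.

```python
# Minimal illustration of crime 'hotspot' detection via density-based
# clustering. The incident coordinates below are invented for the example;
# real deployments use far richer data and proprietary models.
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical incident locations (latitude, longitude).
incidents = np.array([
    [40.7128, -74.0060], [40.7130, -74.0062], [40.7127, -74.0058],  # dense cluster
    [40.7306, -73.9866], [40.7305, -73.9868],                       # second cluster
    [40.6500, -73.9500],                                            # isolated incident
])

# eps is the neighborhood radius in degrees (roughly 100 m here) and is an
# assumption chosen for this toy data; min_samples=2 means two nearby
# incidents already form a candidate hotspot.
clustering = DBSCAN(eps=0.001, min_samples=2).fit(incidents)

# Report each cluster; DBSCAN labels isolated incidents as noise (-1).
for label in sorted(set(clustering.labels_)):
    members = incidents[clustering.labels_ == label]
    if label == -1:
        print(f"Noise (no hotspot): {len(members)} incident(s)")
    else:
        print(f"Hotspot {label}: {len(members)} incidents, center ~ {members.mean(axis=0)}")
```

Density-based clustering suits this task because it needs no preset number of hotspots and naturally flags isolated incidents as noise; broadly speaking, predictive systems then layer temporal and contextual models on top of this kind of spatial analysis.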

Yet amidst all this sophisticated and seemingly all-encompassing activity, there is room for some reflection. The AI-based tools that enrich our ability to prevent or detect crime are the same products and methods that can be turned against society. Governments and commerce are struggling to cope with ever-increasing levels of cyber encroachment, ranging from theft to destruction. In fact, it is difficult to gauge the extent of the problem accurately, as the business world in particular is reluctant to acknowledge when it becomes a victim.

Similarly, criminal organizations and terrorist groups are more than capable of developing and deploying AI to support their endeavors, whether in the form of a drone or an algorithm. The ‘Internet of Things’ will provide an explosion of vulnerable targets as our societies become ever more wired to the net, many of which will be difficult to protect.

Perhaps the most problematic aspect of future policing is that to exploit AI adequately, we need ever greater volumes and forms of data. Indeed, it seems from this vantage point that everything about us must be known in order to build a new AI ‘Maginot Line’ in policing and law enforcement. For some, this dystopian future seems unproblematic: the old argument being that if you have done nothing wrong, you have nothing to fear. However, you do not need to be a libertarian to acknowledge that the demise of privacy under such a scenario, where everything about you as a person is sourced, analyzed and ultimately controlled by someone else, has many negative implications. It would not be unreasonable to ask: who actually owns your identity?

The difficulty for the law enforcement community is to find a balance between exploiting AI and preventing its manipulation and malicious exploitation as a ‘dual-use’ weapon, whilst protecting individuals’ rights and privacy. How to do this is not simply a problem for the technical or law enforcement communities; it is a problem for all of us. The glib answer is to embed ethics in the design of these new AI tools, but there are few examples of how this might work in practice. It would be prudent to start thinking about this now, before we are locked into an AI struggle that could easily become a zero-sum game.
