by Richard B. Hoskins
I have to confess that I feel somewhat unqualified to speak here today. Not so much due to lack of familiarity with the topic, but rather because I must admit to some degree of hypocrisy or bias. You see, there have been occasions in my career when the future, in the form of new technologies, announced itself, and I refused to answer the call, hoping it would just leave me alone. At times, I sounded like my grandfather insisting the old way of doing things was better. As a preteen, I advocated for a phone in our extremely rural farmhouse; but because phones involved party (or shared service) lines back in the late 50s and early 60s, my grandfather did not trust them, insisting the revenuers (his word for all government enforcement) were listening in. I claimed that was nonsense because, well, I really wanted a phone. Of course, as I prepared to take part in a court-authorized wire intercept in my role as one of those “revenuers” 25 years later, I had a moment when I thought, “What would granddad say?”
It has been my experience that there is little chance of halting the advance of new technologies. They will inevitably be incorporated into our society, and our institutions have to learn to accommodate them. We in law enforcement have had to adapt to many changes over the past few decades, such as the use of digital imaging instead of film and the adoption of electronic signatures, both of which were met with concerns over their vulnerability to manipulation but were eventually accepted as standard. The fact that our concerns over electronic signatures and digital images were addressed, however, gave me a degree of satisfaction and inspired me to consider that there are possible pathways for the proper integration of new technologies.
I feel it is important to acknowledge the opinions of those who have decades of experience when considering the impact of integrating new technologies. The concerns posed by those who are being dragged into the future can frame our policies to ensure proper integration with the inclusion of safeguards. Their insistence on respecting the old ways can actually help identify possible areas in need of improvement with new technologies. Their experience can be of great value and should be treated as an asset. Otherwise, how would we make money as consultants upon retirement?
So once again, we are confronted with the rapid approach of a new science—thinking machines. Well, maybe “rapid” is not the proper term. The concept of thinking machines has inspired horror and sci-fi stories across multiple media for over half a century, including Colossus from The Forbin Project, HAL 9000 from 2001: A Space Odyssey, Skynet from the Terminator movies, and Ultron from Marvel’s Avengers movies. And just in case there are fellow Trekkies out there, who can forget Captain Kirk’s conflict with the M-5 Multitronic unit, an AI intended to replace entire starship crews that failed to distinguish between war gaming and an actual attack.
Even though the idea is not new, and the progress has at times seemed somewhat stalled, we are now making steady and remarkable progress toward the use of AI and autonomous robots in many areas including law enforcement. It would be difficult to halt this progress, given the potential benefits and the ongoing need to improve our ability to deal with crime.
It is not just the obvious detrimental impacts of crime that drive this need to improve prevention and response. We tend to think of crime in terms of robbery, rape, murder, drugs, etc., but there are myriad insidious ways this infection compromises our society’s well-being. The University of Pennsylvania recently reviewed the current research on the economic impact of crime, and most analyses put the cost at approximately 2 percent of gross domestic product in the United States, where we spend an estimated $100 billion a year on law enforcement and prisons.1 These are only the direct costs; they do not account for the treatment of victims or other less obvious impacts. For example, the unemployment numbers in the United States are a common topic of choice for politicians, as unemployment is a prime economic indicator. Consider then that having a criminal record can significantly reduce an individual’s long-term employment prospects.
In addition to the economic impact of crime, there is also the ongoing problem of identifying adequate resources to effectively address it. When I was assigned to FBIHQ, I worked in a section that addressed the important issue of missing and exploited children, and last year I spoke at a conference on a related topic here in Hungary. While researching for that talk, I learned that in 2017 the FBI reported more than 465,000 cases of missing children in the United States. Nonprofit organizations assist by sending tips to law enforcement, which ultimately end up with FBI analysts. Last year that team of 25 analysts received over 8 million tips.2 Thankfully, efforts are underway to apply AI to the deluge of information in these tips. Who could argue with the benefits of this use of AI?
As you can see, there are many ways crime impacts society. No wonder then, that the effort to improve management of crime has many supporters, and the potential benefits of AI applications are receiving much attention. Today as we speak, there are efforts underway in many countries including the United States to examine possible uses of AI in areas of law enforcement such as crime prevention, searching for missing persons or fugitives, and even predicting crime. I will just take a moment to identify a few of the most frequently discussed AI law enforcement applications beginning with crime detection.
Named for the company that developed it, ShotSpotter uses existing city infrastructure to triangulate the origin of gunfire. Having served on a multi-agency gang task force while in the FBI, I can attest from personal experience that it is possible for instances of gunfire to go unreported. Sadly, in some communities it becomes familiar background noise, and when it echoes in the distance, it is ignored by people all too used to it. ShotSpotter is testing a system that uses a network of sensors to detect gunshots and provide police with precise information on the type of weapon and the location of the shots. Before the system is activated, special cameras, lamps, and acoustic sensors are placed in target areas of the city. The data collected by the network is displayed on maps. Algorithms compare sensor data on noise levels and echoes bouncing off nearby structures, and match sound signatures against a database, giving police real-time information and thus dramatically shortening their response times. The company reports instances where officers responding to the AI-driven data saved a victim’s life by arriving at the scene of a shooting even though no one had called in the emergency.3
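ShotSpotter’s actual algorithms are proprietary, but the triangulation described above rests on a well-known idea: the same bang reaches each sensor at a slightly different time, and those differences pin down the origin. What follows is a minimal, purely illustrative sketch of time-difference-of-arrival (TDOA) multilateration; the sensor layout, arrival times, and solver choice are my assumptions, not ShotSpotter’s implementation.

```python
# Minimal time-difference-of-arrival (TDOA) sketch of acoustic gunshot
# localization. ShotSpotter's real pipeline is proprietary; all values
# here are illustrative.
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s at ~20 C; real systems correct for weather

# Known sensor positions (x, y) in meters and measured arrival times (s).
sensors = np.array([[0.0, 0.0], [400.0, 0.0], [0.0, 400.0], [400.0, 400.0]])
# Hypothetical readings, consistent with a shot fired near (100, 150).
arrivals = np.array([0.826, 1.278, 1.085, 1.439])

def residuals(p):
    """Mismatch between predicted and measured arrival-time differences,
    taking sensor 0 as the reference (emission time cancels out)."""
    dists = np.linalg.norm(sensors - p, axis=1)
    predicted_tdoa = (dists - dists[0]) / SPEED_OF_SOUND
    measured_tdoa = arrivals - arrivals[0]
    return predicted_tdoa - measured_tdoa

# Start the solver at the centroid of the sensor network.
fit = least_squares(residuals, x0=sensors.mean(axis=0))
print(f"Estimated shot origin: {fit.x.round(1)} m")  # ~[100. 150.]
```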
Just as ShotSpotter applies AI to acoustic analysis, other systems analyze visual input. Systems such as Cortica use AI capable of analyzing real-time footage to scan for possible criminal activity. Cortica can collect and study facial images and analyze behavior patterns; combined with X-ray imaging, it can detect the shapes, sizes, and dimensions of objects such as weapons.4 Other countries have taken this concept even further with AI that combines facial recognition with behavior pattern analysis to predict criminal intent. Such a system would key in on suspicious activity, such as pacing in front of a particular building or walking back and forth along a particular block, and would begin to track that individual consistently, based on the possibility that they might be casing a target location or waiting for an intended victim.5 Does this sound to anyone else as though we are approaching the world of the movie Minority Report?
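Cortica’s behavior models are likewise proprietary and operate on raw video. Purely to make the “pacing back and forth” heuristic concrete, here is a toy sketch that flags a pre-extracted track of positions confined to a small area with repeated direction reversals. The looks_like_pacing helper and every threshold in it are invented for illustration.

```python
# Toy heuristic for the "pacing back and forth" pattern described above.
# Real systems use learned models on video; this sketch only inspects a
# pre-extracted track of (x, y) positions, one sample per second.
import numpy as np

def looks_like_pacing(track, max_span_m=30.0, min_reversals=4):
    """Flag a track confined to a small area whose direction of travel
    reverses repeatedly: a crude stand-in for loitering detection."""
    track = np.asarray(track, dtype=float)
    span = track.max(axis=0) - track.min(axis=0)
    if span.max() > max_span_m:   # subject covers too much ground
        return False
    steps = np.diff(track, axis=0)
    # A reversal is two consecutive movement vectors pointing opposite ways.
    dots = (steps[:-1] * steps[1:]).sum(axis=1)
    return int((dots < 0).sum()) >= min_reversals

# A hypothetical track walking back and forth in front of one address:
xs = []
for _ in range(3):
    xs += list(range(0, 10)) + list(range(10, 0, -1))
print(looks_like_pacing([(x, 0.0) for x in xs]))        # True
print(looks_like_pacing([(i, i) for i in range(40)]))   # False: just passing
```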
There are other entities, some in the United States, that are using AI to predict crime. One such system, named PredPol, analyzes big data on past crime to predict future crime. I can attest to the soundness of the core premise that certain crimes tend to occur in specific areas over a specific range of time; I always compared this to predators claiming a particular territory within which to hunt. PredPol’s algorithm applies this principle to historical crime data and highlights potential crime scenes on maps. The police can then use this information to determine patrol priorities.6
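PredPol’s published model is considerably richer than this (it borrows mathematics from earthquake-aftershock forecasting), but the core premise vouched for above, that recent nearby crimes raise near-term risk in the same territory, can be sketched with a simple decayed-count grid. The cell size, half-life, and data below are placeholders, not the product’s parameters.

```python
# A drastically simplified hotspot sketch of the premise behind predictive
# policing tools: crimes cluster in space and time, so recent incidents
# raise a grid cell's score. Illustration only.
from collections import defaultdict

CELL_M = 150.0       # grid cell size in meters
HALF_LIFE_DAYS = 14  # how fast an incident's influence decays

def hotspot_scores(incidents, today):
    """incidents: iterable of (x_m, y_m, day_number). Returns a dict
    mapping grid cell -> sum of exponentially decayed incident weights."""
    scores = defaultdict(float)
    for x, y, day in incidents:
        cell = (int(x // CELL_M), int(y // CELL_M))
        age = today - day
        scores[cell] += 0.5 ** (age / HALF_LIFE_DAYS)  # exponential decay
    return scores

# Hypothetical burglary reports: (x, y, day). Two cluster in one cell.
reports = [(120, 80, 98), (140, 60, 95), (900, 900, 40)]
ranked = sorted(hotspot_scores(reports, today=100).items(),
                key=lambda kv: -kv[1])
for cell, score in ranked:
    print(cell, round(score, 2))   # top cells suggest patrol priorities
```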
Clearly, we have established that the genie is already out of the bottle. The technology exists and is being applied to law enforcement, and few can argue with its potential for improving law enforcement across a myriad of functions. But are we doing enough to consider the implications, of which there are many?
My concerns over the philosophical and ethical aspects of a future with Autonomous Robots as cops probably began in 1998. I was conducting an interview in a hotel in Fort Lauderdale, Florida, when two gunshots rang out. Initially, we paused to ask one another whether it might have been something else mimicking the sound of gunshots. Upon scanning the area, we did not see anything obvious, so we returned to our interview. Several minutes later, the phone rang and we were notified there was an active shooter in a room on a lower floor. We exited, crawling on the floor to a more secure location to allow the police SWAT team to take position. They used a robot to enter the shooter’s room after it had become ominously quiet. Once the robot confirmed the shooter was either unconscious or dead, the SWAT team entered. I was understandably impressed and humbled. The new ways were proving to be consistently more efficient and safer, but I immediately concluded that there are limits to the applications of these futuristic tools. That was 20 years ago. We are moving closer and closer to Autonomous Robocops guided by AI, and it seems others do not share my belief that there should be limits placed on these applications.
Still, I cannot find it within myself to reconcile the concept of a machine making life-or-death decisions. I cannot accept the idea of addressing crisis situations without a capacity to detect and weigh intangibles such as remorse, acting in the heat of the moment, or distress versus malicious intent. Experienced first responders can tell you that in many cases there is a look in an unbalanced, distressed, and dangerous person’s eye that clearly confirms the potential to do harm to themselves or another, but that also communicates to an experienced, empathetic responder that this person might respond to crisis mediation. There have been two occasions in my career where I found myself facing armed subjects, and based on my training, I would have been justified in shooting and killing them both. But based on my instincts and what was communicated between us verbally and nonverbally, I chose to give both of them the opportunity to surrender, and in both cases they did. In 1998, the robot was only used to give SWAT an operational advantage for entry; it was not able to act on its own. But will that remain the case as the technology continues to evolve? Dubai has already launched its Robocop prototype; although I am not certain it is truly autonomous, it certainly tempts us to imagine the next step in the evolution of AI-guided robot independence.
Let us also consider other areas related to law enforcement subject to AI application, such as the judicial process. What if strict adherence to sentencing guidelines leads to an eventual conclusion that some cases need no judge at all? Imagine if the defendant could opt for a computerized hearing where data containing the facts of the case is loaded into an AI, and a sentence is handed down based on those facts. One could argue that this might be an improvement. It could all but eliminate the sentencing discrepancies that reflect the varying temperaments of different judges. Any argument that some defendants might receive a lighter sentence due to race, ethnicity, perceived value to the community, etc., would no longer have sway.
Consider this: In a recent U.S. case indicating the potential for socioeconomic bias, a young white male named Ethan Couch was tried for killing four people while driving under the influence of alcohol and drugs. His defense attorney brought in a psychologist who testified that growing up with money and privilege might have left Ethan with psychological afflictions; specifically, his wealth and privilege robbed him of the ability to distinguish right from wrong. The press coined the term “affluenza.” Surely, these types of outrageous arguments would have little sway over the cold, logical assessment of an AI judiciary. But even if that were the case, is it really preferable? We must consider that AI can only be expected to reflect its programming. So if the same human bias that might accept “affluenza” as a credible defense is programmed into the AI, would not the results be the same?
These concerns are not without merit. AI is already being utilized by some courts, and already there are claims of bias. One predictive algorithm being pioneered in the United States to assess the risk of a released person committing a crime is named COMPAS, which stands for Correctional Offender Management Profiling for Alternative Sanctions. Independent studies have raised concerns that COMPAS may suffer from racially biased data in its programming. The studies found that COMPAS was twice as likely to misclassify black defendants who did not recidivate over a two-year period as higher risk for recidivism than white defendants with the same non-recidivist record (45 percent vs. 23 percent).7 These findings were disputed, but even the remote possibility that decisions to continue incarceration or release might be influenced by a racially biased AI program is enough for concern.
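The 45 percent versus 23 percent figures are group-wise false positive rates: among defendants who did not reoffend within two years, the share labeled high risk. Here is a short sketch of how such an audit is computed, run over fabricated records shaped to mirror the reported gap (not real COMPAS output):

```python
# The disparity cited above is a difference in false positive rates: among
# defendants who did NOT reoffend within two years, how often were they
# labeled high risk? A minimal audit sketch over hypothetical records.
def false_positive_rate(records, group):
    """records: list of dicts with 'group', 'predicted_high_risk' (bool),
    and 'reoffended' (bool). FPR is computed among non-reoffenders."""
    non_reoffenders = [r for r in records
                       if r["group"] == group and not r["reoffended"]]
    if not non_reoffenders:
        return float("nan")
    flagged = sum(r["predicted_high_risk"] for r in non_reoffenders)
    return flagged / len(non_reoffenders)

# Fabricated records, shaped to mirror the kind of gap the studies
# reported; this is not real COMPAS data.
records = (
    [{"group": "black", "predicted_high_risk": True,  "reoffended": False}] * 45
  + [{"group": "black", "predicted_high_risk": False, "reoffended": False}] * 55
  + [{"group": "white", "predicted_high_risk": True,  "reoffended": False}] * 23
  + [{"group": "white", "predicted_high_risk": False, "reoffended": False}] * 77
)
for g in ("black", "white"):
    print(g, round(false_positive_rate(records, g), 2))  # 0.45 vs 0.23
```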
These concerns are certainly valid and applicable to all areas of law enforcement where AI is used. I am all for the continued development of AI and autonomous robots for law enforcement, but I do not feel a computer program should be trusted to act totally independently of human control where human lives or civil liberties are at stake. I do not say this lightly. I am a career law enforcement officer. However, as an African American, I am extremely concerned with the number of unwarranted deaths of black men, possibly at the hands of overzealous, panicked, or poorly trained police. I have two grandsons who are the apple of my eye. Soon they will be teens, requesting to be dropped off at the mall to hang out with friends and later begging their parents for permission to use the family car. I fear for their safety and cannot help wondering if an AI-driven response to some of the past shootings would have been different.
In August, a U.S. police officer was indicted after shooting into a fleeing car and killing a teenage passenger who had no involvement in the incident to which the police had responded. It is not hard to imagine a robot response triggered by acoustic data from an AI such as ShotSpotter, followed by confirmation through facial recognition such as Cortica’s that the teen was not the suspect. The robot might then record the vehicle’s identification and interact with the city’s stationary and drone camera arrays for continued surveillance. It could then determine the optimum area to set up a roadblock and take the occupants into custody without incident, all with a cold, dispassionate disposition, free of the adrenaline surge that I feel leads to many of these unfortunate killings. If there is an element present in Caucasian, Latino, or even African American law officers that subconsciously or openly values the lives of young black men to a lesser degree, it would not be present in a robot unless, of course, it was placed there by the humans programming it.
But as the concerns over COMPAS indicate, the possibility of human contamination in programming does exist, and it is the reason I withhold endorsement of full autonomous capacity. Earlier I mentioned PredPol, which uses AI to direct policing efforts based on data that has been programmed into the system. But what if the data itself is not free of bias? What if the arrest numbers are the result of disproportionate policing in some neighborhoods? What if criminal conviction stats are skewed due to disparity in access to quality defense representation? What if victim data is skewed because the residents of a particular community do not trust the police or fear further harm if they file complaints? We have a saying in the United States: “Garbage in, garbage out.” If the data is compromised, then the resulting directions from the AI will not be free of bias.
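“Garbage in, garbage out” can be made concrete with a toy feedback-loop simulation: two neighborhoods with identical true offense rates, but an initially uneven patrol allocation that is then “updated” from arrest counts. All numbers below are invented; the point is only that the data can manufacture the disparity it then appears to confirm.

```python
# Toy feedback loop: two neighborhoods with IDENTICAL true offense rates,
# but neighborhood A starts with more patrols, so it generates more
# arrests, which draws yet more patrols. Purely illustrative numbers.
import random

random.seed(1)
TRUE_OFFENSE_RATE = 0.05          # the same in both neighborhoods
patrols = {"A": 8, "B": 2}        # initial (biased) split of 10 units
arrests = {"A": 0, "B": 0}

for week in range(52):
    for hood, units in patrols.items():
        # Each patrol unit observes 100 residents; arrests scale with
        # police presence, not with any real difference in offending.
        observed = units * 100
        arrests[hood] += sum(random.random() < TRUE_OFFENSE_RATE
                             for _ in range(observed))
    # "Data-driven" reallocation: patrols follow historical arrest counts.
    total = arrests["A"] + arrests["B"]
    patrols["A"] = round(10 * arrests["A"] / total)
    patrols["B"] = 10 - patrols["A"]

print(arrests, patrols)  # A dominates despite identical true rates
```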
Our humanity is our most important asset for serving the public. Our passions can be the element that fuels a desire to offer a second chance or risk our own lives to save others. But those same passions can also trigger a deadly survival response if we are panicked, enraged, or on high alert. I am a firm believer in community policing and the value of officers being part of the neighborhoods they serve. I believe it is essential they have the ability to recognize when something is out of place, which gives rise to reasonable suspicion. I believe in the importance of these officers as a sympathetic ear or a shelter in the midst of that once-in-a-lifetime crisis that touches a person or family. Whether it is delivering a baby in a taxi, comforting an assault victim, or calming a potential suicide victim—the officer’s humanity is essential. How then do we reconcile the nature of community policing with AI policing?
I have no doubt that increasingly sophisticated AI and robot technology will win out and redefine how many law enforcement functions are conducted. It is imperative that as this change is taking place we prioritize the preservation of those standards essential to the protection of human rights. The public must feel assured that AI involvement in law enforcement is regulated and governed by some standard that protects their rights.
During my tenure at FBIHQ, the FBI was embroiled in two controversial engagements that drew criticism from the public and Congress. The shootout between the FBI and the family of Randy Weaver at Ruby Ridge, Idaho, and the destruction of the Branch Davidian compound in Waco, Texas, resulted in hearings to determine whether the actions of the FBI were appropriate. During the siege at Ruby Ridge, a U.S. Marshal and Weaver’s 14-year-old son lost their lives in an exchange of gunfire after the Marshal attempted to serve an arrest warrant. Weaver’s wife also died in the ensuing conflict. At the Branch Davidian compound, four Alcohol, Tobacco, and Firearms (ATF) Agents were killed when they attempted to arrest David Koresh. The ensuing confrontation resulted in the deaths of more than 70 Branch Davidians (so named for their cult leader, David Koresh). Many of those killed were children.
At the center of the hearings in both cases was the decision-making process resulting in the use of deadly force or the decision to breach. I feel the resulting visceral public outrage and predictable subsequent congressional criticism were largely due to the deaths of Weaver’s wife and child and the deaths of a large number of people, including many children, at Waco, which offended the public’s sensitivities. Many of us trained in law enforcement would ultimately place the blame at the feet of Randy Weaver and David Koresh for placing their biological and spiritual families in harm’s way as a result of their actions. However, that logic could not overcome the emotional public response.
This raises the question: how would the public respond to an incident similar to these but resulting from actions decided upon and taken by robotic AI? Of course, just as these incidents led to reviews by lawmakers and oversight committees, AI should be subjected to the same scrutiny and governing authorities. One solution under consideration is the use of algorithmic impact assessments, which would compel agencies to fully disclose the AI systems they use and what the data is being used for, and to grant unrestricted access to inspect those systems to ensure proper operation within specific parameters.8
I am a huge comic book and sci-fi fan, so I am particularly impressed with the company Axon for basing its AI ethics oversight on Spider-Man’s mantra, “With great power comes great responsibility.” In the case of this law enforcement technology company, that means a responsibility to ensure full transparency and accountability regarding its products, especially given the implications AI has for civil liberties and privacy. Since this bold initiative was announced, there have been suggestions that other tech giants such as Google create oversight boards of their own. Axon argues that this will help secure the media as an ally, and thus the public.9
So, to summarize, I certainly acknowledge the steady march toward a modern law enforcement model infused with AI at almost every level, from policing to the judiciary as well as corrections and post-release supervision; and I have no problem with that. I strongly believe that law enforcement must always avail itself of advanced technology to keep pace with those same advancements being incorporated by criminals. It must make use of any new technology that affords greater protections to responders and allows them to better protect the public they serve. But there is an inherent responsibility for those designing and implementing the system to understand not only its potential for improvement, but also the potential damage it can do if exploited or poorly managed.
I believe that if used properly, AI will be an invaluable tool. Tool, however, is the operative word. I do not believe anything devoid of a soul should make final decisions on life or death or the removal of freedoms. In this capacity AI should work for us, not replace us. In those instances where we decide its performance has reached a satisfactory standard that justifies limited autonomous operation, we should then:
A) Allow rigorous testing to ensure no bias is programmed into the AI system. This is particularly important when the programming relies on historical crime statistics. Every effort must be made to eliminate bias in reporting and identify factors that can skew results (a minimal sketch of such a test follows this list).
B) Ensure full transparency. The public should have a reason to trust this new technology. They should not be left to feel as though their civil liberties are compromised by the use of AI.
C) Develop an accountability mechanism that routinely inspects AI performance and activities to ensure the AI operates solely within designed and approved parameters.
D) Ensure the AI is not independent. We should never become so complacent that we simply allow and trust AI to direct itself. I could always make decisions as a police officer and FBI Agent, and as I rose through the ranks in both jobs, I gained more and more autonomy in my decision making, but there was always a level above me to which I answered. There was always a level of activity or circumstance that was beyond my authority to act upon without approval. So it should always be with AI in law enforcement.
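As a minimal illustration of recommendation (A), bias testing can be made a routine, automated gate rather than a one-time study. The sketch below fails any model whose group-level false positive rates diverge too far; the metric, the threshold, and the data are all placeholder choices, not an established standard.

```python
# One way to operationalize recommendation (A): an automated audit gate a
# department could run before each model update, blocking deployment when
# group-level error rates diverge. All values here are placeholders.
def flag_rate(records, group):
    """Share of a group's non-reoffenders labeled high risk (their FPR)."""
    pool = [r for r in records if r["group"] == group and not r["reoffended"]]
    return sum(r["flagged"] for r in pool) / len(pool) if pool else 0.0

def audit_gate(records, groups, max_gap=0.05):
    """Return (passed, gap) so a review pipeline can block deployment."""
    rates = [flag_rate(records, g) for g in groups]
    gap = max(rates) - min(rates)
    return gap <= max_gap, gap

# Hypothetical validation set drawn from two groups:
data = ([{"group": "g1", "flagged": True,  "reoffended": False}] * 30
      + [{"group": "g1", "flagged": False, "reoffended": False}] * 70
      + [{"group": "g2", "flagged": True,  "reoffended": False}] * 18
      + [{"group": "g2", "flagged": False, "reoffended": False}] * 82)
ok, gap = audit_gate(data, ("g1", "g2"))
print(f"audit {'passed' if ok else 'FAILED'}: FPR gap = {gap:.2f}")  # 0.12
```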
The protection of life and human rights should always be at the top of law enforcement mission priorities, and the manner in which these very human issues are addressed should be forever determined by humans.
Richard B. Hoskins
Bibliography
1. Faggella, Daniel. “AI for Crime Prevention and Detection – 5 Current Applications.” Emerj, emerj.com/ai-sector-overviews/ai-crime-prevention-5-current-applications/.
2. “Intel-Powered AI Helps Find Missing Children.” Intel, www.intel.com/content/www/us/en/analytics/artificial-intelligence/article/ai-helps-find-kids.html.
3. Faggella, Daniel. “AI for Crime Prevention and Detection – 5 Current Applications.” Emerj, emerj.com/ai-sector-overviews/ai-crime-prevention-5-current-applications/.
4. Serena, Katie. “Police Are Using New ‘Crime-Predicting’ Technology to Monitor the Public.” All That’s Interesting, 12 Apr. 2018.
5. Faggella, Daniel. “AI for Crime Prevention and Detection – 5 Current Applications.” Emerj, emerj.com/ai-sector-overviews/ai-crime-prevention-5-current-applications/.
6., 7. Faggella, Daniel. “AI for Crime Prevention and Detection – 5 Current Applications.” Emerj, emerj.com/ai-sector-overviews/ai-crime-prevention-5-current-applications/; and
Rieland, Randy. “Artificial Intelligence Is Now Used to Predict Crime. But Is It Biased?” Smithsonian.com, Smithsonian Institution, 5 Mar. 2018, www.smithsonianmag.com/innovation/artificial-intelligence-is-now-used-predict-crime-is-it-biased-180968337/.
8. Rieland, Randy. “Artificial Intelligence Is Now Used to Predict Crime. But Is It Biased?” Smithsonian.com, Smithsonian Institution, 5 Mar. 2018, www.smithsonianmag.com/innovation/artificial-intelligence-is-now-used-predict-crime-is-it-biased-180968337/; and
Levinson-Waldman, Rachel, and Erica Posey. “Predictive Policing Goes to Court.” Brennan Center for Justice, 5 Sept. 2017, www.brennancenter.org/blog/predictive-policing-goes-court; and
AI Now Institute. “Algorithmic Impact Assessments: Toward Accountable Automation in Public Agencies.” Medium.com, 21 Feb. 2018, medium.com/@AINowInstitute/algorithmic-impact-assessments-toward-accountable-automation-in-public-agencies-bd9856e6fdde.
9. Shead, Sam. “Google’s Mysterious AI Ethics Board Should Be Transparent Like Axon’s.” Forbes, Forbes Magazine, 28 Apr. 2018, www.forbes.com/sites/samshead/2018/04/27/googles-mysterious-ai-ethics-board-should-be-as-transparent-as-axons/#741c0f9119d.