A recent BBC news item on the death of a former Soviet Air Force Colonel* was a timely reminder – should one be necessary – of the perils of removing humans from the decision-making ‘loop’ associated with strategic weapon systems.
* ‘Stanislav Petrov, who averted possible nuclear war, dies at 77’, bbc.co.uk/news/world-europe-41314948
Back in 1983, Stanislav Petrov was the duty officer at the Soviet Union’s nuclear early-warning centre when its computer systems indicated that incoming American ballistic missiles had been detected. His decision to ignore the computer alerts and report a false alarm to his higher authority was both brave and correct: brave, in that by defying protocol in such circumstances he stood – assuming he survived – to face serious charges of dereliction of duty; and correct, insofar as the subsequent investigation concluded that the Soviet early-warning satellites had mistaken sunlight reflecting off clouds for the engines of American missiles.
Is it possible to imagine such a scenario today?
Back in 1983, the exploitation of intelligent machines, whether sensor platforms or information-processing algorithms, was aimed at enhancing decision-making and the speed of reaction. Machines that could think faster than humans were believed to offer a ‘force multiplier’ of sorts. Indeed, in relation to nuclear weapons, reducing or diluting human control would also, arguably, eliminate emotion, thus preventing the kind of hesitation that Petrov had so ably demonstrated.
Today, as commentators speculate on the benefits of AI for weaponry, it is not unusual to hear such arguments regurgitated: weapon systems that are intelligent and emotionless, so the reasoning goes, are less prone to disobeying instructions or conducting unsanctioned operations. In essence, machines cannot be wrong; only their human programmers and operators can.
Yet as the 1983 scare demonstrated, machines can all too easily malfunction. Humans make mistakes all the time, and since humans write the programs and operate the systems, it follows that programming and operation can be erroneous too.
However, the circumstances which allowed Stanislav Petrov to disrupt the system might no longer apply to the Lethal Autonomous Weapon Systems (LAWS) of the future. In the first instance, deep-learning and neural systems might become sufficiently aware of imprecision in their task fulfilment to use their own ‘intelligence’ to self-correct or improve – in short, to enhance their efficacy through learning that operators might struggle to understand and influence. Secondly, this ‘recalibration’ might logically be undertaken through a well-established method of self-improvement, namely trial and error. By definition, error might be considered an acceptable step on the way to a more accurate or effective means of fulfilling the mission.
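To make the trial-and-error point concrete, here is a minimal, purely illustrative sketch of that style of learning in Python. Everything in it is an assumption invented for illustration – the candidate ‘policies’, their success rates, and the reward signal bear no relation to any real weapon system – but it shows the essential point: such a learner treats errors not as malfunctions but as the raw material of self-improvement.

```python
# Illustrative sketch only: an epsilon-greedy learner choosing between
# hypothetical "policies". Errors are not exceptional events here; the
# algorithm deliberately tries options that may fail in order to learn.
import random

ACTIONS = ["policy_a", "policy_b", "policy_c"]                       # invented names
TRUE_SUCCESS = {"policy_a": 0.2, "policy_b": 0.5, "policy_c": 0.8}   # hidden from the learner

estimates = {a: 0.0 for a in ACTIONS}   # the learner's running estimate of each policy's value
counts = {a: 0 for a in ACTIONS}
EPSILON = 0.1                           # fraction of the time spent deliberately experimenting

def simulated_outcome(action: str) -> float:
    """Stand-in for the environment: 1.0 on 'success', 0.0 on 'error'."""
    return 1.0 if random.random() < TRUE_SUCCESS[action] else 0.0

for step in range(10_000):
    # Explore with probability EPSILON, otherwise exploit the current best estimate.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)           # a trial that may well be an error
    else:
        action = max(ACTIONS, key=estimates.get)
    reward = simulated_outcome(action)
    counts[action] += 1
    # Incremental average: each outcome, including every error, nudges the estimate
    # and so reshapes the system's future behaviour.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # drifts towards TRUE_SUCCESS without any operator intervention
```

The uncomfortable feature, from a command-and-control perspective, is that the errors are not faults to be eliminated; they are the very mechanism by which the system improves, and the operator observes only the resulting behaviour, not the reasoning behind it.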
Designers of LAWS might disagree, and they might be right to do so. Furthermore, they might point out that human control of machines that support instantaneous reaction, especially in relation to nuclear weapons, remains the default option for the states that possess them. History, however, suggests that a totally autonomous command-and-control system was contemplated during the Cold War – the Soviet Union certainly considered just such a system. Might we not see its like again, only this time supported by super machine intelligence?
As we move closer to a global debate on the possible banning of LAWS, there is an urgent need to be better informed about the potential consequences of certain AI developments in the security field. History tells us that machines do make mistakes: some from human error, some from technical malfunction, and possibly, in the future, from the very machine intelligence with which we endow LAWS but which we might not, indeed cannot, fully comprehend. Indeed, who is to say that future LAWS, or even the technology that controls nuclear weapon systems, might not – as if inspired by Wittgenstein’s ‘private language’ argument – develop a new paradigm for machine communication that we can neither understand nor translate? Perhaps those who foresee existential risk in such a development have a point. Certainly scientists like Stephen Hawking and technologists like Bill Gates and Elon Musk seem to think so.