A recurring aspect of the Artificial Intelligence (AI) debate – at least in the popular media – is the fear of existential risk. Figures like Stephen Hawking, Elon Musk, Bill Gates and Nick Bostrom regularly feature in discussions of a potential dark side to AI, namely that unless the development of certain types of AI is better understood and rigorously controlled, humanity could be facing a bleak future.
Typically, such warnings are reinforced by the tendency to place the AI existential risk alongside other equally terrifying threats emanating from nuclear conflict, global pandemics or even the Earth being hit by an asteroid. Taken together, they are all plausible, although quantifying the risk – differentiating the probable from the merely possible – is less clear-cut. Nevertheless, the underlying theme is humanity’s vulnerability, and AI could clearly become a contributor to this vulnerability.
Therefore perhaps the fundamental question is this: just how accurate are these claims?
Perhaps we need to be clear about who is actually making these risk assessments, to evaluate the system upon which they are being made and to consider the influences shaping the process.
The individuals mentioned above represent a judicious blend of scientist, researcher and technical innovator. They have more than a passing acquaintance with the subject and, more to the point, they are investing heavily in the field of AI and ‘safeguarding’. Indeed, there are other specialists who would tend to agree with them.
Yet for all its weight, this assessment of a pressing existential risk might be imbalanced. For example, by far the majority of specialists working in the AI field do not contribute to any systematic assessment of risk, and if they do, there is every chance it is localized to their particular field of research or development.
Of course there are other commentators contributing to this process, perhaps not all technical in background but nevertheless offering some insight into the more general appreciation of risk. For example, the science fiction community has for many years brought us dystopian AI futures, ranging from Asimov’s ‘I, Robot’ and Arthur C. Clarke’s infamous computer ‘HAL’ to Aldous Huxley’s ‘Brave New World’ and Philip K. Dick’s sentient androids, brought to the screen in ‘Blade Runner’. Not to be outdone, TV and cinema have given us AI existential risk through ‘Terminator’ and, more recently, ‘Westworld’. The subliminal messages of existential risk associated with AI are often overstated or conflated with other associated risks, and people often have difficulty distinguishing between science fact and fiction.
It would be unwise, however, to ignore such representations entirely, as fiction can and often does provide insightful glimpses into future technologies.
A similar case can be made concerning commentators from the philosophical and theological communities. I think it was Aquinas who said that ‘Only God Creates’, and even a cursory examination of the thirteenth-century friar’s work on creation, on intelligence and, crucially, on what constitutes humanity reveals the basis of many questions in today’s AI debate. In more modern times, the writings of Heidegger hinted at the malign influence of technology, which for him was damaging man’s ‘authenticity’, while the Berkeley academic and philosopher Hubert Dreyfus made the early claim that AI was akin to ‘alchemy’. In addition, Pope Benedict often criticized technology, especially in light of developments in the life sciences, which are seriously indebted to AI; he feared that humanity itself would become an endangered species and that society would become enthralled to must-have designer technologies serving our instant gratification.
Irrespective of the merits of such contributions to risk – and to be sure, many of the writers or dramatists mentioned above would not see themselves as making such contributions – arguably the more pressing question is how risk is assessed.
There is no shortage of risk assessment systems, but in terms of AI and existential risk, no single system seems paramount. Yet for an assessment to be robust – for good or bad – there must be some methodology people can use to arrive at a reasonable appreciation. For example, how does one differentiate between threat and risk? By what means can we determine the validity of information in an age of ‘false news’ and ‘false truth’? Where should we get our information in the first place?
Encouragement needs to be offered to those sufficiently interested in AI risk to start contemplating a specific AI risk analysis and assessment process. A more structured and indeed more holistic approach should not be beyond society. Even a cursory examination of the varied but informed comment on the risks associated with AI across a number of disciplines will highlight the benefits of having a system that can differentiate between the short and long term, and between general AI and superintelligence. More importantly, can we create a system of indicators and warnings to support the management of this risk?
It is also important to acknowledge that, even with no formal risk assessment procedure in place, society has every right to worry about some developments associated with AI, even if the worry is less acute than the more existential warnings. The ethical risk inherent in AI-related gene sequencing and mutation cannot be casually ignored. Neither can the potential impact of AI on the employment prospects of millions of people globally. Possibly the most urgent risk is that associated with the development of Lethal Autonomous Weapon Systems. Discussions over the last three years at the United Nations have moved the international community no closer to the prohibition of these weapons – many of which are already here – and the risk of such weapon designers taking advantage of developments in other cutting-edge technology, such as nanotechnology, compels us to consider the implications for humanity’s well-being.
As the promise of AI seems to be coming to fruition, there will inevitably be two competing visions of humanity’s future: shall we live in Shangri-La or the Tower of Babel? If it is the latter, we need to know now so we can start digging ourselves out of a hole. However, we will be unable to do this without a rigorous and holistic risk assessment procedure.
This presentation was given by Andrew Dolan on 8 December 2017 at the AI Forum Hungary, Budapest