Not long after his death last year, Stephen Hawking’s final literary contribution was published under the title ‘Brief Answers to the Big Questions’. In chapter 9, he returned to a subject that had greatly vexed him in his later years, namely ‘Will artificial intelligence outsmart us?’

Hawking was known to harbour deep reservations about some aspects of artificial intelligence (AI), although it is also fair to say that he greatly welcomed the potential benefits to humanity that he and many commentators envisaged.

These reservations, by and large, focused on what might be described as ‘Artificial Super Intelligence’: a future context in which machine intelligence far exceeds human capabilities and has the capacity to rapidly self-improve. Indeed, as he describes in his posthumous book, AI could develop “a will of its own, a will that is in conflict with ours.”

Such views were neither unique nor new: Hawking was, after all, one of the leading voices who, along with Elon Musk and Bill Gates, called in 2015 for a more informed debate about the future risks associated with some forms of AI. Several years on from this wake-up call, is it possible to assess how such warnings have shaped the AI debate?

Judging by recent discussions at the United Nations on the use of lethal autonomous weapon systems (LAWS), to take one example, states seem reluctant to ban or even control such weapons. Admittedly, Stephen Hawking did not apply his considerable intellect to developing a comprehensive technical response to this AI-related development, but his sentiment was clear: down the line, an existential threat might emerge. To date, however, little seems to have changed, and a LAWS ‘arms race’ cannot be discounted.

Arguably, however, Stephen Hawking’s most significant contribution to the AI debate was to encourage the development of specialist think tanks and research groups to study the risks associated with AI. His book mentions two in particular, the Future of Life Institute and the Leverhulme Centre for the Future of Intelligence, as examples of rigorous, interdisciplinary research.

Similarly, the move by ‘Big Tech’ to develop in-house ‘ethics boards’ would also have found favour with Hawking. The notion that an advisory body comprising technologists, scientists and philosophers could genuinely embed ethics within a commercial organization whose AI strategy is heavily influenced by profit is unusual, and quite innovative.

Yet how far did Hawking’s concerns, as expressed in his last book, really address the ‘Big Questions’?

It could be argued that Stephen Hawking tantalizingly raised the most central of questions at the outset of his chapter but failed to develop it in detail. “Intelligence,” he stated, “is central to what it means to be human.” Can we really conceive of AI as possessing the range of human qualities we normally assign to our own species, from abstraction to empathy? What can we say about AI intention? Will intelligence evolve beyond biology? Is a new form of evolution ahead of us?

It would have been fascinating to learn what the great scientist would have said about such aspects of humanity. Stephen Hawking was no philosopher, but his curiosity was legendary. One of his legacies may well be that, in supporting greater research on AI and encouraging an ethical approach, he opened the door to a much deeper form of philosophical investigation.

There is no doubt that the concerns Stephen Hawking expressed about AI remain pertinent. The ‘Big Question’ has been probed but not fully answered, if indeed an answer is possible. Failure to think about the consequences of AI’s development, however, might leave us pondering whether it can be controlled at all.
