An interesting development in the AI field of late has been the upsurge of interest in having philosophers, particularly ethicists, embedded in the development process of machine intelligence.
The creation of Google DeepMind’s Ethics & Society unit, with its group of eminent thinkers or ‘Fellows’, is seen as an early indicator that the ethical and moral dimensions of new machine intelligence should not be overlooked in the rush to reach the frontier of new technology and products. It would also be fair to say, however, that it mirrors disquiet in some quarters as to where these new technological boundaries might lie, and the possibility that the consequences of some developments in artificial intelligence might not prove beneficial.
It will be fascinating to see how this works out. It was, after all, the Berkeley philosopher Hubert Dreyfus who, in the 1960s, famously described and dismissed artificial intelligence in its early days as ‘alchemy’.
Yet Dreyfus was not merely a lone voice. Like many good philosophers, he ‘stood on the shoulders of giants’ and would have been the first to admit that he had been greatly influenced by famous predecessors such as Martin Heidegger and, before him, Edmund Husserl and, to a lesser extent, Immanuel Kant. Heidegger especially concerned himself with the negative impact of technology on man and mankind: for him, aspects of technology diluted the ‘authenticity’ of man, as he repeatedly stressed in his famous work ‘Being and Time’. Dreyfus himself often criticized conceptions of AI that flowed from the assumptions of Cartesian dualism.
One wonders also what AI developers would make of the work of Ludwig Wittgenstein and his philosophical musings on the concept of a ‘private language’. In 2017, Facebook researchers reportedly shut down an experiment in which two AI agents began conversing in a shorthand of their own devising, without being programmed to do so – the language they used was, in effect, private. Could philosophy have something to say on the implications of such unexpected technical ‘glitches’, whether in terms of linguistic precision in programming or in interpreting the unanticipated in mechanical predictability and cause and effect?
Perhaps, if AI developers are sincere in sourcing the wisdom of the ethical mind, they might go deeper into philosophical archaeology and consider the work of the great medieval mind, Saint Thomas Aquinas. His thirteenth-century magnum opus, the ‘Summa Theologica’, synthesized a significant range of issues and influenced not only his peers but also a wide array of contemporary philosophers and ethicists.
For example, consider his teleological perspectives grounded in creation, his views on what constitutes intelligence and intellect within a wider epistemological debate, and the abstract concepts of soul and virtue – concepts that, we believe, make humans unique. As the debate on the future of AI and sentient machines opens, how can we avoid acknowledging the debt we owe Aquinas?
The important question for AI developers must be how best to utilize philosophical enquiry. One would like to think that it could enlighten the consideration of difficult concepts such as cognition, sense perception, neural networks, probability and intent. Possibly, AI developers would find enough to detain them in the works of the likes of G.E.M. Anscombe, Peter Geach, John Searle, Gilbert Ryle, Charles Taylor and Alasdair MacIntyre, if the analytical mind and questions of intention and purpose are critical factors.
On the other hand, one could also recommend an alternative approach to the development of some future artificial general intelligence – one resting less on an epistemological context than on a more nuanced attempt to capture human ‘authenticity’, where the response to natural phenomena might shed light on what is real. Taking their lead from Heideggerian ‘authenticity’ via existentialists and phenomenologists such as Jean-Paul Sartre, Maurice Merleau-Ponty and Emmanuel Levinas, could AI developers identify an alternative nexus of science, technology and ethics?
However, the AI community must also be aware of the warnings inherent in some philosophical investigations. There will be philosophers who regard enhanced machine intelligence with ambivalence or, worse, with alarm, unless developers can convincingly ensure the ethical ‘governance’ of machines. The rise of AI think tanks specializing in beneficial AI reflects these concerns, whether they arise in the life sciences, the evolution of lethal autonomous weapon systems or developments in the ‘internet of things’. Harvesting philosophical perspective in such places has already generated a significant level of public and AI-community interest in the thoughts of philosophers such as Margaret Boden, Nick Bostrom and Max Tegmark, to name but a few.
Such warnings are important. Whilst Hubert Dreyfus channelled his criticism through learned and scholarly endeavor, others similarly concerned were less restrained. Think of Dreyfus’s Berkeley contemporary, the mathematician Ted Kaczynski, who so feared the pernicious influence of modern technology on mankind that he dropped out of society and eventually waged a long campaign of terrorism to press this concern. The so-called ‘Unabomber’ felt unable at the time to frame his warnings peacefully, although he did produce a manifesto, ‘Industrial Society and Its Future’, in which he set out a philosophical warning.
Undoubtedly, introducing ethics to AI developers has merit, although it would be unwise to think that philosophers have all the answers, or indeed any answers. It was Henry Kissinger, in a recent article in ‘The Atlantic’ entitled ‘How the Enlightenment Ends’, who expressed deep concern at the implications of the development of AI, which he argues “has generated a potentially dominating technology in search of a guiding philosophy.” For him, philosophy most definitely should have a place at the table.
Yet when did mankind ever have a guiding philosophy? Arguably we might have to return to Thomas Aquinas and the thirteenth century, a time when he felt able to synthesize what was known of Greek, Jewish, Islamic and Latin learning into an overarching framework of life. Perhaps AI and philosophy can go back to the future!