Reading Stuart Russell’s new book, ‘Human Compatible’, which ventures into the depths of superhuman machine intelligence, I was reminded of an earlier work of science fiction that addressed humanity’s interface with non-human intelligence.
In John Wyndham’s ‘The Kraken Wakes’, a typical 1950s science fiction thriller, the global community faces a threat from the seas, brought about by some form of non-human intelligence. Early in this encounter, a key scientific ‘outlier’ suggests that a profitable way to engage is to adopt a pacific and accommodating stance.
This proposal and the novel’s discussion around it are neatly summed up in a rather stark statement:
“ . . . it’s a matter of instinct, not reason. The instinct of self-protection is opposed to the very idea of an alien intelligence – and not without pretty good cause. It’s difficult to imagine any kind of intelligence, except a sheer abstraction, that wouldn’t be concerned to modify its environment for its own betterment. But it’s very unlikely that the ideas of betterment held by two different types would be identical – so unlikely that it suggests a hypothesis that, given two intelligent species with differing requirements on one planet, it is inevitable that, sooner or later, one will exterminate the other.” The Kraken Wakes, p. 68.
Certainly Stuart Russell is making no claims that superhuman machine intelligence is simply biding its time before ‘breaking out’ and taking control. After all, critics would argue, artificial intelligence is not alien intelligence – man will most likely determine the limits of his own destiny.
Yet the juxtaposition of the current concerns about machine intelligence and fictional exploration of alien intelligence over sixty years ago does offer some food for thought.
It might not be far-fetched to suggest that when we eventually do approach the critical step of creating superhuman machine intelligence, we will in fact be facing an alien intelligence, especially once this intelligence uses its capabilities to design and produce subsequent generations of machines and applications.
It also raises the question of how we should respond to such circumstances, a situation which Russell’s work faces squarely. Will it be accommodation or switch-off? Historians writing about the ‘first encounters’ between different peoples or groups, or crucially about the struggles of competing parties, seldom fail to highlight the often ‘zero-sum’ conflicts that have formed and shaped the history of mankind. Traditional concepts of fear of the unknown, suppression, control, predominance, slavery and ‘Darwinism’ might offer a clue to the direction of our responses to a fearsome intelligence.
Will subsequent generations of superhuman machine intelligence have their own consciousness about their struggles and ‘extermination of the other’?
Whilst there have been many notable examples of historical accommodation and engagement between ‘others’, suggesting that a ‘modus vivendi’ can be achieved, it still has to be recognized that this is rarely the default position, and that suspicion and hostility are rarely far from the surface of ‘alien’ accommodation.
In terms of thinking now about the eventual arrival of artificial or alien intelligence, such consideration is far from idle speculation. We are often presented with concepts such as the so-called ‘kill switch’ for machines, but have we framed a discussion on ‘accommodation’?
Machine designers and AI ethicists will rightly highlight programming as one area where we still have control. Surely we can reduce fear and uncertainty by programming in ‘good human attributes’, although agreement on what these might be is far from settled. How does one create ‘compatibility’, for example?
One of the major problems I foresee in the use of preemptive coding to constrain machine intelligence is that humanity simply seems incapable of determining what attributes constitute us – in short, what does it mean to be human? What ethical baseline shall we use to help shape this quest? How do we code machines now to reinforce the concept that man is harmless, in the further hope that superhuman machine intelligence inherits these features and traits of its creators? An even more awkward question is how this sits alongside the more harmful (at least to machines) notion of inbuilt ‘kill switches’.
In his novel, John Wyndham, writing during the early Cold War years against a backdrop of atomic bombs and the fear of global annihilation, suggests that man is, by default, not built for accommodation with the unknown. Maybe we are approaching our own time to face alien intelligence, and perhaps the time available is less to do with developing algorithms and more to do with educating mankind to the point where accommodation is an option to consider alongside more traditional concepts of dominance.
The problem, of course, is that we may not be familiar enough with the concept of accommodation, or with what it would have to look like, given that we do not really know how much time is left to learn to speak the same language as the new machine arrivals.