By Andrew Dolan 20 July 2021
A studied reflection on current developments in Artificial Intelligence (AI) suggests some insightful trends. These trends are by no means representative of all of the technical developments in the field – but perhaps they do generate some issues which require elucidation, or at least comment.
Arguably the most important aspect of the grand narrative is that the debate on whether or not AI is an existential risk has seemingly gone to ground. Some might say, and with some justification, that the risk from global pandemics is a here-and-now reality and, as such, has more immediacy. Certainly the need for science and technology to focus on supporting medical responses to the current COVID pandemic has somewhat distracted from and diluted the debate on AI as a pressing challenge. There also seems to be less hype surrounding AI as an existential risk from those who, until recently, were keen to inform the public of the risk of what could turn out to be ‘the worst event in the history of our civilization’, namely the uncontrolled development of AI.
Similarly, those critics of AI who casually and incorrectly linked AI and robotics development with the creation of sentient machines with overactive neural networks have also modified their position – if not in terms of the future dystopian risk of a robotic world, then at least regarding the probable date when this might occur. Perhaps this modification has been influenced by a number of studies from within the scientific community which have downplayed the near-term prospects of intelligent machines, never mind robots.
This line of argument should not be seen as a refusal to concede that at some indeterminate date in the future, AI research and development will make a substantial breakthrough in robotics. The point is not that it is impossible – rather that it is unlikely until some major technological breakthrough occurs, one that demonstrates a deeper insight into issues such as the concept of intelligence, the functioning of the brain or the alignment problem. When will an intelligent machine begin to use our mind maps?
Associated with the general issue of robotics is the undoubted forging ahead with the exploitation of AI in weapons development. It would seem that the community engaged in supporting some form of prohibition against the development of Lethal Autonomous Weapon Systems (LAWS) has made precious little progress. Indeed, the deployment and successful use of drones in the recent military clash between Armenia and Azerbaijan has done little to dissuade military communities from the view that incremental AI enhancement of weapon systems is a vital military objective.
However, the trend most obviously seen but perhaps not yet fully understood is the worrying role that AI might play in the development of information technologies that are contributing to a less than healthy future. The warnings of so-called ‘surveillance capitalism’ are already with us, and they are now joined by popular resistance to forms of medical surveillance, ranging from track-and-trace systems to vaccine passports.
Perhaps such developments are worrying because they seldom seem to penetrate the mind-set of large sections of our society. The fact that so much of our daily existence is now ‘linked’ or, perhaps more accurately, ‘networked’ to grids and matrices and driven, influenced or controlled by complex algorithms has yet to be fully assessed in terms of behavioural modification. So much of our current and future daily lives stands exposed to modification at one end and risk at the other, whether from a failure of critical network infrastructure or from an inability to network at all.
Perhaps the lesson to take away at the moment on AI is ‘the dog that didn’t bark’ – namely, the absence of a robust public policy debate on AI. We are not by any means in an AI ‘winter’, but the lull in spectacular technology races and leaps should offer us an opportunity to reflect more on the real story of AI development: the incremental and evolutionary exploitation of small but significant advances that are steadily modifying much of what we take for granted about how we live. A failure to grasp the meaning and relevance of this moment would be a wasted opportunity that we are unlikely to get back once it has passed.
Developing a public policy philosophy on AI now seems logical and timely.