THE CHALLENGES OF A NEW PHASE IN AI DEVELOPMENT by Andrew Dolan – 28 May 2023

Judging by the level of media interest, the development of AI has seemingly entered a new phase. In the wake of a series of media interviews, public expressions of concern by leading AI figures and, most recently, Senate hearings in the United States, a feeling exists that some sort of development ‘Rubicon’ has been crossed.

In particular, comments by leading officials at OpenAI and the resignation of Google’s Geoffrey Hinton, who left in order to speak openly about his concerns over recent AI research and development, have in part stimulated fears and led to calls for government regulation and some form of moratorium on AI research linked to generative machine intelligence.

Of course, based on these developments, should we be asking whether these concerns have only just emerged, or whether they were present for some time? Were there any indicators and warnings of the potential for negative outcomes associated with this form of AI research? Given how little remorse has been expressed about the potential negative consequences of this development, it is perhaps not surprising that few commentators are calling for an outright ban on AI development.

Indeed, based on the public debate that has ensued since the initial concerns were raised, there is clearly a balance to be struck between concepts of beneficial AI and the potential for harm. Discussion of the obvious benefits of a range of AI-enabled applications, from health to the environment, sits alongside more abstract fears, including the principle of value alignment in machine intelligence, issues of control and, arguably more pressing, the consequences of an evident AI ‘arms race’.

For some observers, the question is moot: what is the point of a machine intelligence that can overcome disease or global poverty or global warming if it can equally be used to destroy civilization and possibly the planet with it? What exactly is it we are making?

Perhaps a clue lies in one of the many interviews given recently by Geoffrey Hinton, who argues that whatever we might have made in terms of artificial intelligence, it is not a replica of biological intelligence but something more problematic: digital intelligence. I’m not sure that the early pioneers of AI, including Hinton, had this in mind when they embarked on their research odyssey.

Apart from developing a new form of intelligence and with it, most likely, a different ‘reality’, a digital reality that humans seemingly cannot yet envision or understand, we now have to come to terms with the fact that the early disquiet around ‘deep fakes’ and the promise of individuals having their own personal machine intelligence, their own ‘ChatGPT’, are but a possible foretaste of things to come, much of which is unwelcome.

Should new levels of AI become available, levels that for many are on the cusp of achieving or at least approaching artificial general intelligence, can mankind really ignore their solutions to existential-level crises? Probably not. Yet how sure are we that the solutions brought forward by machine intelligence can be fully understood and that all their consequences can be fully appreciated? For example, given the level of mistrust shown towards vaccines during the COVID pandemic, vaccines still produced by humans, how can we be sure that future medicines designed and manufactured by machines will be any more acceptable, especially if we are not really in a position to account for their design and efficacy?

It is becoming obvious that these new AI developments are accompanied by a set of ‘big questions’ for humanity.  These questions are philosophical, not technical, in nature.  They will also impact on what kind of AI research might be required to sit alongside more traditional ethical and technical considerations.

A prime area of study will be the need to train a cadre of specialists to work on the understanding of digital intelligence, including how it works, how it influences behavior and how it relates to biological intelligence. Similarly, development in this area of research should stimulate further enquiry into synthetic approaches to alignment and control. Hopefully, in this regard, we are not too late and the option of ‘disconnecting’ unwelcome forms of digital intelligence remains a live one.

Society also needs to invest seriously in understanding the contours of a so-called ‘AI Arms Race’, including the need to create a reliable AI arms control culture, whether through international treaty-based structures or some form of arms control and counter-proliferation regime. It is understandable that talk of arms races is so often linked to the nuclear domain. Certainly, arms control regimes have to some extent helped dampen arms rivalry, but sadly, states and non-state actors still seek to acquire the power and influence that such weapons seem to offer.

More speculative forms of research might include a number of AI-related activities that touch on some fundamental perspectives on what it means to be human. One would involve the nature of biological and digital intelligence partnerships and how these might shape physical human/machine interfaces in all their potential permutations. The other form of speculative enquiry relates to concepts of human augmentation through digital intelligence. Should this novel form of chimera be regulated, or are we heading for a form of human/machine augmentation that presages an early form of evolutionary change?

In a recent commentary on AI, the famous historian, military theorist and ‘realist’ political thinker Henry Kissinger stated unequivocally that many of the problems above could only be addressed at the philosophical level. A prescient perspective, if ever there was one.