ChatGPT AND THE RED FLAGS WAVING by Andrew Dolan – 2 April 2023

Once again, Elon Musk has been grabbing the headlines with his latest warning on the consequences of uncontrolled AI.(1)  His support for an open letter from the Future of Life Institute, signed by academics and tech industry representatives and requesting an immediate pause to certain types of AI development, has focused attention on the likes of OpenAI’s ChatGPT software, which those concerned believe is displaying indicators of human-level intelligence.(2)

Are such concerns justified?

Part of the answer could lie in a recent ABC News interview with OpenAI’s Chief Executive, Sam Altman, and the company’s Chief Technology Officer, Mira Murati.(3)  A careful review of what was said and, crucially, what was not said might offer some clues as to why Musk and tech industry insiders are so worried.

In short, as Altman confided in the interview, ‘we are a little bit scared’.  In a thoughtful and at times slightly fearful interview, the CEO sought to temper concerns with a counterbalancing positivity.  He stressed the enormous potential for such AI to do good and cited some examples, including the curing of every disease, the empowerment of every child through super-education and the creation of new jobs and job-creating tools.  Although not quite utopian, these predictions are quite attractive.

Yet on closer inspection, these rather general and abstract claims need to be stress-tested.  What kind of world would it be if one could extend life and cheat death once all illness has been eliminated?  How do we provide every educationally enhanced child with a satisfying future that sustains all of their ambitions?  And what is the time lag between losing hundreds of millions of jobs globally to these new AI developments and the promised arrival of new and better jobs, even though the nature of those new jobs cannot yet be identified?

It would be futile to try to spurn technologies that can make a significant positive contribution to humanity’s development, but learning something of the cost of that development, and of the potential friction it might generate along the way, is neither inconsequential nor obstructionist.

I can certainly see why the scale of such developments might be scary to contemplate.  Being scared here is not unreasonable.  Yet, somehow I don’t think this is what was scaring Altman and his team of developers.

Actually, I think he was scared by ‘other consequences so terrible we can’t even imagine what they could be’.  Altman provided some examples of what this might look like, including massive disinformation, significant cyber attacks, racial bias and a breakdown in trust.  However, such manifestations of the malicious use of AI are here already and certainly do not meet the criterion of being unimaginable.  In short, the problems we should really be worrying about were never spelled out.  I wonder why?

What we learnt was that the development of AI systems in some new areas might attract, or may already have attracted, ‘bad outcomes’, ‘negative outcomes’ and ‘big harms’.  Deepening the concern was the seeming inability of the new systems’ creators to predict what might happen, coupled with the warning that society has little time to adapt to these possibilities.  What is meant by this?

Referencing the ominous political and security commentaries on the future impact of certain AI technologies, OpenAI’s CEO noted that authoritarian states were equally determined to secure such systems and technologies.  Additionally, these systems are created and developed by humans who feed their own concepts of value and ethics into their creations, in part to enable each of us to have a machine intelligence that mimics our likes and dislikes, our priorities and our values; the promise of that arrangement depends on those same features being benign and not malign.  This raises the question of how far creators with malicious intent might already be along the road to deploying such systems, and how adequate current levels of knowledge protection against intangible technology transfer really are.

In acknowledging its dialogue with government, OpenAI believes that responsible regulation, global as well as local, will be essential going forward.  Given that Altman refused to say whether such systems would be shut down should they display signs or evidence of harmful acts (he would only agree to slowing down their development), it is not unreasonable to ask what gives the designers the right to ‘play God’.  In essence, we have a very small coterie of very powerful individuals helping to create new worlds in their image, with their concepts of good and bad and their belief systems.  In a sense, we seem to be creating a parallel universe of machines.

Those interviewed speculated on the point of no return, although it was not evident what that point might be.  On the face of it, however, pausing certain lines of AI systems development for a short period to allow society time to adapt to these new developments (if we knew exactly what they might be), or to regulate them, makes sense.  Better still, perhaps OpenAI might share the outcomes of its ‘Red Teaming’ activities, so that society might judge for itself what the unimaginable consequences might be should we fail to slow down and take stock of the cost of unpredictable progress.

Notes:

  1. https://www.telegraph.co.uk/business/2023/03/29/control-ai-threat-civilisation-warns-elon-musk/
  2. https://futureoflife.org/open-letter/pause-giant-ai-experiments/
  3. https://abcnews.go.com/Technology/openai-ceo-sam-altman-ai-reshape-society-acknowledges/story?id=97897122