21 November 2023 – WHAT IS THE PURPOSE OF AI?

by Andrew Dolan

In the wake of the announcement that OpenAI had dismissed Sam Altman, its former head and one of its original founders, speculation mounted as to the reasons why.  If nothing else, coming so soon after the recent Bletchley Summit on AI Safety, which Mr Altman attended as a very prominent advocate of AI safety, one might be forgiven for assuming that the dismissal merely reflected the nexus of AI and Big Business.

Irrespective of the veracity or otherwise of such speculation, a more important point might be overlooked, namely: where do we stand on the purpose of AI development today?  Questions of purpose have a distinctly philosophical feel to them and might seem an issue that should not detain us from the wider debate about the future of AI.  However, this would be a mistake.  In fact, teleology, in terms of AI, has never been more important, as both the Bletchley Declaration and the OpenAI boardroom dispute over Sam Altman seem to hint.

If one examines the more recent development of AI, it seems possible to perceive the process as being spurred on by quite different and distinct drivers.  The so-called ‘Dartmouth’ community clearly envisaged AI as part of an attempt to replicate the human brain and some of its functions, a blend of academic and scientific-technical research and development much in line with many other early Cold War initiatives.  Another thread of development lay with those who envisaged AI as a platform associated with computers and robotics, including forms of early machine intelligence.  It is no coincidence that much of this thinking had some affinity with cybernetics, a field of technology development traditionally associated with military technology and conflict, and which, in the post-1945 nuclear environment, pervaded the more specialised and secretive elements of the military-industrial complex in the USA, the UK and, later, the Soviet Union.

Despite the peaks and troughs of AI development from the latter years of the 1950s to the 1990s, and the more rewarding period of the first decade of the twenty-first century, the main question remains extant: what is the purpose of AI?

As alluded to earlier, the two events cited above are pregnant with significance.  The UK-hosted Bletchley Summit clearly proceeded from the belief that the primary aim of AI is to make money, through various forms of commercialisation.  The AI market is seen as a potential treasure trove of opportunities, benefits and even further developments.  Of course, with such a clear commercial slant, regulation of some form might be inevitable – either to avoid some of the worst excesses of the internet and social media landscape or to protect society from the potential social and economic impact of job losses to machines.  What this might look like has yet to be decided.

Similarly, the development of some forms of frontier AI technology, based largely on the interface between human and machine intelligence, is rightly raising concerns: how far could such developments go; what sort of ‘guardrails’ might need to be put in place around some of this future technology; and, in extremis, would developers need to deploy enhanced so-called ‘kill switches’ in case machines acquire the ability to use their intelligence to pursue their embedded objectives in ways their developers never anticipated, thus endangering humans, even inadvertently?

Of course, as concerning as the above directions might become – should machine intelligence be developed by designers imbued with malicious intent – an equally disturbing threat might emerge from a more traditional form of technological adaptation, namely one that supports an ‘arms race’ based on a range of evolutionary or revolutionary weaponry.  Can we foresee the eventual deployment of Lethal Autonomous Weapons in a law enforcement or national security context?

Returning to the key question of teleology, what then is the purpose of AI?  

Advocates of AI clearly believe that it can have a beneficial impact on society and the individual.  Such proponents routinely cite medical advancement or environmental control as indicative of the undoubted value it can bring.  Others of a similar bent foresee the potential of humanity partnering with AI, benefiting from forms of AI-enabled augmentation and even the eventual replacement of carbon-based bodies with a more synthetic host, including an uploaded brain which, some argue, would preserve the individual mind.  Military advocates of AI maintain that the principles of war can be improved through AI applications and argue convincingly that such weaponry, including the potential deployment of LAWS, can make a significant impact on levels of collateral damage.

So here we have it.  AI is clearly a dual-use technology and one that requires careful monitoring.  It seems to serve various purposes, not one.  It is envisaged to be more than a tool and possibly a partner.  Safeguards are required to maintain some form of stability and transparency as to what is being developed, but just how this will be carried out has yet to be decided.

This leads us back to Sam Altman and OpenAI.  Can we really expect transparency within the large companies which are forging ahead with AI exploration?  Is it possible to be altruistic when so much investment and return is required to realise the dreams of the developers, no matter how well-intentioned they may be?  Should we anticipate full monitoring and transparency of developments, the ultimate purpose of which might range from making significant financial returns to commercial competition with rivals or, more worryingly, the development of potentially harmful products?  Will the drive to exploit AI bring with it an irreversible influence on human behaviour such that humanity itself might be threatened?

If a leading advocate of AI regulation can mingle with the movers and shakers of global technological development and politics one day, and be removed in a less than transparent boardroom dispute the next, do we really understand the purpose of those developing AI?  And even if we think we do understand it, what can we do to influence the process?  Back to Bletchley, and watch this space.
