Discerning trends in Artificial Intelligence (AI) is never easy. Anyone familiar with predicting AI trends will know that, historically, there have been too many false dawns and, especially in recent years, too much hype.
What, therefore, might 2019 hold for us?
Judging by the pattern of events over the past 12 months, I would submit that there will be one major trend and, associated with it, a number of subsidiary directions.
The major trend in AI development is the clear perception that AI-related technologies and algorithms have become ‘dual-use’ technologies – and by that I mean technologies that can have a clearly positive or negative impact depending on the intent behind their use.
A number of distinct activities underpin such analysis, including: a recognition in geopolitical terms that we are witnessing an AI ‘arms race’, much of it linked to robotics and autonomous weapon systems; an increase in malicious and disruptive cyber attacks on critical network infrastructure; the abuse of communications technology and data to support mass-surveillance programmes; and the abuse of CRISPR technologies in the life sciences, as we witnessed recently in China.
Given the absence of transparency regarding the AI components or tools associated with the above, it is somewhat difficult to analyze progress in machine intelligence. One might make informed speculation, but this is unlikely to offer a window through which to gauge whether progress is being made in the evolution from narrow to general AI, for example. Additionally, national security AI is unlikely to tell us anytime soon whether decisions taken by intelligent machines have surpassed our ability to determine how and why those decisions were taken in the first place. For many commentators on AI, observing the efficacy of machines that use deep neural networks to learn and play games – often far better than humans – is noteworthy but offers only a partial basis for analyzing trends.
Associated with this major trend, there are several subsidiary directions or developments worthy of closer inspection.
One such factor might be that efforts to enhance existing levels of narrow AI have yet to reach a stage where a breakthrough towards general AI is evident. That said, this should not be read as indicating no or only limited progress – that would be a misrepresentation of the facts. However, based on criteria that AI specialists themselves have developed, there remains plenty of scope for enhancing key developments, ranging from driverless cars to medical diagnostic tools, and for recalibrating the ‘Turing Test’.
Another factor worth considering is the question of regulation. Much of the concern over the unregulated use and abuse of data has been closely linked to social media platforms, but it is seeping into other areas of public policy, including the media, banking and finance, and public health. We can therefore expect some preliminary effort at determining what such regulatory frameworks might look like and how they might be used. Of course, one might anticipate resistance from ‘big tech’ companies, which of late have seemed reluctant to entertain any form of external control and whose commercial model of cheap ‘bread and circuses’ technologies seems well positioned to deflect serious interference by regulatory bodies.
Arguably, this concern about the misuse of private data and the rise of ‘false news’ is driving the wider consideration of embedding ethics in AI products and services. In and of themselves, such moves are to be welcomed, but can we say what we want from such innovative ideas? What do embedded ethics look like in a commercial AI model, and what powers, if any, will they have?
Another worrying trend – one that might be attractive to those whose material interests and positions might be damaged by AI – could be an exacerbation of the current lack of respect for science and ‘expertise’. Focused attacks on the intent behind certain AI products might inevitably lead to a retreat by AI researchers and technicians into more specialized institutions, which in turn would lead to the inadvertent or deliberate creation of an AI elite, some of whom might be locked into financial rather than technological imperatives – a situation that could lead to mistakes down the line. For example, one could think of the recent gene-editing research on babies in China as an example of unregulated ‘black biology’, whereby society has no transparency over, or control of, deep research. Our universities need to foster beneficial AI, but they cannot do so if they are cut off from basic sources of AI research and practical application.
Therefore, how might 2019 look as regards AI development?
From the above, it might not be unreasonable to suggest that this will be a year of consolidation rather than spectacular progress, although given the lack of transparency in weapons development, such an assumption might prove badly off the mark.
Work will continue unabated to get closer to what could constitute general AI, but despite notable enhancements to machine intelligence, a major breakthrough looks unlikely.
Regulation is likely to increase as a wider appreciation of AI technologies and their potential is gained by decision-makers. This should not necessarily be seen as a move to limit technology by older political and technical ‘Luddites’, but rather as an appreciation that, as AI moves out of the lab and into the home or workplace, other factors will come into play, ranging from economics and social cohesion to morals and ethics.
Some of the ‘mystique’ of AI will undoubtedly clear away as social groups begin to question the value of some of its so-called benefits. Indeed, 2019 might witness a more pronounced ‘civil society’ response to some aspects of AI, and the outcome of this – including more calls for regulation – will reshape the ‘playing field’. AI will still be around, but it will struggle to remain within a circle of trust.