5 May 2020 – by Andrew Dolan
The magazine ‘WIRED’ ran a feature on 25 January this year entitled ‘An AI Epidemiologist Sent the First Warnings of the Wuhan Virus’. The story recounted how ‘BlueDot’, a specialist ‘health monitoring platform’ based in Canada, used its AI-supported system to provide early warning of the pandemic in early January 2020.
You may remember that when we launched the AI Forum Hungary, one of the first issues we addressed was medical AI. Our guest speaker at the time, Dr. Nathaniel Hupert, explained a number of AI developments, including early-warning public health systems of this kind.
Like most developments of this type, there is a heavy reliance on capabilities based on natural language processing and machine learning. In the case of BlueDot’s AI-driven algorithm, the basic concept was to search global media outlets, airline systems and animal disease or phytosanitary networks for indicators and warnings that might suggest some form of high-impact public health event.
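To make the concept concrete, here is a minimal sketch of how such an indicator-scanning pipeline might work. Everything in it, the sources, the indicator terms and the alert threshold, is an illustrative assumption, not BlueDot’s actual method, which relies on far richer NLP and machine-learning models.

```python
# A minimal sketch of an indicator-scanning pipeline of the kind
# described above. Sources, keywords and thresholds are illustrative
# assumptions, not BlueDot's actual method.
from dataclasses import dataclass

@dataclass
class Report:
    source: str      # e.g. a news outlet or animal-disease bulletin
    location: str    # the place the report concerns
    text: str        # raw report text

# Hypothetical indicator terms such a system might weight.
INDICATORS = {
    "unexplained pneumonia": 3,
    "cluster of cases": 2,
    "market closure": 1,
    "hospital overflow": 2,
}

def score(report: Report) -> int:
    """Crude keyword scoring, standing in for real NLP classification."""
    body = report.text.lower()
    return sum(weight for term, weight in INDICATORS.items() if term in body)

def flag_locations(reports: list[Report], threshold: int = 3) -> set[str]:
    """Aggregate scores per location and flag any above the threshold."""
    totals: dict[str, int] = {}
    for r in reports:
        totals[r.location] = totals.get(r.location, 0) + score(r)
    return {loc for loc, total in totals.items() if total >= threshold}

if __name__ == "__main__":
    reports = [
        Report("local-news", "City A",
               "Cluster of cases of unexplained pneumonia reported."),
        Report("agri-bulletin", "City B",
               "Routine phytosanitary inspection, no anomalies."),
    ]
    print(flag_locations(reports))  # -> {'City A'}
```

The real value of such systems lies in scale: the same scoring logic, run continuously across thousands of feeds in many languages, can surface a weak signal days before it reaches official channels.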
Although perhaps not perfect, a system such as ‘BlueDot’ clearly demonstrates the value of AI-inspired technologies, of ‘Beneficial AI’. It appears to be a useful predictive intelligence tool and is not intrusive in terms of private data, although this might be disputed, particularly in relation to individual travel. It also suggests that such systems are only likely to be enhanced in the future and might become a standard feature of global public health early-warning systems. They are attractive because they keep humans in the analytical loop.
The Forum also addressed the use of ‘apps’ in a medical and clinical setting. Attention focused on the apparent attractiveness of personal devices such as phones or watches for recording individual clinical or general medical data, or for aiding logistical or clinical systems management in hospitals.
Perhaps unsurprisingly, the reconfigured use of such ‘apps’ is likely to become a standard component of public health monitoring as countries seek to ease ‘lockdown’ measures in the wake of pandemic surges. Whether it is a modified Apple or Google feature or a simpler system, the basic function is to serve as a contact-tracing platform: to identify Covid-19 hotspots, or an individual’s proximity to someone who has self-identified as having symptoms or who is perhaps recovering after infection.
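For readers curious about the mechanics, below is a minimal sketch of the decentralised token-matching idea that the Apple/Google approach popularised: devices exchange short-lived random tokens over Bluetooth, and exposure checks happen locally against tokens voluntarily published by users who test positive. The class and method names are hypothetical; this is not the real Exposure Notification API.

```python
# A minimal sketch of decentralised contact-tracing matching.
# Token generation and storage here are illustrative assumptions.
import secrets

def new_token() -> str:
    """Each device broadcasts short-lived random tokens over Bluetooth."""
    return secrets.token_hex(16)

class Device:
    def __init__(self) -> None:
        self.my_tokens: list[str] = []   # tokens this device has broadcast
        self.heard: set[str] = set()     # tokens heard from nearby devices

    def broadcast(self) -> str:
        token = new_token()
        self.my_tokens.append(token)
        return token

    def observe(self, token: str) -> None:
        self.heard.add(token)

    def check_exposure(self, published_positive_tokens: set[str]) -> bool:
        """Matching happens locally: the server only ever sees tokens
        of users who chose to report a positive test."""
        return bool(self.heard & published_positive_tokens)

# Usage: two phones near each other exchange tokens; one later reports positive.
alice, bob = Device(), Device()
bob.observe(alice.broadcast())        # a proximity event
published = set(alice.my_tokens)      # Alice uploads her tokens after a positive test
print(bob.check_exposure(published))  # -> True
```

The design choice matters: because matching is done on the device against anonymous tokens, no central register of movements or contacts need exist, which is precisely the point at which the privacy debate below begins.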
Clearly such systems raise privacy concerns: what is the scope of the data that is harvested, what use is made of the data beyond immediate contact tracing, and how long should it be stored? Ethically, this development of AI support systems is not neutral and is arguably symptomatic of a deeper-seated concern regarding public surveillance, even if, in this case, the motive is benign.
As communities recover and emerge from the pandemic environment, it is more than likely that our AI community will begin to prepare for the next one, with smarter predictive power at a premium. In addition to predictive algorithms, emphasis will also be placed on enhanced individual surveillance, allied to a more intrusive collection of personal clinical data. Wrapped in a public health ethos, such initiatives are not only likely to be supported but will attract greater public funding.
Yet imagine that the ‘science’ is divided on the actions to take in response to a future pandemic and public acquiescence is fragmented: might the innovative features of a future public health surveillance and early-warning system not then be perceived as something more sinister?
Where should the balance lie in such AI-driven developments? This is an acute question: today we are considering public health, but tomorrow it might be law enforcement.
The AI Forum Hungary has already created its Ethical AI Group, where we are studying the most appropriate way to introduce this theme in schools. Perhaps after the coronavirus pandemic such ethical issues need much wider discussion, as it seems likely that, in the wake of this public health crisis, society as we might have known it is transforming – perhaps even faster than we anticipated.