by Nathaniel Hupert, MD MPH
People who are either very lucky or very unlucky don’t have a doctor. If you do have one, chances are she or he is intelligent, at a minimum in the sense of being competent at carrying out an integrated, purpose-driven set of deliberative functions aimed at maintaining or restoring your health. (If not, get a new one.)
The question facing advocates of the “AI Revolution” in medicine is whether it makes sense to trade in all or part of this intelligent human doctor for an artificial facsimile of one (or, more precisely, for machines that can optimize one or more of those deliberative functions). Leading journals have published well-meaning editorials on the subject in recent years, identifying classically difficult aspects of medicine—diagnosis, prognosis, image interpretation—as the proper foci for research in artificial (or, as the British say following Turing’s lead, mechanical) medical intelligence. The framework underlying these commentators’ worldview about health care delivery is decidedly non-holistic and, potentially, contrary to the hard-won lessons of other complex systems such as supply chain management. These AI proponents hold that if we optimize the component parts of medical care, then the whole must somehow be improved as a result. This simplistic “parts make the whole” vision violates several age-old adages of operations research, the most pithy of which is “Local optimization leads to global disharmony.” The implication is that if we want to improve the delivery of health care while benefitting from remarkable advances in computer-aided medical science, we need to re-think which parts of medical care we would like to make “artificial” and which we should retain as decidedly human.
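The operations-research adage can be made concrete with a toy assignment problem (a sketch for illustration only; the task and resource names are invented, not drawn from the essay). Letting each task independently grab its own cheapest resource—optimizing each part in isolation—can leave the system as a whole far worse off than a jointly optimized assignment.

```python
from itertools import permutations

# Hypothetical cost matrix: cost[(task, resource)] for two clinical tasks
# competing for two scarce resources (names are purely illustrative).
cost = {
    ("triage", "scanner_a"): 1, ("triage", "scanner_b"): 2,
    ("follow_up", "scanner_a"): 1, ("follow_up", "scanner_b"): 10,
}
tasks = ["triage", "follow_up"]
resources = ["scanner_a", "scanner_b"]

def greedy_local(tasks, resources):
    """Each task in turn takes its own cheapest remaining resource
    (local optimization, ignoring the rest of the system)."""
    free, total = set(resources), 0
    for t in tasks:
        r = min(free, key=lambda r: cost[(t, r)])
        free.remove(r)
        total += cost[(t, r)]
    return total

def best_global(tasks, resources):
    """Exhaustive search over all assignments (the system-wide optimum)."""
    return min(sum(cost[(t, r)] for t, r in zip(tasks, perm))
               for perm in permutations(resources))

print(greedy_local(tasks, resources))  # 11: triage grabs scanner_a, forcing follow_up onto scanner_b
print(best_global(tasks, resources))   # 3: the jointly optimized assignment
```

Here "triage" saves one unit of cost by taking scanner_a for itself, and in doing so costs the whole system eight: local optimization, global disharmony.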
Astute commentators on the coming “data deluge” in medicine understand that physicians will need better systems for compressing and pre-analyzing data from multiple new sources (electronic medical records, hand-held or wearable sensors, social media, etc.). The data sources now lumped under the heading of “AI” (e.g., automated, optimized diagnostic algorithms and therapeutic recommendations) are simply one piece of this rotating universe of information that intelligent physicians are expected to bring to bear on clinical activities. To conceptualize a new role for mechanical assistance in health care delivery, consider an analogy to cosmology. There are star map applications for mobile phones and tablets that, with the tilt of an arm upward, seamlessly transform data (dots of light in the night sky) into information (constellations, planetary orbits, satellite trajectories). These programs often go the extra step of explaining the history and meaning of constellations, predicting what the sky will look like in the future, and accounting for why certain celestial objects look the way they do (delving into astrophysics and other sciences). Thus they move the user from data to information and finally to interpretation and meaning-making about the cosmos, all in the palm of one’s hand.
Medical care, however, has no such “magical” (in the sense of Clarke’s third law) transformative tool. In fact, the analogy fails at multiple points. The lived experience of the typical physician is that of a person standing in a foggy field on a cloudy night, catching occasional glimpses of distant lights that may or may not be celestial bodies (patients who cannot be contacted or located, tests ordered but not reported out, reports sitting in inboxes or on desks without proper notification, clinical summaries not available in the right electronic format, etc.). The “apps” that are available would be specific to a single constellation or planet (calculation of cardiovascular risk separate from cancer risk, which in turn would be assessed separately from dementia risk, etc.; non-interoperable electronic medical records), but pointedly not to the whole of the night sky, leaving any effort at holistic integration solely up to the user. And this integration would be made more difficult by the non-standardized use of probabilistic language to quantify likelihoods of, e.g., this constellation being Leo and not Libra (the medical equivalent being different risks over different time frames, under different conditions, on the basis of different evidence bases).
To do its job more effectively and more efficiently, health care needs not a system of mechanical agents organized to deliver patient-centric “personalized medicine,” but rather a system that uses mechanical intelligence to personalize—for the health care provider—the acquisition, interpretation, and real-time delivery of material relevant to the care of that patient. In short, AI has been pursuing the wrong “person”—it needs to attend first to the needs of the person or people providing care, so that they can then successfully personalize the medical care of the individuals in need. Practitioners know well the “fog of care,” in which the only thing that is typically “personalized” is a stack of paper charts requiring a night of careful review and distillation by an overtired intern. Mechanical, automated methods that can learn (and guide) what each physician or other care provider needs in order to do the job for which they have trained, that can then seek out all of the relevant data for a case, and that can integrate and present that information in a consistent and systematic manner—this would be a truly labor-saving and helpful intervention in modern medical care delivery, one that gets out of the business of trying to “out-intelligence” those trained to heal and focuses instead on helping these intelligent people do their jobs better.