Many years ago, the political theorist Hannah Arendt wrote a compelling account of the impact of modernity on the human race entitled “The Human Condition: A Study of the Central Dilemmas Facing Modern Man.”

If nothing else, the book was prescient: it commented on the potentially negative impact of science and technology on both the individual and society and underlined the future problems associated with machine intelligence – then described as automation. Arendt dwelt on a central problem of future humanity being unable to fully comprehend the language of mathematical computation that would govern new technologies. Indeed, she saw a time in the not-too-distant future when the prevailing scientific world-view and its truths would “no longer lend themselves to normal expression in speech and thought.”

That was 1958.  

Similar fears are re-emerging today, and I have been struck by the increase in calls for greater ethical consideration in relation to the general debate on artificial intelligence (AI).

Perhaps we shouldn’t be surprised when we consider the recent baby gene-editing scandal involving the Chinese doctor He Jiankui, who illegally and covertly circumvented any form of institutional or ethical control to push the limits of cutting-edge life science.

Perhaps it is no coincidence that a number of ‘Big Tech’ firms are beginning to explore the feasibility of creating in-house ethical panels with the mandate to both explore ethical issues arising from AI and to warn against potential misuse. If nothing else, this seems to be a recognition that AI is a ‘dual-use’ technology.

Yet welcome as this development is, there would appear to be little consensus on the conceptual and structural framework for embedding ethical considerations into AI development. Some obvious considerations and questions come to mind, including whose ethics should be used when considering the future of AI: yours or mine? Should our concept of virtue be based on the Judeo-Christian tradition, the Islamic tradition, any number of Asian frameworks, or none?

We might also ask how one could possibly construct any ‘common good’ in the atomized societies we live in today, where opinions and preferences have displaced truth and value.

Perhaps one way to approach the issue of AI ethics is to determine just what it is that AI is doing to personhood.  Is machine intelligence shaping humanity?  I think the answer will lie in investigating what makes us human, and to what purpose and end AI is shaping that humanity today.  It seems inescapable to conclude that the scientific and technological world-view – often perceived as a continual push to exit our human contextual limitations – is impinging on every facet of our lives.  Even a cursory glance at the impact of communications and information technology reinforces the fear that the individual has been reshaped to accept the loss of privacy, with personal data routinely extracted, analyzed, repackaged, monetized and sold.  Is this social development or technological slavery?

I would suggest that the best way to start thinking about ethics and AI is to start thinking about personhood and the concept of what it means to be human.  Every gene-editing project might not be ‘creation’, but it certainly smacks of playing ‘God’.  Every dilution of human agency might not be slavery, but it does call into question our concepts of free will.  Fostering virtue might not be a popular, post-modern concept, but it does touch on the question of what it means to be human.

Fostering ethical considerations in AI development is critical, but I suggest that we need to know why we need it, what it might look like, and how it could be a factor in resisting the reshaping of humanity.  If we believe that AI has the potential to significantly modify us, then let’s see if we can control it before it is too late.

I think Hannah Arendt would agree.