A recent article in ‘Wired’, entitled ‘Silicon Valley’s Secret Philosophers Should Share Their Work’, by Alexis Papazoglou, stirred my interest.

I was as curious as the author to learn how so-called ‘tech giants’ embed philosophers and ethicists into their programmes – how they are recruited, what experience or skill sets they bring and, crucially, what they do and what they have produced to date.

Unfortunately, the author concluded that, at least on the evidence available to him, there was simply insufficient data to offer the degree of transparency hinted at in the article’s title. Like the author, I remained somewhat in the dark.

Yet in many respects, the article should stimulate further discourse on this subject.  How does philosophy or ethics sit alongside the development of artificial intelligence?

Superficially, some pointers seem obvious. How often, for example, do you read or hear commentary on the ‘trolley problem’ alongside the question of driverless cars? The philosopher most closely associated with the original ethical ‘trolley problem’, Philippa Foot, obviously never considered driverless cars, yet her ethical thinking is as alive today as when she first wrote.

Similarly, consider the thoughts of Nick Bostrom at Oxford University, whether his so-called ‘paperclip’ thought experiment or his speculation that life is possibly a computer simulation along the lines of the cult movie ‘The Matrix’. Both positions have been discussed in great detail and, over time, views have shifted on many of the original assumptions. Yet Professor Bostrom’s work remains a stimulus to thinking about the utility of philosophy in AI design.

Or one might be drawn to the practical implications of the numerous reports into the unregulated use of synthetic biology in human experimentation – a recent case in point being the experimental work of the Chinese medical researcher Dr He Jiankui. Where does the ethics of medical AI start and finish?

Between the ponderings of the philosopher or ethicist and the finished AI product, there must be a process of ‘Socratic dialogue’ between philosopher and designer or, just as likely, between philosopher and financial sponsor. Yet how would such a conversation play out? What happens when an ethical concern meets a corporate ‘bottom line’?

Take, for example, an ethicist interacting with a weapons designer – the brains behind a lethal autonomous weapon. Should an ethicist be embedded in a weapon design process at all – and that is far from certain – he or she is unlikely ever to discuss a weapon in isolation. It is more likely they will discuss a weapon system, a family of weapons, the evolution of performance, or purely technical concepts such as adaptability, upgrading or redundancy. I am not suggesting that the ethicist has no role – far from it – but the process itself does not lend itself to clear-cut points of ethical departure and end states absent the context of application.

I suppose the point I am getting at is not that philosophers should be seen and not heard in Silicon Valley, but rather that we need a debate on how best to use philosophers in AI development without their being absorbed into corporate identities or bound by endless non-disclosure agreements.

Perhaps the answer lies in looking at philosophy and philosophers as a menu of possible contributions to the AI debate, as opposed to offsets or insurance policies should future AI products and services have unintended or negative consequences. Surely Wittgenstein might have something to say on the importance and relevance of language that could stimulate a programmer? Might the work of Hubert Dreyfus on what computers still can’t do be worth a look? Is there nothing to learn from Descartes on dualism and mind, or Bertrand Russell on logic and mathematics, that could provide a catalyst for exploring the philosophical boundaries of AI? Similarly, might we not find a use for the thinking of John Searle, Daniel Dennett or Gilbert Ryle?

I for one would look forward to learning more about what philosophers get up to with ‘big tech’, but somehow I don’t see it happening any time soon. Maybe we need to find a different way for philosophy to influence the debate on AI.