AI ethics
November 16, 2018

AI ethics (1) Machine Ethics and Artificial Moral Agents

By Francesco Corea

Time for us to consider AI ethics, not just AI developments.

Continuing our focus on themes that emerged at the recent Data Leaders Summit, let us consider the AI ethics problems posed by this developing capability.

This builds upon our recent post about the emergence of Product Manager roles within Data Science teams. Another key theme at that event was the need for Data & AI Ethics. A debate on this topic at the close of the summit could have run for much longer.

So, given the interest & passion this topic can provoke, I am delighted to welcome back guest blogger Francesco Corea to share his thoughts. Francesco is an experienced AI blogger, who has attended many AI events & worked in this sector for many years. So, he can speak from an informed position.

Over to Francesco to help us think through the ethical considerations of our AI innovation plans…

Machine Ethics and Artificial Moral Agents

There has been a lot of talk in recent years about AI being either our best or our worst invention ever. The prospect of robots taking over, and the catastrophic sci-fi scenario that follows, makes the ethical and purposeful design of machines and algorithms not simply important but necessary.

But the problems do not end there. Incorporating ethical principles into our technology development process should not just be a way to prevent the extinction of the human race, but also a way to understand how to use the power that comes with that technology responsibly.

This article is not meant to be a guide to ethics for AI, nor to set out guidelines for building ethical technologies. It is simply a stream of consciousness on questions and problems I have been thinking about and asking myself, and hopefully it will stimulate some discussion.

One of the most relevant topics in machine ethics is whether we can actually trust our algorithms. Let me offer a different, more practical perspective on this problem.

Machine Ethics – a medical scenario

Let’s assume you are a medical doctor and you use one of the many algorithms out there to help you diagnose a specific disease or to assist you in treating a patient. 99.99% of the time the computer gets it right — it never gets tired, it has analyzed billions of records, it sees patterns the human eye cannot perceive; we all know this story, right? But what if, in the remaining 0.01% of cases, your instinct tells you the opposite of the machine’s result and you end up being right? What if you follow the advice the machine spits out instead of your own and the patient dies? Who is liable in this case?

But even worse: let’s say in that case you follow your gut feeling (we know it is not really gut feeling, but simply your ability to recognize at a glance the right disease or treatment) and you save the patient. The next time (and patient), you have another conflict with the machine’s result, but emboldened by that recent experience (a hot-hand fallacy, or an overconfidence bias) you believe you are right again and decide to disregard what the artificial engine tells you. Then the patient dies. Who is liable now?

The question is quite delicate indeed, and the scenarios in my head are:

  1. a scenario where the doctor is only human, with no machine assistance. The payoff here is that liability stays with him; he gets it right 70% of the time, but things are quite clear, and sometimes he gets right something extremely hard (the lucky guy out of 10,000 patients);
  2. a scenario where a machine decides and gets it right 99.99% of the time. The downside is that one unfortunate patient out of 10,000 is going to die because of a machine error, and the liability is assigned to neither the machine nor the human;
  3. a scenario where the doctor is assisted but has the final call on whether to follow the advice. The payoff here is completely randomized and not clear to me at all.
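The expected outcomes in the first two scenarios can be sketched with a toy calculation, using the purely illustrative accuracy figures from the list above (70% for the unassisted doctor, 99.99% for the machine) over a hypothetical cohort of 10,000 patients:

```python
# Toy expected-value comparison of the first two scenarios.
# The accuracy figures are the illustrative ones from the text, not real data.

PATIENTS = 10_000

def expected_errors(accuracy: float, patients: int = PATIENTS) -> float:
    """Expected number of misdiagnosed patients for a given accuracy."""
    return patients * (1 - accuracy)

human_errors = expected_errors(0.70)      # scenario 1: doctor alone
machine_errors = expected_errors(0.9999)  # scenario 2: machine alone

print(f"Scenario 1 (human only):   ~{human_errors:.0f} errors per {PATIENTS} patients")
print(f"Scenario 2 (machine only): ~{machine_errors:.0f} errors per {PATIENTS} patients")
# Scenario 3 (doctor with the final call) has no clear accuracy figure:
# its payoff depends on how often the doctor overrides the machine,
# and on how often those overrides turn out to be correct.
```

On raw numbers the machine wins by a wide margin; the whole ethical difficulty lies in the patients behind the residual error and in where the liability lands.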

Machine ethics – an economics perspective

As a former economist, I have been trained to be heartless and to reason in terms of expected values and large numbers (basically a utilitarian), so scenario 2 looks like the only possible one to me because it saves the greatest number of people. But we all know it is not that simple (and of course it doesn’t feel right for the unlucky guy of our example): think, for instance, of an autonomous vehicle that loses control and needs to decide whether to kill the driver or five random pedestrians (the famous Trolley Problem). Based on those principles I’d save the pedestrians, right? But what if all five are criminals and the driver is a pregnant woman? Does your judgement change in that case? And again, what if the vehicle could instantly use cameras and visual sensors to recognize the pedestrians’ faces, connect to a central database, and match them with health records, finding out that they all have some type of terminal disease? You see, the line is blurring…
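The utilitarian rule invoked above — pick the option that minimizes expected harm — can be stated in a couple of lines. The options and casualty counts below are hypothetical trolley-problem illustrations, and the point of the paragraph is precisely that real cases resist this reduction:

```python
# Naive utilitarian chooser: pick the option with the fewest expected casualties.
# The options and counts are hypothetical trolley-problem illustrations.

def utilitarian_choice(options: dict) -> str:
    """Return the option that minimizes expected casualties."""
    return min(options, key=options.get)

trolley = {
    "swerve (kill the driver)": 1,
    "stay course (kill five pedestrians)": 5,
}
print(utilitarian_choice(trolley))  # the rule always saves the greater number
```

The rule is trivially computable; what the thought experiment shows is that the inputs (whose lives, weighted how?) are anything but.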

The final doubt that remains is then not simply about liability (and the choice between pure outcomes and the ways to achieve them) but rather about trusting the algorithm (and I know that for someone who studied 12 years to become a doctor it might not be easy to give that up). In fact, algorithm aversion is becoming a real problem for algorithm-assisted tasks. It appears that people want to retain an (even incredibly small) degree of control over algorithms (Dietvorst et al., 2015, 2016).

Machine ethics – the alignment problem

But above all: are we allowed to deviate from the advice we get from accurate algorithms? And if so, in what circumstances and to what extent?

If an AI were to decide on the matter, it would probably also go for scenario 2, but we as humans would like to find a compromise between those scenarios because none of them ‘ethically’ feels right to us. We can then rephrase this issue through the lens of the ‘alignment problem’: the goals and behaviors of an AI need to be aligned with human values — an AI needs to think like a human in certain cases (but of course the question here is: how do you discriminate? And what’s the advantage of having an AI then? Let’s therefore simply stick to the traditional human activities).

In this situation, the work done by the Future of Life Institute with the Asilomar Principles becomes extremely relevant.

Machine Ethics – what’s your experience?

Thanks to Francesco for that. Part two coming soon.

In the meantime, I wonder what you have experienced as ethical issues when developing machine learning algorithms. Do you have any challenges or suggestions to share?

Conversations with Data Science leaders at recent summits suggest this is an area worthy of further discussion. Indeed, some businesses have gone so far as to appoint roles focused on Data or AI ethics.

Perhaps this will become a future specialism within our community. Time will tell, but the willingness to take these issues seriously is welcome.