
Can a machine be ethical? Why teaching AI ethics is a minefield.

Artificial intelligence will soon be powerful enough to operate autonomously. How should we tell it to act? What kind of ethics should we teach it?
The HAL 9000 computer as seen in 2001: A Space Odyssey.

We are rapidly approaching the day when an autonomous artificial intelligence may have to make ethical decisions of great magnitude without human supervision. The question that we must answer is how it should act when life is on the line.


Helping us make our decision is philosopher James H. Moor, one of the first philosophers to make significant inroads into computer ethics. In his 2009 essay "Four Kinds of Ethical Robots," he examines the possible ethical responsibilities machines could have and how we ought to think about them.

Dr. Moor categorizes machines of all kinds into four ethical groups. Each group has different ethical abilities that we need to account for when designing and responding to them.

Ethical impact agents 

These are devices, like watches, that can have a positive or negative impact on humans. While a watch can do nothing but tell me the time, it could be wrong and therefore cause me to be late.

Implicit ethical agents

These are machines, like ATMs, that have certain ethical concerns addressed in their very design. ATMs, for example, have safeguards to ensure they give out the proper amount of money and are fair to both you and the bank.

Other machines can be implicitly vicious, such as a torture device designed to inflict maximum pain and to be failsafe against comfort. While these machines have distinct ethical features, those features are built into the machine's very being; they are not the result of a decision process.

Explicit ethical agents

These are closer to what most of us think of when we think of programmable robots and artificial intelligence. These devices and machines can be “thought of as acting from ethics, not merely according to ethics.”

To use the example of an ATM again: while an ATM has to check your balance before you run off with all of the bank's money, it doesn't do so because the programmer gave it an ethical code. It was explicitly told to check.

An explicit ethical agent would be an ATM that was told to always prevent theft and then decided, in pursuit of that goal, to check your balance before handing over the one million dollars you asked for.
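To make the contrast concrete, here is a minimal Python sketch (a hypothetical illustration, not anything from Dr. Moor's essay) of the same withdrawal handled two ways: the implicit agent runs a check the programmer hard-coded, while the explicit agent is handed a principle and picks whichever action satisfies it.

```python
# Hypothetical sketch: the same ATM withdrawal, handled two ways.

# Implicit ethical agent: the safeguard is hard-wired into the procedure itself.
def implicit_atm(balance: float, requested: float) -> str:
    if requested > balance:          # the programmer told it exactly what to check
        return "declined"
    return "dispensed"

# Explicit ethical agent: the agent is handed a principle ("prevent theft") and
# picks whichever available action satisfies that principle in the case at hand.
def explicit_atm(balance: float, requested: float) -> str:
    def prevents_theft(action: str) -> bool:
        # Dispensing more than the customer owns would amount to enabling theft.
        return not (action == "dispense" and requested > balance)

    for action in ("dispense", "decline"):
        if prevents_theft(action):
            return action + "d"      # acting from the principle, not merely according to it
    return "declined"
```

The two functions give the same answers; the difference is where the refusal comes from: a hard-coded branch versus a principle the agent itself applies, which is the gap between acting merely according to ethics and acting from ethics.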

Full ethical agents

These are beings that function ethically just as we do, with free will and a sense of self: a fully moral being, biological or not.

It is safe to say that no machine currently qualifies for this designation, and the bulk of academic and popular discussion focuses on explicit ethical agents. The idea of a full ethical agent is a fascinating one, however, and it appears in works such as 2001: A Space Odyssey.


So, if we have to worry about explicit agents, how should we tell them to act?

A major issue for computer ethics is what kind of algorithms an explicit ethical agent should follow. While many science fiction authors, philosophers, and futurists have proposed sets of rules before, many of them are lacking.

Dr. Moor gives the example of Isaac Asimov's Three Laws of Robotics. For those who need a refresher, they are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The rules are hierarchical, and the robots in Asimov’s books are all obligated to follow them. 
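One rough way to picture that hierarchy (a hypothetical Python sketch, not anything Asimov or Dr. Moor specified) is as a lexicographic preference: each candidate action is scored against the laws in priority order, so a lower law is only ever sacrificed to satisfy a higher one.

```python
# Hypothetical sketch: Asimov's Three Laws as a lexicographic preference.
# Each candidate action gets a tuple of violation flags ordered by law priority;
# the agent picks the action whose tuple is smallest, so a lower-priority law is
# only ever traded away to satisfy a higher-priority one.

def choose_action(candidates):
    # candidates maps an action name to its predicted consequences, e.g.
    # {"harms_human": False, "disobeys_order": True, "endangers_self": False}
    def violations(consequences):
        return (
            consequences["harms_human"],      # First Law
            consequences["disobeys_order"],   # Second Law
            consequences["endangers_self"],   # Third Law
        )
    return min(candidates, key=lambda name: violations(candidates[name]))

# Example: obeying an order would harm a human, so the robot disobeys.
options = {
    "obey":    {"harms_human": True,  "disobeys_order": False, "endangers_self": False},
    "disobey": {"harms_human": False, "disobeys_order": True,  "endangers_self": False},
}
print(choose_action(options))  # -> "disobey"
```

Even this tidy ranking inherits the problem Dr. Moor raises next: deciding what counts as allowing harm "through inaction" would force the agent to weigh nearly everything it is not doing.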

Dr. Moor suggests that the problems with these rules are obvious. The first law is so general that an artificial intelligence following it "might be obliged by the First Law to roam the world attempting to prevent harm from befalling human beings" and would therefore be useless for its original function!

Such problems are common in deontological systems, where faithfully following reasonable rules can lead to absurd results. Asimov himself wrote several stories about potential problems with the laws. Attempts to solve this issue abound, but the challenge of writing enough rules to cover every possibility remains.

On the other hand, a machine could be programmed to stick to utilitarian calculus when facing an ethical problem. This would be simple to do: the computer would only have to be given a variable and told to make choices that maximize it. While human happiness is a common choice, wealth, well-being, or security are also possibilities.
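In code, such an agent reduces to something like this hypothetical sketch: score every available action by the one quantity it was told to care about and pick the highest-scoring option, with no other constraints.

```python
# Hypothetical sketch of a single-variable utilitarian agent: it maximizes
# whatever quantity it was handed and nothing else.

def utilitarian_choice(actions, predicted_utility):
    """actions: list of action names; predicted_utility: action name -> float."""
    return max(actions, key=predicted_utility)

# Example with "safety" as the target variable. Nothing here stops the agent
# from preferring the option that shuts every risky system down, which is the
# kind of perverse outcome discussed below.
utilities = {"operate normally": 0.7, "halt all risky technology": 0.99}
print(utilitarian_choice(list(utilities), utilities.get))  # -> "halt all risky technology"
```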

However, we might get exactly what we ask for. An AI told to maximize human safety might bring every risky technology it can reach to a dead stop. One told to maximize happiness could determine that happiness is highest once all the unhappy people have been sent into lakes by self-driving cars.

How can we judge machines that never freely choose their actions? What would make an ethical machine a good one?

This is a tricky one. While we do hold people who claim they were "just following orders" responsible, we do so because we presume they had the free will to do otherwise. With AI, we lack that ability. Dr. Moor does think we can still judge how well a machine is making a decision, however.

He says that: “In principle, we could gather evidence about a robot’s ethical competence just as we gather evidence about the competence of human decision-makers, by comparing its decisions with those of humans, or else by asking the robot to provide justifications for its decisions.”
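A crude way to operationalize the first kind of evidence (a hypothetical sketch; Dr. Moor does not propose any particular metric) is to measure how often the machine's decisions agree with those of human judges on the same cases.

```python
# Hypothetical sketch: score a machine's ethical competence by how often its
# decisions agree with human judges on the same set of cases.

def agreement_rate(machine_decisions, human_decisions):
    """Both lists contain one decision per case, in the same order."""
    matches = sum(m == h for m, h in zip(machine_decisions, human_decisions))
    return matches / len(human_decisions)

machine = ["refuse", "assist", "refuse", "assist"]
humans  = ["refuse", "assist", "assist", "assist"]
print(agreement_rate(machine, humans))  # -> 0.75
```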

While this wouldn’t cover all aspects of ethical decision-making, it would be a strong start for a device that can only follow an algorithm. This element isn’t all bad, though: Dr. Moor is somewhat optimistic about the ability of such machines to make hard choices, as they might make difficult decisions “more competently and fairly than humans.”

As artificial intelligence gets smarter and our reliance on technology becomes more pronounced, the need for computer ethics becomes more pressing. If we can’t agree on how humans should act, how will we ever decide how an intelligent machine should function? We should make up our minds quickly, since the progress of AI shows no signs of slowing down.

