An ethical question with a concrete answer

Ethics questions typically present a scenario (often unrealistic and missing key details) where you have to choose between a set of difficult options. For example: “An old lady with no family and a young man with a promising career who is loved by many people are both admitted to the ER. Whom would you prioritize for treatment if their probabilities of survival are the same?”

However, sometimes the answer is quite a bit more straightforward. The question I saw is the following: Suppose that an intelligent machine has the ability to predict, with 99.99% accuracy, whether someone will commit murder. Would it be permissible to arrest people based on the predictions of the machine?

There seems to be no ‘correct’ way to answer this question, but only because the person who asked it doesn’t understand statistics. There is a phenomenon in statistics called the false positive paradox: when the condition being tested for is rare, even a very accurate test will produce far more false positives than true positives.

Here is an example. You are one person out of, say, 7 billion. Suppose there is a machine that can identify a person with 99.9999% accuracy. Someone gets scanned by the machine, and the machine says it’s you. What is the probability that the machine is right?

There are two ways the machine could give that reading: either the person scanned really is you and the machine is correct, or the person is not you and the machine made an error. The probability that the machine is correct is 99.9999%, and the prior probability that the person scanned is you is one in 7 billion. Conversely, the probability that the machine errs is 0.0001%, and the probability that the person is not you is 6,999,999,999 out of 7 billion. By Bayes’ theorem, the probability that the person actually is you, given that the machine says it is you, is

\dfrac{\frac{999999}{1000000} \times \frac{1}{7000000000}}{\frac{999999}{1000000} \times \frac{1}{7000000000} + \frac{1}{1000000} \times \frac{6999999999}{7000000000}}.

Evaluating, the probability that the machine is right is less than 0.015%. The paradox arises because the machine’s error rate (one in a million) is enormous compared to the prior probability of the event (one in 7 billion): it is far more likely that the machine misidentified a stranger than that it correctly picked you out of 7 billion people.
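
To make the arithmetic concrete, here is a short Python sketch (not part of the original question) that computes the same posterior probability with Bayes’ theorem; the function name posterior is chosen purely for illustration.

```python
def posterior(prior, accuracy):
    # P(true match | machine reports a match), assuming the machine's
    # true-positive and true-negative rates both equal `accuracy`.
    true_positive = accuracy * prior                # machine is right and it really is you
    false_positive = (1 - accuracy) * (1 - prior)   # machine errs on someone who is not you
    return true_positive / (true_positive + false_positive)

# One person out of 7 billion, machine accuracy 99.9999%
p = posterior(prior=1 / 7_000_000_000, accuracy=0.999999)
print(f"{p:.4%}")  # about 0.0143%, i.e. less than 0.015%
```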

Now back to the question. Whether it is ethical (that is, whether the machine is right an acceptable proportion of the time) depends on the actual murder rate of the place in question. The global average is currently about 6.2 murders per 100,000 people, or 31 per half a million. Assuming the machine is 99.99% accurate, the probability that a person the machine identifies as a murderer actually is one is given by

\dfrac{\frac{9999}{10000} \times \frac{31}{500000}}{\frac{9999}{10000} \times \frac{31}{500000} + \frac{1}{10000} \times \frac{499969}{500000}}

or roughly 38.3%. The machine would be right well under half the time, far below any reasonable standard for arresting someone. So this question has a concrete answer and is not really debatable.
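
The same sketch applies here (again, an illustration rather than anything from the original question), just with the murder rate as the prior and 99.99% as the accuracy.

```python
def posterior(prior, accuracy):
    # P(actual murderer | machine flags the person as a murderer),
    # assuming a single accuracy figure for both kinds of error.
    true_positive = accuracy * prior
    false_positive = (1 - accuracy) * (1 - prior)
    return true_positive / (true_positive + false_positive)

# Global murder rate of 31 per half a million, machine accuracy 99.99%
p = posterior(prior=31 / 500_000, accuracy=0.9999)
print(f"{p:.1%}")  # about 38.3%
```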
