What's an acceptable number of failures?


During my (brief) stint teaching senior leaders about AI, there was one question that I urged them to learn above all others.

  • What is the acceptable failure rate?

For this, I had to teach them two concepts.

  1. False Positives. For example, telling someone they have cancer when they don't.
  2. False Negatives. For example, telling someone they don't have cancer when they do.

There is a cost associated with both of these errors. In the first case, it is the monetary cost of unnecessary treatment and the emotional cost to the patient. In the second, it is the monetary and reputational cost of getting sued for negligence as well as the emotional cost to the patient.

Here's a handy chart:

             True    False
Positive     😃      😭
Negative     😃      😭

So, we're agreed! Let's eliminate false results. Luckily, that's pretty easy.

If we want to eliminate all False Positives, we just give everything a negative result.

If we want to eliminate all False Negatives, we just give everything a positive result.
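Here's a minimal sketch of what those two strategies do to the counts - the labels are invented, and Python is just standing in for whatever system you're actually running:

```
# Invented ground-truth labels: True = "actually has the condition" (or "actually spam").
actual = [True, False, False, True, False, True, False, False, False, True]

def confusion_counts(actual, predicted):
    """Tally the four cells of the chart above."""
    return {
        "True Positive":  sum(a and p for a, p in zip(actual, predicted)),
        "True Negative":  sum(not a and not p for a, p in zip(actual, predicted)),
        "False Positive": sum(p and not a for a, p in zip(actual, predicted)),
        "False Negative": sum(a and not p for a, p in zip(actual, predicted)),
    }

# Strategy one: give everything a negative result.
print(confusion_counts(actual, [False] * len(actual)))
# -> 0 False Positives... but every genuine case becomes a False Negative.

# Strategy two: give everything a positive result.
print(confusion_counts(actual, [True] * len(actual)))
# -> 0 False Negatives... but every innocent case becomes a False Positive.
```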

Ah. Do you see the problem? We want both to be zero. But humans - and the processes they create - are fallible. There will always be an error somewhere.

So how much error is acceptable?

You may have heard of the maxim "It is better that ten guilty persons escape than that one innocent suffer" - known as Blackstone's ratio. That suggests that a high False Negative rate is an acceptable consequence of the legal system - because a False Positive is unconscionable1.

The problem of False results increases with the total number of results. Let's take, for example, spam email. If you are manually looking over every email you receive - say 10 per day2 - and judging whether they are spam or not, then it's relatively easy to get zero false results.

Now imagine that you're sorting through a million emails. Is that one about Viagra spam - or is it a genuine message from your pharmacy? That one is from your mum - but it's a forwarded chain hoax - is that spam? And so on.

What's the right balance between False Positive and False Negative? Would you rather have genuine email fall into the spam trap, or occasionally see spam in your inbox?
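Most filters put a number on how spammy a message looks and then apply a cut-off, so that balance is literally a dial you choose. A rough sketch, with invented scores:

```
# Invented (spam_score, actually_spam) pairs - stand-ins for a filter's verdicts.
scored = [(0.05, False), (0.20, False), (0.35, True), (0.40, False),
          (0.55, True), (0.60, False), (0.75, True), (0.90, True)]

for threshold in (0.3, 0.5, 0.7):
    junked_genuine = sum(1 for score, spam in scored if score >= threshold and not spam)
    missed_spam    = sum(1 for score, spam in scored if score < threshold and spam)
    print(f"cut-off {threshold}: {junked_genuine} genuine emails junked, "
          f"{missed_spam} spam emails reach the inbox")
```

Slide the cut-off down and genuine email ends up in the spam trap; slide it up and spam reaches the inbox. There is no setting that makes both numbers zero.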

Now apply this thinking to any large-scale process.

For example, responding to abuse requests on social media. I'm sure we've all had the experience of finding blatantly illegal or abusive content, reporting it, and getting back the message "Sorry, we don't think this violates our guidelines."

It's annoying and distressing to encounter a False Negative in the wild.

And, I'm sure many of us will have experienced being notified that our completely innocuous post somehow "went against community guidelines" and was blocked.

It's annoying and distressing to encounter a False Positive in the wild.

But the most painful thing is that we have no way of judging how many True Positives and True Negatives there were.

The Trilemma

The classic choice we are asked to make is:

  1. Fast
  2. Cheap
  3. Reliable
  • Pick any two

If we moderated our own content, it would be reliable and cheap - but it wouldn't be fast.

Things like Bayesian Filters are automated tools which weigh the words in a message against those found in known "good" and "bad" messages. They're very quick, very cheap, but their reliability is poor because spammers learn how to bypass them.
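Very roughly - and this is a toy, with invented training messages and none of the smoothing or feature-engineering a real filter uses - the idea looks like this:

```
import math
from collections import Counter

# Invented examples of known "bad" (spam) and known "good" (ham) messages.
spam_messages = ["cheap viagra now", "win money now", "cheap pills win big"]
ham_messages  = ["lunch meeting tomorrow", "your prescription is ready", "see you tomorrow"]

spam_words = Counter(word for m in spam_messages for word in m.split())
ham_words  = Counter(word for m in ham_messages for word in m.split())

def spam_log_odds(message):
    """Sum how much each word's history tips the balance towards spam."""
    score = 0.0
    for word in message.split():
        # +1 smoothing so a never-seen word doesn't wipe out the whole score.
        p_spam = (spam_words[word] + 1) / (sum(spam_words.values()) + 2)
        p_ham  = (ham_words[word]  + 1) / (sum(ham_words.values())  + 2)
        score += math.log(p_spam / p_ham)
    return score

print(spam_log_odds("cheap pills now"))             # positive -> looks like spam
print(spam_log_odds("your prescription is ready"))  # negative -> looks legitimate
```

Which hints at why the reliability is poor: a spammer only has to misspell "viagra" or pad a message with innocent-sounding words to drag its score back towards "good".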

So we tend to do one of two things:

  1. Outsourced labour
  2. AI

Using real humans in a low-cost area is cheap. Get enough of them and they are fast. But the more you get, the more expensive it is. And the reliability is uneven.

So AI comes to the rescue. It is blazing fast. The more you use it, the cheaper it gets. And the more you correct it, the more reliable it gets.

But that reliability always comes back to the acceptable number of False Positives vs False Negatives.

I've seen some people say that social networks should have zero-tolerance for abuse. I agree with that. I'd happily ban all content that is racist, sexist, homophobic, transphobic, casteist etc3.

But what's an acceptable failure rate? With a million abusive messages coming in every minute, how many is it acceptable to incorrectly mark as benign?

If you say zero, then you have to accept an increase in False Positives - innocent messages getting incorrectly blocked.

The reverse is also true.
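Some back-of-the-envelope arithmetic makes the bind concrete. The traffic mix and error rates below are invented, but the shape of the problem isn't:

```
abusive_per_minute  = 1_000_000   # from the example above
innocent_per_minute = 9_000_000   # invented: the legitimate traffic the same filter must judge

false_negative_rate = 0.001   # abusive posts wrongly marked benign
false_positive_rate = 0.001   # innocent posts wrongly blocked

print(int(abusive_per_minute  * false_negative_rate))   # 1,000 abusive posts slip through - every minute
print(int(innocent_per_minute * false_positive_rate))   # 9,000 innocent posts blocked - every minute
```

Drive one of those rates towards zero and, unless the classifier itself improves, the other climbs.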

And I think that's where people get stuck. We have to accept that failure is an option. More than that, it is an inevitability. We have to make peace with the fact that False Positives and False Negatives can only ever be reduced - not entirely eliminated.


  1. Other people take a different view: "I’m more concerned with bad guys who got out and released than I am with a few that in fact were innocent."↩︎

  2. I wish I got that few! ↩︎

  3. Of course, an AI is only as good as the material it is trained on. Many abuse detectors fail because they are trained on biased data sets. See Algorithms of Oppression by Safiya Noble and Ruha Benjamin's Race After Technology: Abolitionist Tools for the New Jim Code for more. ↩︎



4 thoughts on “What's an acceptable number of failures?”

  1. said on infosec.exchange:

    @Edent speaking of moderation I recall a few people being banned from some social media for allegedly offensive posts. They were being sarcastic, or a couple fighting each other (for fun). People got banned, for good. Each account had 10k followers and an engaged follower base - awesome people (I met some of them live). Nothing to be done; they got banned because the message wasn't understood. Cultural differences, emotions, topics. The very same sentence can change drastically based on the above. False positives can spike in such situations. So how do you appeal if you have a false positive? The bigger the community, the lower the chances of a successful appeals process - IMHO, which again, could become the next big thing after first strikes.

  2. said on mastodon.online:

    @Edent A good reminder of the challenge with any ML model. Though these things often exist as parts of larger systems which can offer a way forward. For example, I think the Trilemma can be solved in cancer detection by chaining two models: you first have a fast and cheap filter with low false negatives, then another, more reliable one with low false positives - potentially involving a human doctor looking at the results.

