
Updated November 27, 2020

FMEA, Part 1: Risk Acceptance Matrix (ISO 14971 Risk Analysis)

Dr. Oliver Eidel
ISO 14971

ISO 14971 requires you to do some sort of risk analysis. Typically, you’ll do a Failure Mode and Effects Analysis (FMEA). If you’ve never done that before, it’s a somewhat vague task in the beginning. I certainly lost weeks trying to figure it out. I’ll walk you through how to do it, so let’s hope that you only lose a few hours.

Generally speaking, in any risk analysis, your goal is to determine in which ways your software could malfunction and what sort of real-world consequences you can subsequently expect.

But before you dive into what could go wrong, you have to determine what sort of harm you think is acceptable. That’s called defining a risk acceptance matrix.

In the meantime, I’ve uploaded a free risk acceptance matrix template for you to create your own. But I’d recommend reading this article first before diving into that.

What the Hell Is a Risk Acceptance Matrix?

Let’s go through a simplified example first. Let’s say you’ve developed software which controls a laser beam used for eye surgery - you know, that sort of laser with which you can correct eye sight.

Let’s just assume, for the sake of this example, that it has some issues. Quite often, patients feel pain during the procedure. But that pain is only temporary and doesn’t influence the outcome of the operation. There’s another “issue” though: In rare cases, the laser backfires and kills the surgeon (remember, this is just an example).

Before we start thinking about whether this is acceptable (is it?), I’d like to show you a way to structure this sort of information: A risk matrix.

In this case, our risk matrix would look like this:

|        | Patient has temporary pain | Surgeon gets killed |
|--------|----------------------------|---------------------|
| Often  | x                          |                     |
| Rarely |                            | x                   |

Can you follow? This is just another way of displaying / structuring the situation. The two x-marks specify those two parts of information: “Often, patients experience temporary pain” and “rarely, the surgeon gets killed”.

Okay. Next question: Is this acceptable? For the sake of this example, let’s say that “rarely killing a surgeon” is not acceptable, while “often causing temporary pain to patients” may be okay. Let’s update your risk matrix, which by the way now is a risk acceptance matrix:

|        | Patient has temporary pain | Surgeon gets killed |
|--------|----------------------------|---------------------|
| Often  | Acceptable                 |                     |
| Rarely |                            | Unacceptable        |

This is interesting, because looking at this table, we might ask the question: What about the empty cells? Yeah, right. What about those? The empty cells are: “Rarely, the patient has temporary pain” and “Often, the surgeon gets killed”. Are those acceptable? I’d argue that killing the surgeon during an operation generally is not acceptable, regardless of whether it happens rarely or often. And causing patients temporary pain - well, we thought it’d be acceptable if it happens “often”, so it’s also acceptable if it happens “rarely”.

The final risk acceptance matrix now looks like this:

|        | Patient has temporary pain | Surgeon gets killed |
|--------|----------------------------|---------------------|
| Often  | Acceptable                 | Unacceptable        |
| Rarely | Acceptable                 | Unacceptable        |

And that’s all there is to risk acceptance matrices. On one axis, you have the probabilities (often / rarely); on the other, you have the severity of harms (temporary pain / surgeon killed). And the cells state whether each combination of the two is acceptable for you as a company.
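To make this concrete, here’s a minimal sketch of the matrix above as a lookup table. The probability and harm labels mirror the table; the dictionary and function names are just illustrative, not part of any standard.

```python
# The 2x2 risk acceptance matrix above, as a simple lookup table.
# Keys are (probability, harm) pairs taken from the table.
RISK_ACCEPTANCE = {
    ("often",  "temporary pain"): "acceptable",
    ("rarely", "temporary pain"): "acceptable",
    ("often",  "surgeon killed"): "unacceptable",
    ("rarely", "surgeon killed"): "unacceptable",
}

def is_acceptable(probability: str, harm: str) -> bool:
    """Return True if this probability/harm combination is acceptable."""
    return RISK_ACCEPTANCE[(probability, harm)] == "acceptable"

print(is_acceptable("rarely", "temporary pain"))  # True
print(is_acceptable("often", "surgeon killed"))   # False
```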

Great! But, now, to make this more formal so that an auditor will actually take it seriously, you need to introduce a slightly more methodological approach. I see at least three problems with the risk acceptance matrix above:

  1. It only covers two harms: Patient pain and surgeon death.
  2. What’s the definition of “often” and “rarely”?
  3. It doesn’t say anything about absolute numbers: How many patients experience pain? How many surgeons are killed?

In our next iteration, we’ll see how we can fix those.

Adding Severity Categories to the Risk Acceptance Matrix

Let’s fix the first problem first: We’re only covering two harms, patient pain and surgeon death. If we wanted to cover more harms, we’d have to add new columns to our matrix and things would be messy very soon.

How can we solve this instead? Let’s group harms into categories. For example, we could separate reversible from irreversible damage, and we could add death as a separate category:

| Severity Category | Reversible Damage | Irreversible Damage | Death | Example         |
|-------------------|-------------------|---------------------|-------|-----------------|
| S1                | x                 |                     |       | Temporary Pain  |
| S2                |                   | x                   |       | Blindness       |
| S3                |                   |                     | x     | Physician Death |

Okay, there’s a lot of information here.

First, I created three distinct severity categories: S1, S2 and S3 (feel free to name them however you want). Each severity category has different characteristics (reversible / irreversible / death) and represents different harms, see the examples in the rightmost column. Temporary pain would be an example of S1, while physician death now is S3.

I also added another severity category in between, S2. That would be irreversible damage, but not death - blindness would be a typical example if we’re talking about an eye-surgery laser device.

I hope you can still follow. Let’s apply these newfound severity categories to our risk acceptance matrix and see how it looks:

|        | S1         | S2 | S3           |
|--------|------------|----|--------------|
| Often  | Acceptable | ?  | Unacceptable |
| Rarely | Acceptable | ?  | Unacceptable |

What did I do? I only replaced “patient has temporary pain” with S1, and “surgeon gets killed” with S3. We also already had determined that S1 is always acceptable while S3 never is. Next, I added the column S2 in between. So far, we haven’t put any thought into whether S2 is acceptable - is causing blindness acceptable? Depends how often. Is it acceptable if it happens rarely? And if it happens often?

Well, I don’t know, man. I’ll leave that as an exercise to you. Let’s move on and solve some more regulatory documentation problems first. We still need to define what “often” and “rarely” really mean.

Adding Probability Categories to the Risk Acceptance Matrix

How often is often? How rare is rare? At some point in history, humans started using numbers to describe probabilities. Let’s apply this knowledge to our regulatory documentation.

A quick maths refresher (yes, seriously): A probability of 1 is 100%, while 0.1 is 10%, and so on.

Let’s say “often” is 0.1, while “rarely” is 0.001.

If we’d create a table to describe our probability categories, it may look like this:

| Probability Category | Probability |
|----------------------|-------------|
| Often                | 0.1         |
| Rarely               | 0.001       |

Okay. But what happens now if an event has a probability of 1? That’s more than “often”. But “often” is defined as 0.1. Great. So I suppose a probability category shouldn’t be defined as a distinct number, but rather as a range. So anything with a probability from 1 to 0.01 could be categorized as “often”, similarly for “rarely”. Let’s see:

| Probability Category | Probability max | Probability min |
|----------------------|-----------------|-----------------|
| Often                | 1               | 0.01            |
| Rarely               | 0.01            | 0.0001          |

Great! Next problem: What if something is less probable than the lowest value for “rarely”, i.e. lower than 0.0001? Looks like we should have a probability category which has zero as its lowest limit. Let’s add that:

| Probability Category | Probability max | Probability min |
|----------------------|-----------------|-----------------|
| Often                | 1               | 0.01            |
| Rarely               | 0.01            | 0.0001          |
| Very rarely or never | 0.0001          | 0               |

For lack of better words, I added “very rarely or never” as a category which ranges from 0.0001 to 0. So it includes all events with probabilities lower than 0.0001.

Now, “very rarely or never” sounds really awkward. Let’s make the naming of our probability categories more systematic: similar to S1, S2, etc. for severity categories, let’s use P1, P2, etc.:

| Probability Category | Probability max | Probability min |
|----------------------|-----------------|-----------------|
| P1                   | 1               | 0.01            |
| P2                   | 0.01            | 0.0001          |
| P3                   | 0.0001          | 0               |
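The table above can be sketched as a small classifier: given a numeric probability per use, return the category. The ranges mirror the table; since the boundary values (e.g. 0.01) appear in two rows, I’m assuming here that the upper bound is inclusive and the lower bound exclusive, which is just one reasonable convention.

```python
# Probability categories from the table above, as (name, max, min) tuples.
PROBABILITY_CATEGORIES = [
    ("P1", 1.0,    0.01),
    ("P2", 0.01,   0.0001),
    ("P3", 0.0001, 0.0),
]

def probability_category(p: float) -> str:
    """Map a per-use probability to its category (upper bound inclusive)."""
    for name, p_max, p_min in PROBABILITY_CATEGORIES:
        # p == 0 belongs to the lowest category ("very rarely or never").
        if p_min < p <= p_max or (p == 0.0 and p_min == 0.0):
            return name
    raise ValueError(f"probability out of range: {p}")

print(probability_category(0.1))      # P1
print(probability_category(0.001))    # P2
print(probability_category(0.00005))  # P3
```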

Okay, great, almost done! One final question: We still don’t know anything about absolute numbers. If something happens “rarely” (P2), i.e. with a probability between 0.01 and 0.0001, how often does it actually happen in the real world? For that, we need to know how often our medical device is going to be used.

We could take the number of users and multiply that with usages of our device per day:

| Column | Description                         | Value |
|--------|-------------------------------------|-------|
| A      | User count                          | 50    |
| B      | Usages per user per day             | 5     |
| C      | Total usages per year (A * B * 365) | 91250 |

We assume we have 50 users - let’s say that’s the number of surgeons using our laser device right now (column A). Each surgeon does five operations with it per day (column B). Given that a year has 365 days (well, technically, 365.25), we multiply the surgeon count by the usages per day (50 * 5 = 250) and multiply that number by the number of days per year (250 * 365) to arrive at a total of 91250 usages per year. Makes sense?
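The arithmetic above, spelled out; the numbers are the example values from the table:

```python
# Usage estimate from the table: A (users) * B (usages/day) * days/year.
users = 50                    # column A: surgeons using the device
usages_per_user_per_day = 5   # column B: operations per surgeon per day
days_per_year = 365

total_usages_per_year = users * usages_per_user_per_day * days_per_year
print(total_usages_per_year)  # 91250
```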

By the way, if your device is not on the market yet, you’ll have to estimate these (you have customers lined up, right?) and update them later if they change.

Okay, so we have the usage count per year. Now we can use that to add absolute numbers to our probability categories:

| Probability Category | Probability max | Probability min | Events/year max |
|----------------------|-----------------|-----------------|-----------------|
| P1                   | 1               | 0.01            | 91250           |
| P2                   | 0.01            | 0.0001          | 913             |
| P3                   | 0.0001          | 0               | 10              |

Note that I rounded the numbers up. The new events/year max column simply multiplies the probability max column by the absolute usage count per year, which is 91250 in our case. Can you follow? I hope you can. Otherwise send me an email and I’ll try to improve this.
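That multiplication, with the rounding up, looks like this (91250 is the total-usages estimate from the previous step):

```python
import math

# Upper bound on events per year for each probability category:
# probability max * total usages per year, rounded up as in the table.
total_usages_per_year = 91250
probability_max = {"P1": 1.0, "P2": 0.01, "P3": 0.0001}

events_per_year_max = {
    category: math.ceil(p_max * total_usages_per_year)
    for category, p_max in probability_max.items()
}
print(events_per_year_max)  # {'P1': 91250, 'P2': 913, 'P3': 10}
```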

Okay! Finally, we can update our risk acceptance matrix with our newly defined probability categories.

Putting Severity and Probability Categories Together

I added examples (e.g. pain) and our old descriptions of probability categories (e.g. often) to the table to make it more understandable.

|                  | S1 (e.g. pain) | S2 (e.g. blindness) | S3 (death)   |
|------------------|----------------|---------------------|--------------|
| P1 (often)       | Acceptable     | ?                   | Unacceptable |
| P2 (rarely)      | Acceptable     | ?                   | Unacceptable |
| P3 (very rarely) | Acceptable     | ?                   | Unacceptable |
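The matrix at this stage can be sketched as a lookup once more, now keyed on category pairs. The S2 cells are deliberately `None`, because we haven’t decided them yet; the function name is illustrative.

```python
# Risk acceptance matrix keyed on (probability, severity) categories.
# True = acceptable, False = unacceptable, None = not yet decided (the "?").
RISK_ACCEPTANCE_MATRIX = {
    ("P1", "S1"): True, ("P1", "S2"): None, ("P1", "S3"): False,
    ("P2", "S1"): True, ("P2", "S2"): None, ("P2", "S3"): False,
    ("P3", "S1"): True, ("P3", "S2"): None, ("P3", "S3"): False,
}

def check_risk(probability_category: str, severity_category: str):
    """Look up whether a probability/severity combination is acceptable."""
    return RISK_ACCEPTANCE_MATRIX[(probability_category, severity_category)]

print(check_risk("P3", "S3"))  # False
print(check_risk("P1", "S1"))  # True
```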

Great. Now we also have enough data to fill out the remaining cells. The question is: Is a harm with severity S2, e.g. blindness, acceptable? Looking at the absolute numbers for our probability categories, we’d be causing blindness at most 91250 times per year (P1), 913 times (P2), or 10 times (P3).

So, what do you think? Is causing blindness okay when it affects up to 10 patients per year (out of 91250 usages in total)? As always, the answer is: It depends. I’d argue that for this sort of device it’s unacceptable.

What’s the context of laser eye surgery? In this example, it’s the correction of short- (or far-) sightedness. Is it an optional treatment? Hell yes. Patients can just wear glasses or contact lenses instead. So causing blindness in patients sounds unacceptable, given that the patient could just as well walk out of the hospital and wear glasses instead.

Things could be very different for a different device. Let’s consider a defibrillator instead - that device from the movies whose two metal paddles are applied to patients with cardiac arrest before sending an electric current through them. Typical patients for this device have just suffered a cardiac arrest and using this device may reset their cardiac rhythm and “heal” them instantly.

How about the alternative of not using this device? Well, in all likelihood, the patient would die. Now, imagine that this device causes blindness in 1% of applications. Doesn’t sound like a bad deal - better be alive and blind than dead. Sure, this is a strongly simplified example, but I hope it illustrates the point: It depends on the context of your device and its alternatives.

(You’re probably wondering where you document these things: In the clinical evaluation.)

Alright, that’s it! You’re hopefully capable of defining your own risk matrix now. Here’s my template for you to download.

After you’ve done that, the next step is to actually fill out a risk table. Like it or not, the main chunk of work of doing an FMEA is still ahead of you :)

Congratulations! You read this far.


I work as a regulatory consultant for Healthcare software startups. I try to publish all my knowledge here so that startups can certify their medical devices themselves in the future.

If you're still lost and have further questions, just send me an email and I'll be happy to answer them for free. More about me..
