The IEC 62304 requires you to do Software System Testing. What’s that? And how does it relate to software verification? According to that article, software verification already includes unit and integration tests? What the hell?
Software System Tests vs. Software Verification
It’s better to approach this from another angle. Your software system tests should cover all of your software requirements. In other words, for each software requirement, you should be able to point to a software system test and say “we covered it and the test passed, look!”.
Software verification, on the other hand, is done on the “pull request level”: you write some code, then you do stuff like code review and run some CI tests before you merge it to master. That’s software verification in the 62304 sense.
So, for all software requirements for which you already had unit / integration tests, you’re covered. Just ensure that you run them again (and have proof for it!) prior to releasing a new public version of your medical device software.
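For automated suites, the “proof” can simply be an archived test report. Here’s a minimal sketch assuming pytest and a `release-evidence/` directory; both are my own illustrative choices, not anything the standard mandates:

```python
import datetime
import pathlib
import subprocess


def evidence_report_path(report_dir: str = "release-evidence") -> str:
    """Build a dated path for the JUnit XML report kept as release evidence."""
    pathlib.Path(report_dir).mkdir(exist_ok=True)
    stamp = datetime.date.today().isoformat()
    return f"{report_dir}/system-tests-{stamp}.xml"


def run_release_tests() -> int:
    """Run the whole suite; a non-zero return code should block the release."""
    report = evidence_report_path()
    # --junitxml produces a machine-readable report you can archive as proof
    return subprocess.call(["pytest", f"--junitxml={report}"])
```

The point is not the specific tooling but that each release leaves behind a dated, retrievable artifact showing the tests were actually run.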
But some of your software requirements may not be implemented in code. Let’s see how that’s possible.
Consider these two example software requirements:
- Firewall is set up and only ports 80 and 443 are open (security requirement).
- Data is stored in data centers located in Germany (data privacy / GDPR requirement).
Huh. There probably won’t be a pull request implementing those requirements. Instead, you’ve probably set those configuration values at your cloud provider (AWS, GCP, etc.). But you still have to prove that those are covered by tests. So what do you do now?
Easy. Regulation is all about having documentation which proves something. The actual technical implementation doesn’t have to be elegant (hence all the crappy medical software out there). In this case, we could just take screenshots of the configuration in the AWS / GCP cloud console. Yes, seriously. Of course, you could go ahead and write a test for this, port-scanning your server etc., but just checking the config value would be the more straightforward solution.
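If you’d rather have an automated check than a screenshot, a minimal sketch could look like this. The allowed-ports set and the probe function are my own illustration; checking the config value in the cloud console remains the simpler option:

```python
import socket

# Assumption for this example: only HTTP and HTTPS may be open
ALLOWED_PORTS = {80, 443}


def firewall_config_ok(open_ports) -> bool:
    """True only if exactly the allowed ports (80, 443) are open."""
    return set(open_ports) == ALLOWED_PORTS


def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Crude probe: can we open a TCP connection to host:port?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

You could feed `port_is_open` results for a range of ports into `firewall_config_ok` as a scheduled check, but for an audit, the documented config value usually suffices.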
Human (Manual) Testing
And then there are the software requirements for which you don’t have any unit or integration tests. Typically, that’s the case for GUI-heavy applications. Or if software requirements span multiple software systems which aren’t trivially testable (microservice chaos?). Or if your development team simply hasn’t written any so far (your company’s problems may run even deeper than that: too little time? Changing business strategies?).
In any case, your situation is not hopeless yet. You can still fall back to human (QA?) testing. In other words, you create a test protocol listing the steps to be performed for each “test”. The tester has to document their results. And, as always, it’s good to have some proof; in this case, screenshots.
This will slow down your release cycle substantially for obvious reasons. So try to automate as much of your testing as possible.
But, in my experience, there is always some human testing involved. You’ll need developers to check configuration values (see prior section). And you’ll need testers to go through some GUI flows which aren’t covered by automated tests. By the way, those testers can be people from your company; no need to hire an expensive testing lab.
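A manual test protocol like the one described above can also be kept as structured data instead of a free-form document. Here’s a minimal sketch; the class and field names are my own, not from any standard:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ProtocolStep:
    instruction: str               # what the tester should do
    expected: str                  # the expected result
    actual: str = ""               # what the tester observed
    passed: Optional[bool] = None  # None = not executed yet


@dataclass
class TestProtocol:
    requirement_id: str
    tester: str
    steps: list = field(default_factory=list)

    def result(self) -> str:
        """Overall verdict: a single failed step fails the whole protocol."""
        if any(s.passed is False for s in self.steps):
            return "FAIL"
        if self.steps and all(s.passed for s in self.steps):
            return "PASS"
        return "INCOMPLETE"
```

Storing protocols this way makes it trivial to generate the human-readable document and to query which requirements still lack a passing run.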
Re-do All Tests After Minor Code Changes?
Now, what happens when your software is certified and released and you make a minor change to it? Do you have to re-do all tests? Read my answer to that question.
Putting it Together: Ensure Traceability
Finally, how do you document all of this? As with all aspects of your technical documentation, traceability is very important here. You need to show which software requirements were covered by which tests, and vice versa.
Let’s say you have these simplified software requirements (here’s the article on how to write them):
| SR ID | Software Requirement |
|---|---|
| 1 | On first launch, show introduction |
| 2 | Login is only possible with valid email and password |
| 3 | Data is stored in data centers located in Germany |
These are interesting examples, because the “testing” strategy is different for each of them. The first one will be tested manually, the second one automatically and proof for the third one is a screenshot of the configuration.
I’ve uploaded a complete software system test plan template, but for these examples, let’s look at a simplified version. It could look like this (SR ID refers to the tested software requirement):
| SR ID | Test | Steps |
|---|---|---|
| 1 | Introduction shown on first launch | 1. Install app<br>2. Launch app |
| 2 | Only correct email/password combination can log in | 1. Execute automated tests |
| 3 | Data is stored in data centers located in Germany | 1. Verify configuration |
And then, feel free to add attachments as appropriate. For 1., you could add a screenshot, for 2. a log of the test execution and for 3. a screenshot from your cloud console. I suppose the screenshot would be most important for 3. as otherwise there isn’t any proof at all that you checked the configuration.
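The traceability check itself is easy to automate. A minimal sketch, where the requirement IDs and evidence strings are made up for illustration:

```python
def untested_requirements(requirement_ids, trace):
    """Return IDs of requirements with no linked test or evidence."""
    return [r for r in requirement_ids if not trace.get(r)]


# Example trace matrix for the three requirements above
trace = {
    "SR-1": ["manual protocol: introduction on first launch"],
    "SR-2": ["automated: test_login_valid_credentials"],
    "SR-3": ["screenshot: cloud console data-center region"],
}
```

Running such a check in CI means a requirement can never silently drop out of your test coverage.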
Which leads us to the question: Do you always need some sort of proof?
Do You Always Need Screenshots Or Proof?
I don’t know. At minimum, you need the test protocol (see above) with expected and actual results. It heavily depends on your auditor whether they believe that you actually performed the tests. If you include proof, you’re fine.
If you want to make things lean, you could omit proof for automated tests; no need to copy-paste random console output into random documents. As long as your test results are stored somewhere in your CI system and you can retrieve them on demand if your auditor asks, you’re fine.
Conclusion: Have Proof and Traceability
If you think that all this sounds complex, don’t worry, it’s not. Here are the key takeaways:
- Have proof (test plan + protocol) that all your software requirements are covered by tests.
- You need traceability to show which test covered which requirement.
That’s it. Everything else is about implementation.