So you want to learn about Software (or Hardware) Test Runs. The first distinction to make is that these are test runs, not the actual tests. A quick table!
| | Software Test | Software Test Run |
|---|---|---|
| Content | Defines the actual test with setup instructions, steps and expected results. | Describes an actual execution of the test on a certain date with certain results. Also, were these results the expected results and did the test pass? |
| How often does it exist in a Techdoc product? | Once. A specific Software Test is only defined once (in a product version in Formwork). | Can exist multiple times, because you might "execute" a Software Test multiple times, leading to multiple Software Test Runs. |
| Requires review? | Yes. Your list of Software Tests should be reviewed (and thereby released) before you start executing them. | No. Just run your tests by creating Software Test Runs. |
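If you think in data structures, the table boils down to a one-to-many relationship: one test definition, many runs of it. Here's a minimal sketch of that idea in plain Python; the class and field names are my own invention for illustration, not Formwork's actual data model:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical model (NOT Formwork's schema): a Software Test is
# defined once, but can be referenced by many Software Test Runs.

@dataclass
class SoftwareTest:
    internal_id: str          # e.g. "SWT-1"
    title: str
    steps: list[str]
    expected_results: str

@dataclass
class SoftwareTestRun:
    test: SoftwareTest        # points back at the single test definition
    executed_on: date
    actual_results: str
    status: str               # e.g. "planned", "success" or "fail"

login_test = SoftwareTest(
    "SWT-1", "User login",
    ["Open app", "Enter credentials", "Submit"],
    "User is logged in",
)

# The same test executed twice leads to two independent runs.
run_1 = SoftwareTestRun(login_test, date(2024, 1, 10), "Logged in fine", "success")
run_2 = SoftwareTestRun(login_test, date(2024, 2, 3), "Login timed out", "fail")
```

Note that both runs share the exact same test object: editing the test definition changes what *future* runs are measured against, while past runs keep their own recorded results.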
So all of this might sound very theoretical. Let’s just create a Software Test Run and then you’ll see what I mean. It’s very intuitive (famous last words).
First off, navigate to “Products” in the top menu of Formwork and then to “Software Test Runs” in the left menu. Note that, for a hardware device, they might be referred to as “System Test Runs”. Here’s what you see:
In this screenshot, one Software Test Run has already been created. Cool. Note that in your own product, even if you've created some Software Tests already, your list of Software Test Runs starts out empty. You'll see why very soon.
Let’s create a new Software Test Run:
So this is very simple. Here’s what you can enter:
- Internal ID: Internal ID of the test run, assigned automatically.
- Title: Describe what you tested, e.g. “Test run before first release” or so.
- Description: A more in-depth description, e.g. of your test setup: which hardware, software and configuration did you use, etc.
- Software Tests: This is the most important part. Here, you select the Software Tests which you actually plan to execute next. Think of it like a restaurant menu: You’ve defined the menu items already (as Software Tests), but now you’re choosing which to “cook” (= run).
If this is a major testing session, you might select all of your tests; or you might just select a few tests if e.g. you want to re-run a specific set of tests due to some recent failures or so.
Cool. Once you’ve created a Software Test Run, here’s how it looks:
This is super important to understand, because there's quite a bit of information here. First off, you can see that this Software Test Run contains four tests (colorful line at the top): three are still planned, and one has been executed successfully ("success"). We'll see what that means next.
If you scroll further down, you can see the first Software Test which is part of this Software Test Run (SWT-1). It has the status “Success”. You can see the instructions, steps and expected results below.
Now! The main goal of a Software Test Run is to provide you with a list of Software Tests which you should run, so that you can record their results. And as we’ve seen, we are at the top of this list and currently looking at the first Software Test, SWT-1.
So now we can actually click on this test to enter results. Here’s how that looks:
A magic grey box appears. I guess you get it now, but I'll still explain it: much of the content is taken from the Software Test which you defined in the Software Tests section a while ago: test instructions, steps and expected results. Your job now, in the Software Test Runs, is to "run" this test by actually performing the steps in your medical device software and writing down the outcome in the "results" section. Finally, you select the result of this test in the dropdown above – as you can see in the screenshot, here we've selected "Success" (nice!). Then you hit "Save", and you're done!
Then you scroll down to the next test (e.g. SWT-2), do the same thing, and so on.
So, again, a Software Test Run is kind of like a “protocol” for you: It outlines which Software Tests you should go through, shows you the setup instructions, steps and expected results, and you write down the actual results and outcome of each test. Easy.
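As a side note, the status summary at the top of a run ("3 planned, 1 success") is conceptually just a tally over the tests in the run. A tiny sketch of that idea, again in plain Python and unrelated to Formwork's actual implementation:

```python
from collections import Counter

def summarize(statuses: list[str]) -> dict[str, int]:
    """Count how many tests in a run are in each state.

    Status names ("planned", "success", "fail") are assumptions
    for illustration, not Formwork's exact terminology.
    """
    return dict(Counter(statuses))

# One test recorded as "success", three still waiting to be run:
run_statuses = ["success", "planned", "planned", "planned"]
print(summarize(run_statuses))  # prints {'success': 1, 'planned': 3}
```

As you record results, the "planned" count shrinks and the "success" (or "fail") counts grow; when nothing is left in "planned", the protocol is complete.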
So that’s testing. Wasn’t that simple?
I guess you only appreciate the full simplicity of this workflow if you’ve ever done this sort of test documentation in Google Sheets (ugh), a GitHub markdown repo (good luck with the tables) or Jira (no comment). Formwork is way faster.