
Updated May 24, 2023

Clinical Evaluation: How to Write a Regulatory Compliant Literature Review

Nicoleta Spînu

A clinical evaluation is required for all medical devices under the MDR. Its main task is to identify pertinent data on your software device and similar devices. This data helps you demonstrate the intended use of your software device and support its clinical claims: that it is safe, that it presents no risks or that its benefits outweigh the risks, and ideally that its benefits outperform existing devices. You do this by comparing your software device with existing devices (if any) and with studies from peer-reviewed publications, medical guidelines and reports. The comparison is done through a systematic literature review, so it pays off to formalise an efficient process early and avoid potential pitfalls, including notified body non-conformities.

If you want a brief overview of the Clinical Evaluation Report, check out our article Clinical Evaluation Report (CER) For Medical Devices: 3 Easy Steps. To understand what the notified bodies are looking for, our Literature Evaluation Checklist template summarises it. The most common points of failure we hear about are incomplete search coverage, an incomplete audit trail, and data integrity issues or data errors.

A pragmatic literature review approach means a simple, repeatable, reproducible, transparent and reusable process. Below are some practical considerations to help you conduct a high-quality literature review and produce quality data for your Clinical Evaluation Report.

Optimise your search terms strategy as early as possible

Unfortunately, you will only arrive at a good strategy for the terms used to screen databases through trial and error. Our approach starts by defining all possible MeSH terms that describe the intended use of your medical device and the intervention or therapy it targets. We then check the most recent publications, e.g., a recently published systematic review and/or meta-analysis, and note the MeSH terms those authors used. Finally, we define a few search queries and refine them as needed to cover the appraisal criteria and/or to target systematic reviews and randomized controlled trials.

Most importantly, you have to describe your search term strategy explicitly, i.e., how you identified the relevant publications. If someone reads the Literature Search Protocol section and follows it, they should retrieve the same list(s) of publications.
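To make the reproducibility requirement concrete, here is a minimal sketch of how a search query could be run against PubMed through the NCBI E-utilities API. The query string is a made-up example; substitute the MeSH terms and filters from your own Literature Search Protocol.

```python
# Sketch: running a PubMed search via the NCBI E-utilities API.
# The query below is a hypothetical example; replace it with the MeSH terms
# and filters defined in your own Literature Search Protocol.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

query = (
    '"atrial fibrillation"[MeSH Terms] '
    'AND ("mobile applications"[MeSH Terms] OR "software"[MeSH Terms]) '
    'AND ("systematic review"[Publication Type] OR "randomized controlled trial"[Publication Type])'
)

response = requests.get(
    ESEARCH_URL,
    params={"db": "pubmed", "term": query, "retmax": 500, "retmode": "json"},
    timeout=30,
)
response.raise_for_status()
result = response.json()["esearchresult"]

print(f"Hits: {result['count']}")
print("First PMIDs:", result["idlist"][:10])
```

Keeping the exact query strings (and the date you ran them) in your protocol is what makes the search repeatable for an auditor.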

If you are afraid of missing references, or want to explore new ways of mining existing knowledge and have the resources for it, you might consider natural language processing tools (e.g., models by Hugging Face or OpenAI) or network-based tools (Inciteful and Open Knowledge Maps, to name a few). You might also want to check the references cited for any similar medical device already on the market.

Always capture reasons for the inclusion and exclusion of your literature

There is a good chance that auditors will ask why you excluded a certain study. Writing the inclusion and exclusion criteria into your Literature Search Protocol makes it easier not only to argue the choices you made but also to carry out the evaluation itself. A sketch of how such reasons can be captured follows the examples below.

Some examples of inclusion criteria we follow:

Some examples of exclusion criteria we follow:
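As a minimal sketch of how an exclusion reason can be captured for every screened reference (the criteria and records below are hypothetical examples, not our actual appraisal criteria):

```python
# Sketch: recording an explicit exclusion reason for every screened reference.
# The records and reasons below are hypothetical examples.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScreenedReference:
    pmid: str
    title: str
    included: bool
    exclusion_reason: Optional[str] = None  # mandatory whenever included is False

references = [
    ScreenedReference("12345678", "Example RCT on a comparable device", included=True),
    ScreenedReference("23456789", "Animal study", included=False,
                      exclusion_reason="Non-human study population"),
    ScreenedReference("34567890", "Conference abstract only", included=False,
                      exclusion_reason="No full text / insufficient data"),
]

# Sanity check before the reference list goes into the Clinical Evaluation Report:
for ref in references:
    if not ref.included and not ref.exclusion_reason:
        raise ValueError(f"PMID {ref.pmid} excluded without a documented reason")

excluded = [(r.pmid, r.exclusion_reason) for r in references if not r.included]
print("Excluded with reasons:", excluded)
```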

Use multiple data sources

Pulling data from multiple sources makes your report more credible because it shows that all relevant evidence was identified. On the other hand, it creates additional work, mainly to make sure duplicate references are excluded. One way we deduplicate our lists is by retrieving the PMIDs of all PubMed searches; PubMed lets you download the result lists as CSV files. We then use a Python workflow or a tool such as KNIME to identify and delete the duplicates. You can also do this in Google Sheets with conditional formatting, based on the PMIDs or on another variable such as the title or the list of authors.
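As an illustration, here is a minimal Python sketch of the deduplication step, assuming you have downloaded the PubMed result lists as CSV files. The file names are placeholders, and the "PMID" column name should be verified against your own exports.

```python
# Sketch: merging several PubMed CSV exports and removing duplicate PMIDs.
# File names are placeholders; verify the "PMID" column name against your exports.
import pandas as pd

exports = ["search_query_1.csv", "search_query_2.csv", "search_query_3.csv"]

frames = [pd.read_csv(path) for path in exports]
merged = pd.concat(frames, ignore_index=True)

deduplicated = merged.drop_duplicates(subset="PMID", keep="first")

print(f"{len(merged)} rows before, {len(deduplicated)} after deduplication")
deduplicated.to_csv("combined_references.csv", index=False)
```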

Document all the references

You have to keep the lists of all your references somewhere. That includes both the relevant ones (plus their full text) and the ones you excluded from your analysis. Even though several tools (including free ones) can support you with the systematic review and specifically with this task, you might still want to use Google Sheets (like we do). One way to organise your Google Sheet is shown below. Feel free to adapt it to your needs.

PMID | Author(s) | Journal / Book / Guideline | Publication Year | DOI | Title | Abstract | Relevance

Most importantly, no matter which tool you choose, treat it as a living document and keep adding new references and data sources. For example, you can set up alerts in Google Scholar to help you with this.
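If you prefer to bootstrap the sheet programmatically, here is a small sketch that creates an empty tracking file with the columns shown above, ready to be imported into Google Sheets. The file name is a placeholder; adapt the columns to your needs.

```python
# Sketch: creating an empty reference-tracking sheet with the columns shown above.
import pandas as pd

columns = [
    "PMID", "Author(s)", "Journal / Book / Guideline", "Publication Year",
    "DOI", "Title", "Abstract", "Relevance",
]

tracking_sheet = pd.DataFrame(columns=columns)
tracking_sheet.to_csv("literature_tracking_sheet.csv", index=False)
print("Created literature_tracking_sheet.csv with columns:", ", ".join(columns))
```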

Skim efficiently through the list to identify the relevant publications

We recommend using a two-step screening process. First, review the titles and abstracts of your compiled list of publications for relevance against the appraisal criteria. Second, screen the full text of the selected relevant publications for safety and performance data. This is time-consuming, and you might even have to pay to access some publications; we don't have a better solution for now. If you find one, let us know! Some consultants also recommend dual screening, i.e., two people screening the literature independently to avoid introducing errors and to be confident in the findings. The roles are up to you. What matters is that you describe how you went through the screening and document everything, e.g., in a PRISMA diagram.
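As a small sketch of how the PRISMA numbers could be tallied from such a tracking sheet (the file and column names are hypothetical; rename them to match your own sheet):

```python
# Sketch: tallying the numbers for a PRISMA flow diagram from a screening sheet.
# The file name and the decision columns are hypothetical placeholders.
import pandas as pd

sheet = pd.read_csv("combined_references.csv")

records_identified = len(sheet)
after_title_abstract = sheet[sheet["Title/Abstract Decision"] == "include"]
after_full_text = after_title_abstract[
    after_title_abstract["Full Text Decision"] == "include"
]

print(f"Records identified:                 {records_identified}")
print(f"Excluded at title/abstract:         {records_identified - len(after_title_abstract)}")
print(f"Full texts assessed:                {len(after_title_abstract)}")
print(f"Excluded at full text:              {len(after_title_abstract) - len(after_full_text)}")
print(f"Studies included in the evaluation: {len(after_full_text)}")
```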

Evaluate and weigh your clinical data using a grading system

There are various methods to appraise and weigh clinical data. Appendix F of IMDRF MDCE WG/N56 FINAL:2019 describes a grading system in two tables that we find pragmatic and recommend following. Otherwise, you can define your own criteria and assessment method, or rely on other existing approaches such as the ACC/AHA Recommendation System of the American College of Cardiology (ACC) and the American Heart Association (AHA).
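To illustrate the general idea of a grading system, here is a small sketch of a weighting scheme. The criteria and weights are purely illustrative and are not the IMDRF Appendix F tables; define your own criteria in the Clinical Evaluation Plan.

```python
# Sketch of a simple weighting scheme for appraised clinical data.
# Criteria and weights are illustrative only, NOT the IMDRF MDCE WG/N56 Appendix F tables.
from dataclasses import dataclass

@dataclass
class AppraisedStudy:
    pmid: str
    device_equivalence: int   # e.g. 3 = same device, 2 = equivalent, 1 = similar field
    study_design: int         # e.g. 3 = RCT/systematic review, 2 = cohort, 1 = case report
    data_quality: int         # e.g. 3 = complete outcome data, 1 = major gaps

    def weight(self) -> int:
        # Higher total = more weight given to the study's conclusions.
        return self.device_equivalence + self.study_design + self.data_quality

studies = [
    AppraisedStudy("12345678", device_equivalence=3, study_design=3, data_quality=3),
    AppraisedStudy("23456789", device_equivalence=1, study_design=2, data_quality=2),
]

for study in sorted(studies, key=lambda s: s.weight(), reverse=True):
    print(f"PMID {study.pmid}: weight {study.weight()}")
```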

Summarise data effectively

Auditors will always appreciate a table over long descriptions, so turn unstructured content into structured, easy-to-follow content and use tables whenever possible, e.g., for data comparisons. Also, when you write about clinical data, summarise what your included studies showed, how this relates to your medical device, and which benefits those studies reported. Is your medical device expected to have the same benefits? Which risks were mentioned in those studies, and will those risks apply to your medical device? In short, make sure to link the outcomes you mention in the Clinical Evaluation Plan to how those outcomes were evaluated in the identified studies and to the benefits, risks and performance they concluded.
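As a sketch of what such a structured summary could look like when assembled programmatically (the studies, outcomes and columns are made-up examples):

```python
# Sketch: turning per-study findings into a compact comparison table for the report.
# The studies and columns are made-up examples.
import pandas as pd

summary = pd.DataFrame(
    [
        {"PMID": "12345678", "Outcome measured": "Detection sensitivity",
         "Reported benefit": "High sensitivity", "Reported risk": "False positives",
         "Applies to our device": "Yes"},
        {"PMID": "23456789", "Outcome measured": "Time to diagnosis",
         "Reported benefit": "Faster triage", "Reported risk": "None reported",
         "Applies to our device": "Partially (different care setting)"},
    ]
)

print(summary.to_string(index=False))
```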

Lastly, if you are using our template for your Clinical Evaluation Report, there are three sections you should focus most of your time on: Clinical Background, Current Knowledge, State of the Art; Literature Search; and Clinical Data. We recommend starting with Clinical Background, Current Knowledge, State of the Art, especially when there is a clear medical indication. Then continue with the Literature Search, followed by the Clinical Data, and finalise the rest of the document. For the clinical studies identified in the scientific literature, consider assessing them against all criteria at once to avoid reading a publication multiple times: the device names assessed by the authors, relevance based on the literature appraisal criteria, level of evidence, tendency, comparability, performance, safety, and clinical information such as patients, study design and measured outcomes. You can organise this information in tables. Also, I would write the Clinical Evaluation Plan first, but not try to complete it immediately; I would rather come back and refine it after finishing the Clinical Evaluation Report, because by then I know exactly how I identified the relevant publications, the potential safety issues based on those publications, and the performance claims.

On a slightly different note: you want to get your medical software certified under the MDR but don't know where to start? No worries! That's why we built the Wizard. It's a self-guided video course which helps you create your documentation yourself. No prior knowledge required. You should check it out.

Or, if you're looking for the most awesome (in our opinion) eQMS software to manage your documentation, look no further. We've built Formwork, and it even has a free version!

If you're looking for human help, did you know that we also provide some limited consulting? It's limited because we are not many people. We guide startups from start to finish in their medical device compliance.


Nicoleta Spînu

I am a scientist with a background in Pharmaceutical Sciences and computational modelling passionate about digital health technologies. I joined OpenRegulatory for the shared vision and values. I dream of a world where each patient has equal access to effective and safer AI solutions. Let’s make it happen together!
