A clinical evaluation is required for all medical devices under the MDR. Its main task is to identify pertinent clinical data on your software device and on similar devices. This data helps you demonstrate the intended use of your software device and support your clinical claims: that the device is safe, that it presents no risks or that its benefits outweigh its risks, and ideally that its benefits outperform those of existing devices, e.g., by comparing your software device with existing ones (if any) or with studies from peer-reviewed publications, medical guidelines and reports. The data is gathered through a systematic literature review, so it pays to formalise an efficient process early and avoid potential pitfalls, including notified body non-conformities.
If you want a brief overview of the Clinical Evaluation Report, check out our article Clinical Evaluation Report (CER) For Medical Devices: 3 Easy Steps. To understand what the notified bodies are looking for, our template Literature Evaluation Checklist summarises it. The most common points of failure we hear about are incomplete search coverage, an incomplete audit trail, and data integrity/data errors.
A pragmatic literature review approach means a simple, repeatable, reproducible, transparent and reusable process. Below are some practical considerations to help you conduct a high-quality literature review and produce quality data for your Clinical Evaluation Report.
Optimise your search terms strategy as early as possible
Unfortunately, you will only be able to arrive at a good database search strategy through trial and error. Our approach starts by defining all possible MeSH terms that describe the intended use of your medical device and the intervention/therapy your medical device aims for. We then check the most recent publications, e.g., a recently published systematic review and/or meta-analysis, and write down the MeSH terms those authors used. We then define a few search queries and refine them if needed in order to cover the appraisal criteria and/or target systematic reviews and randomized controlled trials.
Most importantly, you will have to describe your search terms strategy explicitly, i.e., how you identified the relevant publications. If someone reads the Literature Search Protocol section and follows it, they should retrieve the same list(s) of publications.
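To keep the queries reproducible, it can help to build them programmatically from your term lists rather than typing them ad hoc. Below is a minimal sketch of composing a PubMed-style boolean query; all the terms shown are hypothetical placeholders, not a recommended search strategy.

```python
# Sketch: composing a reproducible PubMed-style boolean query from term lists.
# The terms below are hypothetical placeholders for your own MeSH terms.

condition_terms = [
    '"Diabetes Mellitus, Type 2"[MeSH]',
    "type 2 diabetes[Title/Abstract]",
]
intervention_terms = [
    '"Mobile Applications"[MeSH]',
    "digital therapeutic[Title/Abstract]",
]
study_filters = [
    "Randomized Controlled Trial[Publication Type]",
    "Systematic Review[Publication Type]",
]

def or_group(terms):
    """Join alternative terms with OR and wrap them in parentheses."""
    return "(" + " OR ".join(terms) + ")"

# Combine the groups with AND: condition AND intervention AND study type
query = " AND ".join(
    [or_group(condition_terms), or_group(intervention_terms), or_group(study_filters)]
)
print(query)
```

Storing the term lists in your Literature Search Protocol and generating the query from them means the documented strategy and the executed search cannot drift apart.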
If you fear missing references, or want to explore newer ways of deriving existing knowledge and have the resources for it, you might want to consider natural language processing tools (models from Hugging Face or OpenAI) or network-derived tools (Inciteful and Open Knowledge Maps, to name a few). You might also consider checking the references of any similar medical device on the market.
Always capture reasons for the inclusion and exclusion of your literature
There is a big chance that auditors will ask why you excluded a certain study. Writing the inclusion and exclusion criteria into your Literature Search Protocol eases not only the argumentation of your choices but also the evaluation process itself.
Some examples of inclusion criteria we follow:
- Type of studies, e.g., randomized controlled trials, systematic reviews/meta-analyses, observational studies (prospective/retrospective cohort studies, case-control/cross-sectional studies, case reports/case series/other non-analytic studies),
- The medical condition/disease,
- The intervention/method/digital therapy,
- Characteristics of patients/user profile, e.g., adults 18 years and older.
Some examples of exclusion criteria we follow:
- Duplicate references,
- Not available in English or German,
- Other medical areas,
- Animal studies.
Use multiple data sources
Pulling data from multiple sources lends your report credibility, as it shows that all available evidence was identified. On the other hand, it creates additional work, mainly in removing duplicate references. One way we deduplicate our lists is by retrieving the PMIDs of all searches from PubMed, which lets you download the result lists as CSV files. We then use either a Python workflow or other tools such as KNIME to identify and delete the duplicates. You can also do this in Google Sheets using conditional formatting, based on the PMIDs or on another variable, e.g., titles or lists of authors.
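The deduplication step itself is small. Here is a minimal sketch of deduplicating merged PubMed exports by PMID; the column name "PMID" matches PubMed's CSV export, while the example rows are invented for illustration.

```python
# Minimal sketch: deduplicate rows merged from several PubMed CSV exports.
# The "PMID" column name matches PubMed's CSV export format.

def deduplicate(rows, key="PMID"):
    """Keep the first occurrence of each PMID, preserving input order."""
    seen = set()
    unique = []
    for row in rows:
        pmid = row[key].strip()
        if pmid not in seen:
            seen.add(pmid)
            unique.append(row)
    return unique

# Example: rows merged from two search exports (invented data)
rows = [
    {"PMID": "123", "Title": "Study A"},
    {"PMID": "456", "Title": "Study B"},
    {"PMID": "123", "Title": "Study A"},  # duplicate hit from a second query
]
print(len(deduplicate(rows)))  # 2
```

In practice you would read the rows with `csv.DictReader` from each downloaded file before merging; keeping the first occurrence preserves the order of your primary search.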
Document all the references
You have to keep the lists of all your references somewhere. That includes both the relevant ones (plus their full text) and the ones you excluded from your analysis. Even though there are several tools (including free ones) to support you with the systematic review and specifically with this task, you might still want to consider Google Sheets (as we do). One way to organise your Google Sheet is shown below. Feel free to adapt it to your needs.
| PMID | Author(s) | Journal / Book / Guideline | Publication Year | DOI | Title | Abstract | Relevance |
|------|-----------|----------------------------|------------------|-----|-------|----------|-----------|
Most importantly, no matter the tool you choose, make it a living document to add any new references and data sources. For example, you can set alerts in Google Scholar to help you with this.
Skim efficiently through the list to identify the relevant publications
We recommend using a two-step screening process. Firstly, review the titles and abstracts of your compiled list of publications for relevance against the appraisal criteria. Secondly, screen the full text of the selected relevant publications for safety and performance data. This is time-consuming, and you might even have to pay to access some publications. We don't have a better solution for now. If you find one, let us know! Some consultants also recommend dual screening, i.e., two people screening the literature independently, to avoid introducing errors and to be confident in the findings. The roles are up to you. What matters is that you describe how you went through the screening and document everything, e.g., in a PRISMA diagram.
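If you log each screening decision as you go, the numbers for a PRISMA-style flow diagram fall out automatically. The sketch below shows one way to tally them; the decision labels, exclusion reasons and records are illustrative, not prescribed by PRISMA.

```python
# Sketch: tallying two-step screening decisions for a PRISMA-style flow.
# Decision labels and exclusion reasons below are illustrative examples.
from collections import Counter

screening_log = [
    {"pmid": "101", "stage1": "include", "stage2": "include"},
    {"pmid": "102", "stage1": "exclude", "reason": "animal study"},
    {"pmid": "103", "stage1": "include", "stage2": "exclude",
     "reason": "no safety or performance data"},
]

# Records identified, retained after title/abstract screening, and included
identified = len(screening_log)
after_title_abstract = sum(1 for r in screening_log if r["stage1"] == "include")
included = sum(1 for r in screening_log if r.get("stage2") == "include")

# Exclusion reasons, ready to report next to the diagram
exclusion_reasons = Counter(r["reason"] for r in screening_log if "reason" in r)

print(identified, after_title_abstract, included)  # 3 2 1
```

Keeping the log as the single source of truth means the PRISMA counts, the exclusion reasons and your audit trail can never contradict each other.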
Evaluate and weigh your clinical data with a grading system
There are various methods to appraise and weigh clinical data. Appendix F of IMDRF MDCE WG/N56 FINAL:2019 describes a grading system in two tables that we find pragmatic and recommend following. Otherwise, you can define your own criteria and assessment method, or rely on other existing approaches such as the ACC/AHA recommendation system proposed by the American College of Cardiology (ACC) and the American Heart Association (AHA).
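Whichever grading approach you choose, encoding it as a simple lookup keeps the weighting consistent across all evaluated studies. The levels below loosely follow a common evidence hierarchy and are not a reproduction of the IMDRF Appendix F tables or the ACC/AHA system.

```python
# Illustrative evidence-level lookup (lower number = stronger evidence).
# These levels sketch a common hierarchy; they are NOT the IMDRF
# MDCE WG/N56 Appendix F tables or the ACC/AHA recommendation classes.
EVIDENCE_LEVEL = {
    "systematic review": 1,
    "randomized controlled trial": 2,
    "prospective cohort study": 3,
    "case-control study": 4,
    "case report": 5,
}

def grade(study_type):
    """Return the evidence level for a study type; unknown types rank last."""
    return EVIDENCE_LEVEL.get(study_type.lower(), max(EVIDENCE_LEVEL.values()) + 1)

print(grade("Randomized Controlled Trial"))  # 2
```

The point is less the particular numbers than having one documented mapping that every reviewer applies identically, so the same study type never receives two different weights.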
Summarise data effectively
Auditors will always appreciate a table over prose descriptions. So turn unstructured content into structured, easy-to-follow content and use tables wherever possible, e.g., for data comparison. Also, when you write about clinical data, summarise what your included studies showed, how this relates to your medical device and which benefits those studies reported. Is your medical device expected to have the same benefits? Which risks were mentioned in those studies, and do those risks apply to your medical device? In short, make the link between the outcomes defined in your Clinical Evaluation Plan, how those outcomes were evaluated in the identified studies, and what benefits, risks and performance were concluded.
Lastly, if you are using our template for your Clinical Evaluation Report, there are three sections you should focus most of your time on: Clinical Background, Current Knowledge, State of the Art; Literature Search; and Clinical Data. We recommend starting with Clinical Background, Current Knowledge, State of the Art, especially when there is a clear medical indication. Then continue with the Literature Search, followed by the Clinical Data, and finalise the rest of the document. For the clinical studies identified in the scientific literature, consider assessing them against all criteria at once to avoid reading a publication multiple times: device names assessed by the authors, relevance based on the literature appraisal criteria, level of evidence, tendency, comparability, performance, safety, and clinical information such as patients, study design and measured outcomes. You can organise this information in tables. Also, I would write the Clinical Evaluation Plan first, but not try to complete it immediately; I would rather come back and refine it after finishing the Clinical Evaluation Report, because by then I will know exactly how I identified the relevant publications, the potential safety issues based on those publications, and the performance claims.