We’re developing machine learning-based software as a medical device. We’ve learned that once you’ve got a certified device on the market, you need approval from your notified body before releasing significant changes. Now our question is: How does this relate to our machine learning model? Can we update it? Is that a significant change?
Notified bodies currently classify a machine learning model update as a significant change. You need to re-do the verification of your new model (e.g. run it against a test set and document the results). Models that change their weights during deployment (active / continuous learning) are currently not allowed.
Regulation and notified bodies are only slowly catching up with machine learning-based software. There’s no official standard or guidance so far (is that a good thing?), so my answer is entirely based on my experience with auditors.
An update of your machine learning model, i.e. an update of its weights, is seen as a significant change. This generally makes sense because it changes the “performance” of your medical device. You need to re-do the verification of your ML model and send that documentation (along with everything else) to your notified body.
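To make the verification step concrete, here is a minimal sketch of what “re-do the verification” can look like in practice: evaluate the updated model’s predictions on a frozen test set and check the results against predefined acceptance criteria from your verification plan. All names, numbers, and thresholds below are made up for illustration; your actual metrics and criteria come from your own documentation.

```python
# Hypothetical verification sketch: compare the updated model's predictions
# on a frozen test set against predefined acceptance criteria.

def sensitivity(y_true, y_pred):
    # True-positive rate: share of positives the model caught
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn)

def specificity(y_true, y_pred):
    # True-negative rate: share of negatives the model correctly rejected
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tn / (tn + fp)

# Frozen test set labels and the updated model's predictions (dummy data)
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]

results = {
    "sensitivity": sensitivity(y_true, y_pred),
    "specificity": specificity(y_true, y_pred),
}

# Acceptance criteria from the verification plan (illustrative thresholds)
assert results["sensitivity"] >= 0.7, "sensitivity below acceptance criterion"
assert results["specificity"] >= 0.7, "specificity below acceptance criterion"
print(results)
```

The output of such a script (metrics plus pass/fail against the criteria) is exactly the kind of record you’d attach to the documentation sent to your notified body.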
Generally speaking, getting approval for this sort of change shouldn’t be hard: your ML model architecture stays the same, and the performance has probably improved (why else would you update the model?).
If you change the architecture of your ML model, you may face more scrutiny. Be prepared to show that the decision made sense, that performance hasn’t gotten worse, and that the risk profile of your software hasn’t gotten worse either.
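The “performance hasn’t gotten worse” argument can be backed by a simple non-regression check: measure the new model on the same frozen test set as the approved model and compare metric by metric. The metric names and values here are invented for illustration; in reality they’d come from the documented verification results of both releases.

```python
# Hypothetical non-regression check: the new model must not score worse than
# the previously approved model on any documented metric. Numbers are made up.

old_metrics = {"sensitivity": 0.90, "specificity": 0.85}  # approved release
new_metrics = {"sensitivity": 0.93, "specificity": 0.86}  # same frozen test set

regressions = {
    name: (old_metrics[name], new_metrics[name])
    for name in old_metrics
    if new_metrics[name] < old_metrics[name]
}
assert not regressions, f"Performance got worse: {regressions}"
print("No regressions:", new_metrics)
```

A table of old vs. new metrics like this, plus the statement that no metric regressed, is a compact way to make that case in your change documentation.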