Oh man, where to start... these consultants really trigger me. Before I get started, let me say this: I hope this post leads to them becoming unemployed and reallocating their working time to more productive areas of society, e.g. delivering parcels, cutting people’s hair, or collecting trash. All of these professions deliver a lot of tangible value to society. The machine learning consultants I’ve seen so far do not.
With that out of the way, let’s first have a look at the regulatory requirements for machine learning / deep learning / AI (all the same for this post) before analyzing what these people offer and why it makes no sense at all.
The regulatory requirements first! Here are the regulatory requirements for AI-based medical devices:
- None.
Okay, moving on... yes, seriously, regulators are so slow to catch on that, so far, there are no harmonized standards (you know, the ISO / IEC ones) regarding AI models in medical devices. There are only some “soft” requirements from notified bodies and some vague interpretations of how to apply IEC 62304 (from 2006! the stone age!) to AI models – yeah, that concept is about as terrible as it sounds.
The consensus so far seems to be:
- Document your model architecture and why you chose it. Also, do a quick literature review to check whether it has been used for similar purposes in the past and what the performance was (= published papers).
- Describe your training data, how you labelled it, etc.
- Describe your training performance.
- How will you deploy it?
- Drawbacks and limitations – how will it handle input data it has never seen?
If you think this sounds like writing up a paper for a scientific journal, you’re actually right! It’s very similar. The first few bullet points would somehow map to the “software planning” sections of IEC 62304 and the latter points to “verification”. A huge stretch, but, I mean, documenting these things actually makes sense.
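If it helps to make those bullet points concrete: here’s a minimal sketch of how you could collect the relevant facts in one place while you train, e.g. as a small JSON “model card” you can paste into your documentation. Every field name and value below is just an illustrative example, not a prescribed format.

```python
# Minimal sketch: collect the facts from the bullet points above in one place,
# e.g. as a JSON "model card" you can paste into your documentation.
# All field names and values are illustrative examples, not a prescribed format.
import json
from datetime import date

model_card = {
    "date": str(date.today()),
    "architecture": "ResNet-18, pretrained on ImageNet, final layer replaced",
    "why_this_architecture": "widely used for small 2D image datasets; see literature review",
    "training_data": {
        "source": "internal dataset v3, 2,400 annotated images",
        "labelling": "two radiologists, disagreements resolved by a third",
    },
    "training_performance": {"val_accuracy": 0.94, "val_auroc": 0.97},
    "deployment": "exported to TorchScript, served behind an internal REST API",
    "limitations": "not evaluated on images from scanners outside sites A and B",
}

# write it next to your training run so it ends up versioned with everything else
with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```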
And that’s exactly what our Algorithm Validation Template covers – it contains those headings, and if you fill it out completely, you’re already compliant.
That’s it.
Yes, you are already done once you fill that out.
No consultants needed.
So where do those shady ML consultants come in? The truth is that many healthcare startups haven’t come across this page yet and are contacted by those consultants before they’ve seen the light, i.e. this page (damn, Google should rank us higher!). The pitch of the shady ML consultants goes like this:
“Oh my god, you’re using PyTorch? Daaammmmnnn, that’s not ‘validated’ (whatever that means). Super problematic! But no worries, we can ‘validate it for you’ (whatever that means, again). Also, we’ll do a workshop with you (why?). Let’s schedule an initial call.”
The initial workshop is essentially about the bullet points I described above. You can read those bullet points in about one minute, while the workshop takes around 8 hours. Imagine that! So much time for so little content. As you might deduce by now, yes, those consultants thrive on making things overly complicated, because that’s the only way to stretch one minute of content into 8 hours.
The second part consists of them charging you for “validating” the tools in your AI training pipeline, like PyTorch.
Wait, what’s that? Yes, it’s technically true that all software you use for the development of your medical device needs to be validated – typically, that just entails adding it to a table of “software you use” and maybe filling out a form for it in your QMS (we’ll have an article on that in the future, fingers crossed). You can already find all our free templates for that here; they’re easy to fill out: SOP Software Validation, Software Validation List, Software Validation Form.
How much work would that be for ML tools? Maybe an afternoon if you’re doing this for the first time.
And you would be done.
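If you want a head start on that afternoon, here’s a minimal sketch of a script that prints the installed versions of the ML packages you use, ready to copy into the Software Validation List. The package names are just examples – replace them with whatever is actually in your pipeline.

```python
# Minimal sketch: print the installed versions of the ML packages you use,
# ready to copy into your Software Validation List.
# The package names below are examples – use whatever your pipeline actually uses.
from importlib.metadata import version, PackageNotFoundError

packages = ["torch", "torchvision", "numpy", "scikit-learn"]

for name in packages:
    try:
        print(f"{name}=={version(name)}")
    except PackageNotFoundError:
        print(f"{name}: not installed")
```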
What do the consultants pitch instead? They offer to do this for you and it’ll take multiple weeks. They’ll bill you by the day.
How do they spend all of this time?
Ready for something crazy?
They re-write tests for packages like PyTorch.
I kid you not. It’s like they haven’t heard of the existing test suites that accompany most large open-source projects. They’re just like “okay, so we need to ‘validate’ it, so we’ll write this comprehensive set of tests for all the functions you use during training and in production”. Yes, they literally ignore that these things are already tested and go ahead and implement them again, worse. Why worse? Because, as you might expect, they are less capable than the actual authors of PyTorch.
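For contrast, here’s a minimal sketch of what a sensible test of your own pipeline could look like: a short smoke test that exercises your training code end to end, instead of re-implementing PyTorch’s test suite. The model, shapes and thresholds are illustrative, not a prescribed setup.

```python
# Minimal sketch: a smoke test for YOUR training code, not a re-implementation
# of PyTorch's test suite. Model, shapes and thresholds are illustrative.
import torch
import torch.nn as nn


def test_training_step_reduces_loss():
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    criterion = nn.CrossEntropyLoss()

    x = torch.randn(32, 16)          # dummy batch standing in for real input data
    y = torch.randint(0, 2, (32,))   # dummy labels

    initial_loss = criterion(model(x), y).item()
    for _ in range(20):
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

    # if this fails, something in *your* pipeline is broken – not in PyTorch
    assert loss.item() < initial_loss


if __name__ == "__main__":
    test_training_step_reduces_loss()
    print("smoke test passed")
```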
So, there you have it. I hope I saved you a ton of money. If you were anywhere close to purchasing these services, take that money instead and pay your hard-working employees a nice bonus, tip your local trash collectors, hairdressers or delivery people, or donate it to the animal shelter of your choice. All of those deliver value to society. ML consultants do not.