A physician's $0.02 - The clinical relevance of FB's work is clearly stated in the blog post: "While state-of-the-art facilities today use 3 Tesla MRI machines, scanners with lower-strength magnets (1.5 Tesla, for example) are still commonly used around the world." Considering that a 1.5T MRI machine costs about $1M less than a comparable 3T model (+/- the cost of warranty, support, and installation), FB's work in this area has the potential to make a BIG positive impact on the lives of millions of patients. Which is why I will be cheering them on.
If they reproduce their results in other clinical settings, the immediate impact on patient care includes:
1) accelerating diagnosis (and treatment) for patients with traumatic brain injuries (by effectively up-scaling lower resolution scans)
2) healthcare providers in developing countries will effectively get a low-cost "upgrade" to their existing equipment
3) cancer patients in rural America could be monitored for treatment response in a setting that is closer to home (because rural communities tend to be resource-poor in terms of medical technology).
If we consider that a logical extension of their work could be to develop a compression algorithm for MRI data, then it's easy to see an even broader impact that includes: 1) connecting rural patients with high-quality radiologist services (i.e. remote MRI interpretations), and
2) decreasing the cost of long-term storage, access, and retrieval of MRI data.
On the topic of FB's issues with privacy: I agree that FB has a long way to earn my trust as a doctor and a patient. That being said, it's important to give credit where credit is due. It seems that FB gained access to the imaging data by working collaboratively with NYU on this specific project. By comparison, it's an open secret among those of us in the biomedical informatics community that over the course of many years Google Cloud has quietly gained access to the personal health information of millions of Americans. So, when it comes to privacy concerns, it's important to avoid being myopic - the concern is valid, but the primary threat may not be as obvious as it first seems.
> 2) healthcare providers in developing countries will effectively get a low-cost "upgrade" to their existing equipment
I am VERY pessimistic about this. I don't know how well you know medical equipment providers, but this will never be sold as a low-cost "upgrade" to existing machines. It will be sold with new equipment only, and with a hefty surcharge, as an option enabling higher patient throughput.
There is no real money in upgrades. Most equipment lasts only 8-10 years anyway.
Your point is well-taken. I agree that such an upgrade is unlikely to be sold as a standalone product. What is more likely to happen is that it will be included for a nominal fee as an add-on to a new purchase or service agreement.
To understand how this would work, we need to 1) understand the lifecycle of big-ticket medical equipment (ME) and 2) recognize that ME products are at the core of multiple revenue streams. The first point has to do with the renewed/refurbished market for used/last-generation ME. The second point has to do with the service agreements/warranties/support contracts that are needed in order to keep the ME operational. These factors combine to yield a sales process with multiple negotiating dimensions.
How these negotiations actually play out depends on whether you're a deep-pocketed healthcare system or not (it sucks, but it's true). If you can afford it, you'll have lots of ways to sport the latest and greatest ME without breaking the bank on any single purchase. Some of your old stuff will end up in the renewed/refurbished ME market, thereby offsetting your total cost of ownership (either directly or indirectly). Once used ME hits secondary markets, the customer profile changes: these customers are not looking to keep up with the Cleveland Clinics and Stanfords of the world. They're looking for long-term value, so reliability and longevity are top priorities - and this is where I see software "upgrades" coming into play. Some of these customers may already have one or two MRIs, while others may not. In either case, the software "upgrade" becomes a differentiator that speaks directly to the priorities of these customers.
TL;DR - Today, healthcare providers with limited financial resources (e.g. those in developing countries, rural areas) are incentivized to purchase capital equipment through "discounts" on service/support. In the future, we're likely to see software "upgrades" (such as those made possible by FB's work) bundled/leveraged as an incentive. The net effect is the same: extending the clinically useful lifespan of medical equipment (MRIs in this case) and broadening access to medical technology around the world.
A lot of people here are rightly concerned about the dangers of falsely marking something as an artifact, but let me present additional data that will hopefully sway you a little bit...
If you need an MRI or a CT of an area adjacent to orthopedic implants, you are currently 100% SOL because distortion or reflection artifacts from the metal completely destroy the imagery across a medically significant distance. There are computational filtering techniques for reducing these artifacts, but, respectfully, they are still really terrible, and close to the implants you can't see shit. All advancements in this area short of inventing new imaging physics will most likely be purely computational corrections. Consider that.
Computational filtering techniques are difficult for a good reason. In the case of CT, high density objects like metal implants produce beam hardening by preventing the low energy photons from reaching the detector. With adversarial training, you can train a network to recognize and remove the artifacts, but you won't be able to reconstruct structures for which there is no physical measurement.
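To make that concrete, here's a rough PyTorch sketch of how an adversarial penalty typically gets wired into a reconstruction loss. The tiny networks, the L1 fidelity term, and the lam weight are placeholders I made up, not the actual fastMRI setup; the point is just that the fidelity term ties the output to measured data while the critic term rewards "plausible-looking" output, which is exactly how structure with no physical measurement behind it can sneak in.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Toy stand-ins for the reconstruction network and the discriminator ("critic").
    # Architectures and hyperparameters are assumptions for illustration only.
    recon_net = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )
    critic = nn.Sequential(
        nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
    )
    opt_g = torch.optim.Adam(recon_net.parameters(), lr=1e-4)
    opt_d = torch.optim.Adam(critic.parameters(), lr=1e-4)
    bce = nn.BCEWithLogitsLoss()
    lam = 0.01  # weight of the adversarial penalty (arbitrary)

    def train_step(corrupted, clean):
        # 1) critic learns to separate artifact-free images (label 1) from reconstructions (label 0)
        with torch.no_grad():
            fake = recon_net(corrupted)
        d_real, d_fake = critic(clean), critic(fake)
        d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # 2) reconstruction net: stay faithful to the reference (L1 term) while
        #    producing images the critic can no longer flag as reconstructions
        recon = recon_net(corrupted)
        d_out = critic(recon)
        g_loss = F.l1_loss(recon, clean) + lam * bce(d_out, torch.ones_like(d_out))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        return d_loss.item(), g_loss.item()

    # usage: train_step(batch_of_artifact_images, batch_of_clean_images), shapes (B, 1, H, W)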
There were similar discussions a few years ago when deep learning was not commonly used yet and compressed sensing was the hot topic of the moment. It can reconstruct MRI or CT images from limited data (and thus allows for quick MR scans or low-dose CT), but you have to satisfy a sparsity condition that is seldom granted. There are a few use cases (like MR angiography) where the data is sparse enough and compressed sensing works great.
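To illustrate what that sparsity condition buys you, here's a toy numpy sketch of the classic L1-regularized reconstruction (ISTA) from randomly undersampled k-space, assuming the image itself is sparse, roughly the MR angiography situation. The phantom, sampling fraction, and regularization weight are arbitrary choices for illustration, not anything from a real protocol.

    import numpy as np

    def soft_threshold(x, t):
        # complex soft-thresholding: shrink magnitudes toward zero by t
        return np.maximum(np.abs(x) - t, 0) * np.exp(1j * np.angle(x))

    def cs_reconstruct(kspace, mask, lam=0.05, n_iter=200):
        """ISTA for min_x 0.5*||M F x - y||^2 + lam*||x||_1, sparsity in the image domain."""
        x = np.zeros(kspace.shape, dtype=complex)
        for _ in range(n_iter):
            # gradient step on the data-fidelity term (orthonormal FFT, so a unit step is safe)
            residual = mask * np.fft.fft2(x, norm="ortho") - kspace
            x = x - np.fft.ifft2(mask * residual, norm="ortho")
            # proximal step: enforce sparsity
            x = soft_threshold(x, lam)
        return x

    # Toy "angiogram": a handful of bright pixels, sampled at ~25% of k-space
    rng = np.random.default_rng(0)
    truth = np.zeros((64, 64))
    truth[rng.integers(0, 64, 40), rng.integers(0, 64, 40)] = 1.0
    mask = (rng.random((64, 64)) < 0.25).astype(float)
    kspace = mask * np.fft.fft2(truth, norm="ortho")

    recon = np.abs(cs_reconstruct(kspace, mask))
    zero_filled = np.abs(np.fft.ifft2(kspace, norm="ortho"))
    print("zero-filled error:", np.linalg.norm(zero_filled - truth) / np.linalg.norm(truth))
    print("CS (ISTA) error:  ", np.linalg.norm(recon - truth) / np.linalg.norm(truth))

If the image isn't actually sparse in the chosen basis, the same machinery simply stops working, which is the caveat above.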
For deep learning techniques, you need to be very cautious about which structures your network may remove or introduce.
This _is_ computational filtering. It's not philosophically any different. Every filtering method algorithmically guesses what's important or what's real and what's not.
I disagree. I think techniques that work by attempting to model physical processes we understand are philosophically different from ML approaches that learn arbitrary functions.
I agree with your earlier point, but disputing the usage of the term computational filtering here is truly pedantic. Yes, by definition machine learning approaches are a subset of computational approaches, but there are clear differences in terms of (at least) failure modes between machine learning and other techniques. In context, "non-machine-learning based filtering methods" is what was being referred to.
Importantly, the internals of non-machine-learning based approaches are more readily understandable and their output is much more predictable.
I'm no fan of this. What if it treats a tumor as an artifact? This reminds me of the Xerox scandal about broken OCR that erroneously deduplicated parts of images that had different contents.
This module might work well, but the modules by cheap competitors might have such behaviour, and it's extremely hard to test that an implementation is bug free.
The Xerox OCR problem is exactly what came to mind after reading the first few sentences of the article. And that problem happened well after the times when OCR of standard text had been considered a "difficult" problem. That said, I'm not against using this sort of development, I just think it needs to be treated with skepticism and constantly evaluated. If deployed widely, some percentage of scans should always be evaluated from a QA perspective, to remain vigilant about misclassification, drift, etc.
The Xerox scanners had a setting to disable the compression as well; people are lazy and don't change the defaults. Although they are highly skilled, radiologists don't have time to inspect each image, so why bother looking at the raw originals?
The question is rather: does this feature improve diagnoses? Sure, the images look nicer now. But that's not why they are being created. MRI images are made for inspection by trained radiologists who are already filtering out artifacts. So is this tool better at this job, or does it actually worsen the ability of the radiologists to read the images, like those Xerox scans?
Maybe I'm a bit paranoid, idk. After all, diffusion MRI is already being used for surgical planning even though it has several shortcomings. But in that instance there are probably no good alternatives, while here the alternative is the trained eye of a radiologist.
It gets even worse than that sometimes. For example, I remember a study from back when digital X-ray was getting going, where radiologists were asked to say which processing they liked better (since none of them looked quite like the very non-linear film versions) and were scored on performance.
They didn't perform best on the types they liked best. This wasn't a great study in terms of power, but it was interesting.
I've met plenty of rad-oncs and radiologists who are convinced they can "read through the noise" just fine, and want consistent imaging more than artifact reduction. I'm not sure whether this has ever been tested empirically.
Digital and computed radiography are quite poor examples of progress though, as the resolution was worse and the radiation dose higher than with film radiography. This may have changed in the last few years but was strikingly true at the outset.
The advantages they gave were in every other way (physical storage, availability, duplication, speed at which they could be accessed etc).
The point I was trying to make has nothing to do with image quality.
The issue was that radiologists had to deal with a choice of different post-processing of this data. The processing they said they liked best (somewhat consistently) was not the processing that they performed best on, empirically (somewhat consistently).
This is related to the issue of evaluating the value of ML post-processing; we could see a similar effect there. After all, one school of thought was that preference was in some sense driven by familiarity rather than by what they were actually able to discriminate.
FWIW, image-quality (IQ) evaluation in MRI is a somewhat problematic thing anyway, but acceleration certainly tends to make it worse in some ways. It's not obvious how effective various mitigation approaches are.
Thanks - I missed your point.
Image quality in MR is very much a moving target too, as it varies between patients and there is a fair bit of variation in practice. Scans are sped up or slowed down for a variety of reasons. Making a scan faster to fit in another patient, or any number of other reasons, is something that happens regularly.
Not really. This is because the idea is to aid with acceleration, in which case the "untouched originals" were never taken; they are dealing with the impact of not gathering all the data in the frequency domain in the first place, and this is a trade-off between "here is the image with artifacts" and "here is the result of an artifact-correction algorithm".
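A tiny numpy demo of that point: retrospectively throw away phase-encode lines and the "original" simply no longer exists; a plain zero-filled reconstruction of what's left shows coherent wraparound ghosting, and anything fancier is some flavor of artifact correction. The phantom and 4x acceleration factor are made up for illustration.

    import numpy as np

    # Disk phantom standing in for a fully sampled magnitude image (made up, not real data)
    n = 128
    yy, xx = np.mgrid[0:n, 0:n]
    phantom = (((xx - n / 2) ** 2 + (yy - n / 2) ** 2) < (n / 3) ** 2).astype(float)

    # "Acquire" only every 4th phase-encode line of k-space (4x acceleration)
    kspace_full = np.fft.fftshift(np.fft.fft2(phantom))
    mask = np.zeros((n, n))
    mask[::4, :] = 1.0                      # 25% of lines kept; the rest are never measured
    kspace_accel = kspace_full * mask

    # Zero-filled reconstruction: missing lines are treated as zero, which folds
    # shifted copies of the object back onto itself (coherent ghosting/aliasing)
    recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace_accel)))

    print("fraction of k-space acquired:", mask.mean())
    print("zero-filled recon peak vs original peak:", recon.max(), phantom.max())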
Please don't make sweeping, generalizing opinions on the implications of the work. It's a subjective problem to solve, so if you are not a radiologist who has first-hand experience with this issue, stop.
Here are the results from the paper:
The radiologists ranked our adversarial approach as better than the standard and dithering approaches with an average rank of 2.83 out of a possible 3. This result is statistically significantly better than either alternative with p-values 1.09 × 10^-11 and 2.18 × 10^-11 respectively, and the adversarial approach was ranked as the best or tied for best in 85.8% of 120 total evaluations (95% CI: 0.78-0.91). The dithering approach is also statistically significantly better than the standard approach.
We also asked radiologists if banding was present (in any form) in the reconstructions in each case. This evaluation is highly subjective, as "banding" is hard to define in a precise enough way to ensure consistency between evaluators. Considering each radiologist's evaluation independently, on average banding is still reported to be present in 72.5% (95% CI: 0.62-0.82) of cases even with the adversarial learning penalty. The radiologists were not consistent in their rankings; the overall percentages reported by the six radiologists were 20%, 75%, 75%, 80%, 85%, and 100% for the adversarial reconstructions. In contrast, for the baseline and dithered reconstructions, only one radiologist reported less than 100% presence of banding for each method (80% and 85% presence respectively, from different radiologists).
We believe these numbers could be improved if more tuning went into the model; however, it's also possible that features of the sub-sampled reconstructions generally may be confused with banding, and so any method using sub-sampling might be considered by radiologists as having banding. Sub-sampled reconstructions generally have cleaner regional boundaries and lower noise levels than the corresponding ground-truth.
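As a sanity check on those numbers, a plain Wilson score interval for "best or tied for best in 85.8% of 120 evaluations" (about 103 of 120) comes out at roughly 0.78-0.91, matching the quoted CI; whether that is actually how the authors computed it is my assumption, not something stated in the excerpt.

    import math

    def wilson_ci(successes, n, z=1.96):
        """Wilson score 95% confidence interval for a binomial proportion."""
        p_hat = successes / n
        denom = 1 + z**2 / n
        center = (p_hat + z**2 / (2 * n)) / denom
        margin = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
        return center - margin, center + margin

    # 85.8% of 120 evaluations is about 103 "best or tied for best"
    print(wilson_ci(103, 120))   # roughly (0.78, 0.91)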
Intuitively I don't see that there's much value in asking radiologists to subjectively "rank" the images. Surely the thing that needs to be tested here is patient outcomes?
That needs to be tested eventually - there's a reason we go from petri dish testing to animal testing to human testing with medicine; it stands to reason that medical tools should follow similar stages.
Even without anything fancy, is there a speed vs clarity parameter(s) when doing an MRI? It seems an easy improvement would be to spend more time getting a clear picture of the specific area of interest, vs now where the whole scan seems to be done at full clarity.
It's worth thinking of an MRI as a programmable machine for doing certain types of physics experiments.
Sometimes you have an area of interest, sometimes you don't. A lot of the practical work (i.e. clinical-level, not research) on specific areas of interest is still in coil design, since body coils often don't do well.
There are all sorts of things that make it difficult (e.g. imaging is in the frequency domain, localizing things with gradients can be time-consuming in ways not directly related to clarity, etc.)
This sort of thing is addressing issues that come up with acceleration techniques that rely on redundancy in the sampled space to "cheat" and not capture everything. The obvious concern with a ML approach here is that it may replace something interesting with something more normal.
I'd hate to be the one tasked with V&V for this, honestly.
Yes, definitely true for many artifacts! Although due to Nyquist, ghosting artifacts sometimes require you to increase the field of view.
What bothers me here is when the artifacts hide underlying pathology, and these algorithms "learn" what a normal knee MRI looks like and just show you that. IMO it is a medical liability that must be addressed.
Yeah, I'm worried how any automatic correction which is not completely specified can be used in medical imaging. We sometimes fail to even compress images correctly (remember the scanners changing numbers due to compression?), so trying to automatically remove artefacts sounds dangerous. We already teach doctors about the artefacts and how to handle them. The image doesn't need to be pretty - just functional.
This is mostly handled by MR techs and it is their job to sort this out. Many of the automated tasks are pretty good, and those that aren't get rejected fast. We don't tend to get a new sequence/tool/parameter and just run with it; it's used alongside the old one until a degree of trust and understanding is established. I'm an MR tech slacking off.
> It seems an easy improvement would be to spend more time getting a clear picture of the specific area of interest, vs now where the whole scan seems to be done at full clarity.
This is exactly what is done already.
Every method one can name for reducing scan times is used, and some we can't name are used too. Speed nearly always comes at the expense of quality, although some acceleration techniques and tech developments have led to improvements that are pretty much without time penalty. These include signal digitisation at the coil and other methods of getting more for less (note that this equation doesn't include money!).
Yes, although currently scans are typically done at "full clarity" following a "standard" clinical protocol that is the same for everyone. It's generally thought that in the future the field will move towards scans that are more tailored to each particular patient.
Agreed. However the cost of getting to the scanner, getting on and off it, administration and reporting etc needs to be factored in. If you can remove a potential patient recall or repeat scan by doing an extra 3 minute sequence, it’s worth doing.
No thanks. If it can remove artifacts, it can also introduce them. Nobody should be using this on patients. This is a straightforward misapplication of AI.
Facebook has absolutely no reason to be doing work with healthcare. Sure, they have great computing power and top engineering talent to figure out how to sell more ads, but for any educational facility to freely hand over medical data (de-identified or not) in that trade-off is reckless.
Wait till you find out about the Google / Ascension partnership.
I trust seasoned talent being paid hundreds of thousands of dollars a year, in partnership with equally well-paid healthcare professionals, over PhD students scraping by on grant dollars, keeping their code and datasets in a private GitHub repo that will never see the light of day except for a citation in other scholars' research papers.
Not trying to be mean, but if Facebook is trying to fix their moral compass with dollars, go for it.
The tl;dr (in microscopy but apparently also in MRI) is that AI imaging can evidently enable new concrete solutions to intractable imaging problems, but the failure modes are really treacherous. The example on slide 39, taken from another excellent review paper, does a great job illustrating the problem. I think these methods will get more trustworthy, but I wouldn't stake my life (or my paper's prestigious research results) on them at the moment.
I went to a medical imaging workshop recently, and the consensus was that deep learning approaches will completely replace classical compressed sensing. They are using the same principles of acquiring randomized samples, so it's still compressed sensing; they just produce dramatically better results than classical CS techniques.
> They are using the same principles of acquiring randomized samples, so it's still compressed sensing,
See the second part of my comment. This is only true in principle. In practice, compressed sensing uses a higher-frequency basis and, more importantly, this basis is generally not learned, preventing common-case bias. I.e., a rare condition won't be ignored because it isn't statistically common enough for the NN model to learn.