
FastMRI leverages adversarial training to remove image artifacts

95 points | olibaw | 6 years ago | ai.facebook.com

52 comments


DataDrivenMD|6 years ago

A physician's $0.02 - The clinical relevance of FB's work is clearly stated in the blog post: "While state-of-the-art facilities today use 3 Tesla MRI machines, scanners with lower-strength magnets (1.5 Tesla, for example) are still commonly used around the world." Considering that a 1.5T MRI machine costs about $1M less than a comparable 3T model (+/- the cost of warranty, support, and installation), FB's work in this area has the potential to make a BIG positive impact on the lives of millions of patients. Which is why I will be cheering them on.

If they reproduce their results in other clinical settings, the immediate impact on patient care includes: 1) accelerating diagnosis (and treatment) for patients with traumatic brain injuries (by effectively up-scaling lower resolution scans) 2) healthcare providers in developing countries will effectively get a low-cost "upgrade" to their existing equipment 3) cancer patients in rural America could be monitored for treatment response in a setting that is closer to home (because rural communities tend to be resource-poor in terms of medical technology).

If we consider that a logical extension of their work could be to develop a compression algorithm for MRI data, then it's easy to see an even broader impact that includes: 1) connecting rural patients with high-quality radiologist services (i.e. remote MRI interpretations), and 2) decreasing the cost of long-term storage, access, and retrieval for MRI data.

On the topic of FB's issues with privacy: I agree that FB has a long way to go to earn my trust as a doctor and a patient. That being said, it's important to give credit where credit is due. It seems that FB gained access to the imaging data by working collaboratively with NYU on this specific project. By comparison, it's an open secret among those of us in the biomedical informatics community that over the course of many years Google Cloud has quietly gained access to the personal health information of millions of Americans. So, when it comes to privacy concerns, it's important to avoid being myopic - the concern is valid, but the primary threat may not be as obvious as it first seems.

orr721|6 years ago

> 2) healthcare providers in developing countries will effectively get a low-cost "upgrade" to their existing equipment

I am VERY pessimistic about this. I don't know how well you know medical equipment providers but this will never be sold as a low-cost "upgrade" to existing machines. It will be sold with new equipment only and with a hefty surcharge as an option enabling higher patient throughput.

There is no real money in upgrades. Most equipment lasts only 8-10 years anyway.

ebg13|6 years ago

A lot of people here are rightly concerned about the dangers of falsely marking something as an artifact, but let me present additional data that will hopefully sway you a little bit...

If you need an MRI or a CT of an area adjacent to orthopedic implants, you are currently 100% SOL because distortion or reflection artifacts from the metal completely destroy the imagery across a medically significant distance. There are computational filtering techniques for reducing these artifacts, but, respectfully, they are still really terrible, and close to the implants you can't see shit. All advancements in this area short of inventing new imaging physics will most likely be purely computational corrections. Consider that.

efournie|6 years ago

Computational filtering techniques are difficult for a good reason. In the case of CT, high density objects like metal implants produce beam hardening by preventing the low energy photons from reaching the detector. With adversarial training, you can train a network to recognize and remove the artifacts, but you won't be able to reconstruct structures for which there is no physical measurement.
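To illustrate the beam-hardening point (this is a toy model I'm adding, not anything from the paper, and the attenuation coefficients are invented for illustration): with a polychromatic beam, thick dense material preferentially filters out the low-energy photons, so the apparent attenuation coefficient drops with thickness and the measurement becomes nonlinear.

```python
import math

# Toy two-energy X-ray beam. A real spectrum is continuous; these
# weights and attenuation coefficients are made-up illustration values.
W_LO, W_HI = 0.5, 0.5      # fraction of photons at each energy
MU_LO, MU_HI = 3.0, 0.5    # attenuation (1/cm); low-energy photons absorb more

def effective_mu(thickness_cm):
    """Apparent attenuation coefficient inferred from a polychromatic beam."""
    transmitted = (W_LO * math.exp(-MU_LO * thickness_cm)
                   + W_HI * math.exp(-MU_HI * thickness_cm))
    return -math.log(transmitted) / thickness_cm

# Beam hardening: soft photons are filtered out first, so the apparent
# mu falls as the path through the object gets longer -- the low-energy
# signal behind the implant is physically gone, not just noisy.
print(effective_mu(0.1))  # near the mixture average (~1.7/cm)
print(effective_mu(5.0))  # approaches MU_HI as only hard photons survive
```

No network, however trained, can restore the information those absorbed photons would have carried; it can only guess.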

There were similar discussions a few year ago when deep learning was not commonly used yet and compressed sensing was the hot topic of the moment. It can reconstruct MRI or CT images from limited data (and thus allows for quick MR scans or low dose CT) but you have to satisfy a sparsity condition that is seldom granted. There are a few use cases (like MR angiography) where the data is sparse enough and compressed sensing works great.
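The sparsity condition can be made concrete with a toy l1 reconstruction (my sketch, not the paper's method; the sizes, seed, and ISTA solver are arbitrary illustration choices): a genuinely sparse signal is recovered from far fewer random measurements than unknowns, which is exactly what fails when the sparsity assumption doesn't hold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Recover a k-sparse signal from m < n random measurements via ISTA
# (iterative soft-thresholding), i.e. classical compressed sensing.
n, m, k = 200, 80, 5
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.standard_normal(k) + np.sign(rng.standard_normal(k)) * 2

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
y = A @ x_true                                # undersampled measurements

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L for the quadratic term
x = np.zeros(n)
for _ in range(3000):
    z = x - step * (A.T @ (A @ x - y))                        # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)  # small only because x_true really is sparse
```

Replace `x_true` with a dense vector and the same pipeline fails badly, which is the "sparsity condition that is seldom granted" in clinical images.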

For deep learning techniques, you need to be very cautious about which structures your network may remove or introduce.

vardump|6 years ago

I think I'd prefer radiologists use both computational filtering and this. Computational filtering has also advanced over the years.

est31|6 years ago

I'm no fan of this. What if it treats a tumor as an artifact? This reminds me of the Xerox scandal about broken OCR that erroneously deduplicated parts of images that had different contents.

This module might work well, but the modules by cheap competitors might have such behaviour, and it's extremely hard to test that an implementation is bug free.

ineedasername|6 years ago

The Xerox OCR problem is exactly what came to mind after reading the first few sentences of the article. And that problem happened well after the time when OCR of standard text had been considered a "difficult" problem. That said, I'm not against using this sort of development, I just think it needs to be treated with skepticism and constantly evaluated. If deployed widely, some percentage of scans should always be evaluated from a QA perspective to stay vigilant about misclassification, drift, etc.

yboris|6 years ago

What if doctors get both the untouched originals and the images with the artifacts removed? That seems like it would solve the problem you're concerned about.

Brakenshire|6 years ago

Is it extremely difficult? I’d have thought quantifying the error rate for a particular application would be relatively easy.

mikeortman|6 years ago

Please don’t make sweeping, generalized claims about the implications of the work. It’s a subjective problem to solve, so if you are not a radiologist with first-hand experience of this issue, stop.

Here are the results from the paper:

The radiologists ranked our adversarial approach as better than the standard and dithering approaches with an average rank of 2.83 out of a possible 3. This result is statistically significantly better than either alternative with p-values 1.09 × 10^-11 and 2.18 × 10^-11 respectively, and the adversarial approach was ranked as the best or tied for best in 85.8% of 120 total evaluations (95% CI: 0.78-0.91). The dithering approach is also statistically significantly better than the standard approach.

We also asked radiologists if banding was present (in any form) in the reconstructions in each case. This evaluation is highly subjective, as "banding" is hard to define in a precise enough way to ensure consistency between evaluators. Considering each radiologist's evaluation independently, on average banding is still reported to be present in 72.5% (95% CI: 0.62-0.82) of cases even with the adversarial learning penalty. The radiologists were not consistent in their rankings; the overall percentages reported by the six radiologists were 20%, 75%, 75%, 80%, 85%, and 100% for the adversarial reconstructions. In contrast, for the baseline and dithered reconstructions, only one radiologist reported less than 100% presence of banding for each method (80% and 85% presence respectively, from different radiologists).

We believe these numbers could be improved if more tuning went into the model; however, it's also possible that features of the sub-sampled reconstructions generally may be confused with banding, and so any method using sub-sampling might be considered by radiologists as having banding. Sub-sampled reconstructions generally have cleaner regional boundaries and lower noise levels than the corresponding ground-truth.
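As a sanity check on those numbers (my own calculation, and the choice of a Wilson score interval is an assumption, since the paper excerpt doesn't say which interval was used): 85.8% of 120 evaluations is 103 successes, and a Wilson interval reproduces the quoted 95% CI.

```python
import math

def wilson_ci(successes, trials, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / trials
    denom = 1 + z * z / trials
    center = (p + z * z / (2 * trials)) / denom
    half = z * math.sqrt(p * (1 - p) / trials
                         + z * z / (4 * trials * trials)) / denom
    return center - half, center + half

# 103/120 ~= 85.8% of evaluations ranked the adversarial approach
# best or tied for best.
lo, hi = wilson_ci(103, 120)
print(round(lo, 2), round(hi, 2))  # -> 0.78 0.91, matching the quoted CI
```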

p1necone|6 years ago

Intuitively I don't see that there's much value in asking radiologists to subjectively "rank" the images. Surely the thing that needs to be tested here is patient outcomes?

mustachionut|6 years ago

Even without anything fancy, is there a speed vs clarity parameter(s) when doing an MRI? It seems an easy improvement would be to spend more time getting a clear picture of the specific area of interest, vs now where the whole scan seems to be done at full clarity.

ska|6 years ago

Worse, there is a whole family of parameters.

It's worth thinking of an MRI as a programmable machine for doing certain types of physics experiments.

Sometimes you have an area of interest, sometimes you don't. A lot of the practical work (i.e. clinical-level, not research) on specific areas of interest is still in coil design, since body coils often don't do well.

There are all sorts of things that make it difficult (e.g. imaging is in frequency domain, localizing things with gradients can be time consuming in ways not entirely directly related to clarity, etc.)

This sort of thing is addressing issues that come up with acceleration techniques that rely on redundancy in the sampled space to "cheat" and not capture everything. The obvious concern with a ML approach here is that it may replace something interesting with something more normal.

I'd hate to be the one tasked with V&V for this, honestly.

throwaway4220|6 years ago

Yes, definitely true for many artifacts! Although due to Nyquist, ghosting artifacts sometimes require you to increase the field of view.

What bothers me here is when the artifacts hide underlying pathology, and these algorithms "learn" what a normal knee MRI looks like and just show you that. IMO it is a medical liability that must be addressed.
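The Nyquist ghosting mentioned above can be sketched in a 1D toy (my illustration; the sizes are arbitrary): dropping every other k-space line halves the effective field of view, and structure outside it wraps around as a ghost shifted by half the image.

```python
import numpy as np

# Toy 1D "k-space" experiment: acquiring only every other phase-encode
# line violates Nyquist for the full FOV and creates a ghost at N/2.
N = 64
img = np.zeros(N)
img[10] = 1.0                   # a single bright "structure"

kspace = np.fft.fft(img)
kspace[1::2] = 0.0              # keep only even k-space lines

recon = np.fft.ifft(kspace).real
print(round(recon[10], 3))           # 0.5 -> the structure, at half intensity
print(round(recon[10 + N // 2], 3))  # 0.5 -> its Nyquist ghost, FOV/2 away
```

A network trained to "clean up" such images has to decide which of the two copies is real, and a wrong guess is exactly the liability being discussed.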

lostlogin|6 years ago

> It seems an easy improvement would be to spend more time getting a clear picture of the specific area of interest, vs now where the whole scan seems to be done at full clarity.

This is exactly what is done already.

Every method one can name for reducing scan times is used, and some we can’t name are used too. Speed nearly always comes at the expense of quality, although some acceleration techniques and tech developments have led to improvements that are pretty much without time penalty. These include signal digitisation at the coil and other methods of getting more for less (note that this equation doesn’t include money!).

zn473|6 years ago

Yes, although currently scans are typically done at "full clarity" following a "standard" clinical protocol that is the same for everyone. It's generally thought that in the future the field will move towards scans that are more tailored to each particular patient.

lvs|6 years ago

No thanks. If it can remove artifacts, it can also introduce them. Nobody should be using this on patients. This is a straightforward misapplication of AI.

throwlaplace|6 years ago

Isn't this basically SRGAN?

Edit: sorry I guess since there's an explicit rotation module it's closer to SRGAN+deformable convolutions.

zn473|6 years ago

The adversary in this work never sees non-reconstructed images, so it looks like it's completely unrelated to SRGAN.

voicedYoda|6 years ago

Facebook has absolutely no reason to be doing work in healthcare. Sure, they have great computing power and top engineering talent for figuring out how to sell more ads, but for any educational institution to freely hand over medical data (de-identified or not) is reckless.

mikeortman|6 years ago

Wait till you find out about the Google / Ascension partnership.

I trust seasoned talent being paid hundreds of thousands of dollars a year in partnership with equally well paid healthcare professionals over PhD students scraping by on grant dollars, keeping their code and datasets in a private GitHub repo that will never see the light of day except as a citation in other scholars' research papers.

Not trying to be mean, but if Facebook is trying to fix their moral compass with dollars, go for it.

nradov|6 years ago

What exactly is the concern with de-identified medical data? This is common practice in medical research and explicitly allowed under federal law.

heyitsguay|6 years ago

I just gave a talk on similar work being done in microscopy: https://leapmanlab.github.io/nihai/jan20/ .

The tl;dr (in microscopy but apparently also in MRI) is that AI imaging can evidently enable new, concrete solutions to intractable imaging problems, but the failure modes are really treacherous. The example on slide 39, taken from another excellent review paper, does a great job illustrating the problem. I think these methods will get more trustworthy, but I wouldn't stake my life (or my paper's prestigious research results) on them at the moment.

deepnotderp|6 years ago

This is a bad idea: neural nets upscale by "hallucinating" the details. That's fine for entertainment video, not for medical imaging.

And this is distinctly different from compressed sensing, which rests on a firm mathematical basis.

zn473|6 years ago

I went to a medical imaging workshop recently, and the consensus was that deep learning approaches will completely replace classical compressed sensing. They use the same principle of acquiring randomized samples, so it's still compressed sensing; they just produce dramatically better results than classical CS techniques.

2ndwind|6 years ago

Does anyone know how this differs from what Subtle Medical is doing? https://subtlemedical.com/

rontoes|6 years ago

Similar approach, although Subtle Medical are not using adversarial training, just plain old conv-nets with a non-adversarial loss.

Gatsky|6 years ago

Is anyone else concerned that facebook has an interest in MRI?

lokimedes|6 years ago

Could be useful for SAR and SAS imagery as well perhaps.