Hacker News

Even without anything fancy, is there a speed vs clarity parameter(s) when doing an MRI? It seems an easy improvement would be to spend more time getting a clear picture of the specific area of interest, vs now where the whole scan seems to be done at full clarity.



Worse, there is a whole family of parameters.

It's worth thinking of an MRI as a programmable machine for doing certain types of physics experiments.

Sometimes you have an area of interest, sometimes you don't. A lot of the practical work (i.e. clinical-level, not research) on specific areas of interest is still in coil design, since body coils often don't do well.

There are all sorts of things that make it difficult (e.g. imaging happens in the frequency domain, localizing things with gradients can be time-consuming in ways not directly related to clarity, etc.)
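For intuition on the frequency-domain point, here is a toy sketch of my own (nothing scanner-specific): the scanner effectively measures k-space, the 2D Fourier transform of the object, and with complete sampling the reconstruction is just an inverse FFT:

```python
import numpy as np

# Toy "anatomy": a bright rectangle on a dark background.
image = np.zeros((64, 64))
image[24:40, 20:44] = 1.0

# What the scanner effectively measures: k-space, the 2D Fourier
# transform of the object (gradients walk the machine through it).
kspace = np.fft.fft2(image)

# With fully sampled k-space, reconstruction is just an inverse FFT.
recon = np.abs(np.fft.ifft2(kspace))
print(np.allclose(recon, image))  # True
```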

This sort of thing is addressing issues that come up with acceleration techniques that rely on redundancy in the sampled space to "cheat" and not capture everything. The obvious concern with a ML approach here is that it may replace something interesting with something more normal.
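A toy illustration of that redundancy (my own sketch; real acceleration schemes like parallel imaging or compressed sensing are far more principled than picking coefficients after the fact): for a smooth object, most of the k-space energy sits in a small fraction of the samples, which is exactly what these techniques exploit, and also why a learned reconstruction can plausibly "fill in" the rest with something normal-looking:

```python
import numpy as np

# Smooth toy object: a Gaussian blob.
y, x = np.mgrid[0:64, 0:64]
image = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 6.0 ** 2))

kspace = np.fft.fft2(image)

# Keep only the 10% largest-magnitude k-space samples, zero the rest.
# (Cheating: a scanner can't pick samples by magnitude a posteriori --
# this just shows how compressible the data is.)
thresh = np.quantile(np.abs(kspace), 0.90)
sparse_k = np.where(np.abs(kspace) >= thresh, kspace, 0)

recon = np.real(np.fft.ifft2(sparse_k))
rel_err = np.linalg.norm(recon - image) / np.linalg.norm(image)
print(rel_err < 0.01)  # True: 10% of the samples carry nearly all the energy
```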

I'd hate to be the one tasked with V&V for this, honestly.


Yes, definitely true for many artifacts! Although due to Nyquist, ghosting artifacts sometimes require you to increase the field of view.
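A minimal sketch of that Nyquist point (my own toy example, not any vendor's implementation): the spacing of k-space lines sets the field of view, so sampling every other phase-encode line halves the FOV and the object folds back onto itself:

```python
import numpy as np

# Object sitting off-center in a 64x64 field of view.
image = np.zeros((64, 64))
image[8:24, 20:44] = 1.0

kspace = np.fft.fft2(image)

# Undersample: keep every other k-space row (phase-encode direction).
# Doubling the line spacing halves the FOV along that axis.
recon = np.abs(np.fft.ifft2(kspace[::2, :]))  # shape (32, 64)

# The half-FOV image is the full image folded onto itself: wraparound
# (ghosting-style) aliasing. Increasing the FOV, i.e. finer k-space
# spacing, pushes the wrapped copy back out of the object.
print(np.allclose(recon, image[:32] + image[32:]))  # True
```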

What bothers me here is when the artifacts hide underlying pathology, and these algorithms "learn" what a normal knee MRI looks like and just show you that. IMO it is a medical liability that must be addressed.


Yeah, I'm worried about how any automatic correction that is not completely specified can be used in medical imaging. We sometimes fail to even compress images correctly (remember the scanners changing numbers due to compression?), so trying to automatically remove artefacts sounds dangerous. We already teach doctors about the artefacts and how to handle them. The image doesn't need to be pretty - just functional.


This is mostly handled by MR techs and it is their job to sort this out. Many of the automated tools are pretty good, and those that aren't get rejected fast. We don't tend to get a new sequence/tool/parameter and just run with it; it's used alongside the old one until a degree of trust and understanding is established. I'm an MR tech slacking off.


Yeah, I would trust an MR tech's tried and tested parameters way before trusting any fancy algorithm or even a new sequence.


> It seems an easy improvement would be to spend more time getting a clear picture of the specific area of interest, vs now where the whole scan seems to be done at full clarity.

This is exactly what is done already.

Every method one can name for reducing scan times is used, and some we can't name are used too. Speed nearly always comes at the expense of quality, although some acceleration techniques and tech developments have led to improvements that come with essentially no time penalty. These include signal digitisation at the coil and other methods of getting more for less (note that this equation doesn't include money!).


Yes, although currently scans are typically done at "full clarity" following a "standard" clinical protocol that is the same for everyone. It's generally thought that in the future the field will move towards scans that are more tailored to each particular patient.


Agreed. However, the cost of getting to the scanner, getting on and off it, administration, reporting, etc. needs to be factored in. If you can avoid a potential patient recall or repeat scan by doing an extra 3-minute sequence, it's worth doing.



