What if doctors get both: the untouched originals and the images with the artifacts removed? Seems like that solves the problem you're concerned with?



The Xerox scanners had a setting to disable compression as well; people are lazy and didn't use it. Although they are highly skilled, radiologists don't have time to inspect each image twice, so why would they bother looking at the raw originals?

The question is rather: does this feature improve diagnoses? Sure, the images look nicer now. But that's not why they are being created. MRI images are made for inspection by trained radiologists, who are already filtering out artifacts. So is this tool better at that job, or does it actually worsen radiologists' ability to read the images, as with those Xerox scans?

Maybe I'm a bit paranoid, idk. After all, diffusion MRI is already being used for surgical planning even though it has several shortcomings. But in that instance there are probably no good alternatives, while here the alternative is the trained eye of a radiologist.


It gets even worse than that sometimes. For example, I remember a study from back when digital X-ray was getting going, where radiologists were asked to say which processing they liked better (since none of them looked quite like the very non-linear film versions) and were scored on performance.

They didn't perform best on the types they liked best. This wasn't a great study in terms of power, but it was interesting.

I've met plenty of rad-oncs and radiologists who are convinced they can "read through the noise" just fine, and who want consistent imaging more than artifact reduction. I'm not sure this has ever been tested empirically.


Digital and computed radiography are quite poor examples of progress though, as the resolution was worse and the radiation dose higher than with film radiography. This may have changed in the last few years but was strikingly true at the outset.

The advantages they offered were in every other respect (physical storage, availability, duplication, speed of access, etc.).


The point I was trying to make has nothing to do with image quality.

The issue was that radiologists had to deal with a choice between different post-processings of this data. The processing they said they liked best (somewhat consistently) was not the processing they performed best on empirically (somewhat consistently).

This is related to the issue of evaluating the value of ML post-processing; we could see a similar effect there. After all, one school of thought was that preference was in some sense driven by familiarity rather than by what they were actually able to discriminate.

FWIW, image-quality (IQ) evaluation in MRI is a somewhat problematic thing anyway, but acceleration certainly tends to make it worse in some ways. It's not obvious how effective the various mitigation approaches are.


Thanks - I missed your point. Image quality in MR is very much a moving target too, as it varies between patients and there is a fair bit of variation in practice. Scans are sped up or slowed down for a variety of reasons; making a scan faster to fit in another patient, for example, is something that happens regularly.


Not really. This is because the idea is to aid with acceleration, in which case the "untouched originals" were never taken: they are dealing with the impact of not gathering all the data in the frequency domain in the first place. The trade-off is between "here is the image with artifacts" and "here is the result of an artifact-correction algorithm".
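To make that concrete, here's a minimal NumPy sketch (purely illustrative, not any vendor's reconstruction pipeline) of why no "untouched original" exists for an accelerated scan: the scanner samples the frequency domain (k-space), and once phase-encode lines are skipped, even the most honest reconstruction, a zero-filled inverse FFT, already contains aliasing artifacts.

    import numpy as np

    # Toy "fully sampled" slice: a disc phantom standing in for anatomy.
    N = 256
    x, y = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N))
    image = (x**2 + y**2 < 0.5).astype(float)

    # The scanner measures k-space (the 2D frequency domain), not the image.
    kspace = np.fft.fftshift(np.fft.fft2(image))

    # 4x acceleration: acquire only every 4th phase-encode line.
    # The skipped lines are simply never measured.
    mask = np.zeros((N, N))
    mask[::4, :] = 1
    undersampled = kspace * mask

    # The most "untouched" image you can make is a zero-filled inverse FFT,
    # and it already shows coherent aliasing (ghosts), not just blur.
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))

    # Anything cleaner has to come from an artifact-correction algorithm:
    # parallel imaging, compressed sensing, or a learned reconstruction.
    print("reconstruction error:", np.linalg.norm(zero_filled - image))

Regular undersampling like this folds the image into overlapping copies spaced N/4 pixels apart, which is exactly the kind of structured artifact the correction step is asked to remove.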


So... they have to examine the originals regardless. The new image adds nothing.


It’s even worse than that: it detracts. The whole point is to save time. Running both a 10X-faster scan and the original full scan would make things slower than the original alone (T/10 + T = 1.1T).



