Light-shrinking material lets ordinary microscope see in super resolution (phys.org)
133 points by thedday on June 1, 2021 | 32 comments



This article is not that clear (there is no frequency shift occurring). As others say, the authors are using speckle imagery, which relies on the wavevector of the illuminating beam (rather than the frequency). By adding the hyperbolic metamaterial the authors can access wavevectors beyond the diffraction limit, so that once they do the appropriate post-processing they achieve super-resolution imagery.

It's not directly related, but reciprocal space and Fourier imaging are quite interesting for those who are not aware of them (such as estimating the size of a crystal lattice by looking at the diffraction pattern).
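
As a toy illustration of the lattice example (my own sketch, nothing to do with the paper): the far-field diffraction pattern of a periodic lattice is itself periodic in reciprocal space, and the spacing of the Bragg peaks gives you back the lattice period.

    import numpy as np

    N, a = 1024, 16  # detector size and lattice period, in pixels

    # A 1D "crystal": scatterers every a pixels
    lattice = np.zeros(N)
    lattice[::a] = 1.0

    # Far-field diffraction pattern ~ |Fourier transform|^2
    pattern = np.abs(np.fft.fft(lattice)) ** 2

    # Bragg peaks appear every N/a bins; their spacing recovers the period
    peaks = np.flatnonzero(pattern > pattern.max() / 2)
    print(N / np.diff(peaks)[0])  # 16.0, the lattice period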


Yes - the technique is called ptychography [0], and there have been several recent electron microscopy papers, too, demonstrating how this technique (image reconstruction from Fourier-space patterns) can reach beyond instrumental resolution limits [1, 2].

References:

[0] https://en.wikipedia.org/wiki/Ptychography

[1] 2018 Nature Paper: https://www.nature.com/articles/s41586-018-0298-5 arXiv version: https://arxiv.org/abs/1801.04630

[2] 2021 Science Paper: https://science.sciencemag.org/content/372/6544/826 arXiv version: https://arxiv.org/abs/2101.00465


Most computer folks are actually unaware that physicists were doing Fourier transforms using optics long before the FFT existed. You can do physical convolutions using lenses.
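
For reference, the maths that a lens-based (4f) correlator implements physically is just the convolution theorem. A quick numpy check (a sketch, not tied to any particular optical setup):

    import numpy as np

    rng = np.random.default_rng(0)
    f, g = rng.random(64), rng.random(64)

    # Direct circular convolution, O(n^2)
    direct = np.array([sum(f[m] * g[(n - m) % 64] for m in range(64))
                       for n in range(64)])

    # Convolution theorem: conv(f, g) = IFFT(FFT(f) * FFT(g))
    # A 4f optical setup does the two transforms with lenses instead
    via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

    print(np.allclose(direct, via_fft))  # True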


...Am I having a stroke? Aren't you just talking about a prism? It converts time domain to frequency/space domain, just like the cochlea in the ear does with mechanical time-series.


No.


Fascinating! Can you explain this a bit more and share some examples?


As it happens, I did this in a physics lab just a couple of weeks ago. The basic setup is that if you pass a collimated beam of light through a mask and then focus it with a lens, the focus will contain the 2D Fourier transform of the pattern on the mask. Adding another lens does another Fourier transform getting the original image back (but flipped). By masking appropriate sections of the Fourier transform, you can physically implement various filters — e.g. a low-pass filter becomes a mask letting through only the central portion of the Fourier transform. One I remember trying is that if you take a periodic pattern like a rectangular grid, and then pass the Fourier transform through a thin slit, you can filter out only the horizontal or only the vertical component of the grid. Pretty cool stuff.
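
If you'd rather try the grid-plus-slit experiment numerically before touching an optical bench, here's a rough numpy equivalent (my own sketch of the idea, with made-up sizes):

    import numpy as np

    N = 256
    x = np.arange(N)

    # A rectangular grid: vertical stripes OR horizontal stripes
    grid = ((x % 32 < 4)[None, :] | (x % 32 < 4)[:, None]).astype(float)

    # "First lens": 2D Fourier transform, zero frequency at the centre
    F = np.fft.fftshift(np.fft.fft2(grid))

    # Thin horizontal slit in the Fourier plane: keep only ky ~ 0,
    # i.e. the components that vary along x (the vertical stripes)
    slit = np.zeros_like(F)
    c = N // 2
    slit[c - 2:c + 3, :] = F[c - 2:c + 3, :]

    # "Second lens": transform back; only the vertical stripes survive
    filtered = np.abs(np.fft.ifft2(np.fft.ifftshift(slit)))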


Lenses bend EM waves proportionately to their frequency, so they naturally separate the different frequencies. Bam, you've described your EM source in the frequency domain.


> Lenses bend EM waves proportionately to their frequency

Is this exact? I was under the impression that it's a linear approximation that's generally good enough for optical component glasses over the range of visible wavelengths.

(I always found it a bit frustrating that in my Mechanical Engineering undergraduate classes, they almost always introduced linear approximations without any discussion about the conditions under which the approximations held. Sometimes, they didn't even mention that the linearization was an approximation.)
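
For the curious: dispersion in real glasses is usually modelled with the Sellmeier equation, which is clearly not linear. A quick check with the commonly quoted coefficients for Schott N-BK7 (numbers from memory, so treat them as illustrative):

    import numpy as np

    # Sellmeier equation: n^2(lam) = 1 + sum_i B_i lam^2 / (lam^2 - C_i),
    # with lam in micrometres. Coefficients for Schott N-BK7 glass:
    B = (1.03961212, 0.231792344, 1.01046945)
    C = (0.00600069867, 0.0200179144, 103.560653)

    def n_bk7(lam):
        lam2 = lam ** 2
        return np.sqrt(1 + sum(b * lam2 / (lam2 - c) for b, c in zip(B, C)))

    for lam in (0.40, 0.55, 0.70):  # violet, green, red
        print(f"{lam} um: n = {n_bk7(lam):.5f}")
    # roughly 1.531 at 400 nm down to 1.513 at 700 nm, and not linearly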


It depends on the faculty. Postgraduates know better.


> The wide-field of view image reconstruction takes 10 mins on a desktop computer with a GTX 1080Ti graphics card and a i7-8700k CPU to reconstruct an image with 100 by 100 raw pixels

That's an impressive amount of computation per pixel


It is, but this is also a fairly ancient system. More modern systems should be able to shrink the time needed for the computation - also, research facilities may have access to a cluster.


I looked it up. The GTX 1080 Ti is 4 years old.


Same 4-year age for the i7-8700k. It's true that it's about half as fast as a modern Ryzen 7 5800X or brand-new Intel i7-11700K, and if you could get a new Nvidia 3080 or AMD RX 6900-XT they'd have a similar doubling in speed, but it's not ancient.

Regardless, does the difference between 5 and 10 minutes for 10,000 pixels really matter? It still means that you're running on the order of a hundred thousand operations per pixel; what can you possibly need to do that requires that much processing?


It's probably doing some equivalent of solving a hard inverse problem approximately using a numerical method, likely with at least as many unknowns as the image has pixels, in a noisy domain, and with an expensive cost function for the optimization.

Not saying they are doing exactly that, but something in that realm/scale. 100k ops per pixel is really not that much in those kinds of problems.


I haven't read this paper, but extrapolating from my experience working with other super-resolution scopes: reconstruction. Instead of measuring the pixels directly, you measure some projection of them and then have to solve an inference problem to recover the image.
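
A stripped-down version of that kind of inference problem (purely illustrative, not the paper's actual algorithm): measure random projections of an unknown image, then invert with regularised least squares.

    import numpy as np

    rng = np.random.default_rng(0)
    n_pix, n_meas = 100, 400          # a 10x10 image, 400 patterned measurements

    x_true = rng.random(n_pix)        # the unknown image, flattened
    A = rng.random((n_meas, n_pix))   # known illumination/projection patterns
    y = A @ x_true + 0.01 * rng.standard_normal(n_meas)  # noisy readings

    # Tikhonov-regularised least squares: argmin ||Ax - y||^2 + lam ||x||^2
    lam = 1e-3
    x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_pix), A.T @ y)

    print(np.abs(x_hat - x_true).max())  # small reconstruction error

Real reconstructions use far bigger systems, positivity constraints, and better priors, which is where the compute budget goes.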


It's not ancient by any means, but it is a mobile GPU - I've got a GTX 1080 Ti on my Thinkpad X1 Extreme.


A 1080 Ti is not a mobile GPU. Mobile GPUs have the “m” suffix. You just happen to have a (probably underclocked) desktop GPU in your laptop.


The paper is open access and linked from the bottom of the article btw.

[PDF] https://www.nature.com/articles/s41467-021-21835-8.pdf


> The technology consists of a microscope slide that's coated with a type of light-shrinking material called a hyperbolic metamaterial. It is made up of nanometers-thin alternating layers of silver and silica glass.


The title "light-shrinking material" makes it sound like they are adding a layer of something that turns optical wavelengths into UV, but there's a mention of scattering and reconstruction that makes it sound like more might be involved.


I think you are thinking of oil immersion (microscopy) there. They are using a metamaterial as a superlens, though, which allows them to work around the diffraction limit.

Edit: they seem to be improving an already known technique called "structured illumination microscopy". For that, the sample is illuminated with a light pattern (here: a speckle pattern) and the phase of the light is shifted in the process. After collecting various images, an image of better resolution can be computed. Their improvement seems to be a particular metamaterial that allows capturing far more spatial detail than otherwise. (There's a small numerical sketch of the frequency-mixing idea after the links below.)

Links: https://en.wikipedia.org/wiki/Oil_immersion , https://en.wikipedia.org/wiki/Superlens , https://en.wikipedia.org/wiki/Plasmonic_metamaterial#Hyperbo... , https://en.wikipedia.org/wiki/Diffraction-limited_system , https://en.wikipedia.org/wiki/Super-resolution_microscopy#St...
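
Here's the frequency-mixing idea in one dimension (a minimal sketch with made-up numbers, not the paper's method): object detail beyond the passband gets mixed down into it by the structured illumination.

    import numpy as np

    N = 512
    x = np.arange(N)
    cutoff = 70                                # diffraction-limited passband

    def lowpass(sig):                          # what the microscope optics do
        F = np.fft.fft(sig)
        F[cutoff:N - cutoff] = 0
        return np.fft.ifft(F).real

    # Object detail at 90 cycles/FOV: beyond the passband, invisible directly
    obj = 1 + np.cos(2 * np.pi * 90 * x / N)
    direct = lowpass(obj)

    # Illuminate with a 40-cycle pattern: the product contains 90 - 40 = 50
    # cycles, which is inside the passband and reaches the camera
    illum = 1 + np.cos(2 * np.pi * 40 * x / N)
    mixed = lowpass(obj * illum)

    print(np.ptp(direct), np.ptp(mixed))       # ~0 vs. clearly nonzero

Shifting the illumination pattern's phase and repeating is what lets the reconstruction work out where each mixed-down component originally came from.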


> As light passes through, its wavelengths shorten and scatter to generate a series of random high-resolution speckled patterns.

Speckle allows random illumination with very fine structure. They reconstruct from several images with different speckle patterns to obtain better resolution on the object.
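
If it helps intuition, a speckle pattern is easy to fake numerically: random phases over a finite pupil, Fourier-transformed, give the characteristic grain (a sketch; in the paper the metamaterial provides the patterns).

    import numpy as np

    rng = np.random.default_rng(1)
    N, r = 256, 20                     # field size; pupil radius sets grain size

    # Random phases inside a circular pupil (aperture)...
    yy, xx = np.mgrid[:N, :N] - N // 2
    mask = xx**2 + yy**2 < r**2
    pupil = np.zeros((N, N), dtype=complex)
    pupil[mask] = np.exp(2j * np.pi * rng.random(mask.sum()))

    # ...propagated to the far field; the intensity is a speckle pattern
    speckle = np.abs(np.fft.fft2(np.fft.ifftshift(pupil))) ** 2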


Is something like this useful for chip fabrication somehow?


I was wondering the exact same thing. Wouldn't this be very helpful for EUV masks?


Yes, that's pretty similar to what is used. Each mask takes days to simulate. It is getting "old", but this great video covers it: https://youtu.be/NGFhc8R_uO4


Probably not; speckle is random, and they use several images to reconstruct the final result.


I thought they already used holography to make masks with certain repeating patterns.


You'd have to start from the speckles to illuminate it in reverse somehow!


Isn't Differential Interference Contrast able to resolve to 20 nm, or am I misremembering that?


Actually, it was Video-Enhanced DIC (VEDIC): https://en.wikipedia.org/wiki/Nanovid_microscopy

It is apparently able to resolve colloidal gold particles 20-40 nm in diameter.


Cool, and only about 90 years after Royal Raymond Rife.



