There also seems to be a bug in the VPN that requires sending all traffic when the VPN address is on a different subnet. It should be possible to manually specify the subnet mask, but it seems to be ignored. I’m not sure if the VPN is advertising the subnet incorrectly, but it worked fine before the upgrade.
“Free File Fillable Forms” is the official IRS system for doing it manually online (no form or income limits). I even messed up my return last year, and it was kicked back within an hour of upload with instructions on how to fix it.
> one thing that I was completely unsure about was how to add tests for this fix.
Similar to blaming the file to find maintainers, the diffs of those commits can direct you to their tests. The full patches those commits belong to can also be useful for finding undocumented habits that have led to approval.
Photography. A small cheap camera array could produce higher resolution, alternate angles, and arbitrary lens parameters that would otherwise require expensive or impossible lenses. Then you can render an array of angles for holographic displays.
Lytro added 2 angular dimensions of info to 2D image capture: the direction the light was traveling when it entered the camera. They could simulate the image with different camera parameters, which was good for changing depth of field after the fact, but the occlusion information was limited by the diameter of the aperture. They tried to make depth maps, but that extra data was not a silver bullet. As far as I could tell, they were still fundamentally COLMAPing; they just had extra hints to guess with.
This is spot-on. Note that the aperture on the camera was quite large, I want to say on the order of 100mm? They sourced really exotic hardware for that cinema camera.
They also had the "Immerge," which was a ~1m diameter, hexagonal array of 2D cameras. They got the 4D data from having a 2D (spatially distributed) array of 2D samples (each camera's view). It's undersampled, because they threw out most of the light, but using 3D as a prior for reconstructing missing rays is generally pretty effective.
But I also understand a lot of what they demoed at first was smoke and mirrors, plus a lot of traditional 3D VFX workflows. Still impressive as hell at the time; it's just that the tech has progressed significantly since ~2018.
I got a Lytro Illum off eBay at a reasonable price, but it is a bit of a white elephant. I was hoping to shoot stereograms, but I haven't been able to do it with the stock software (I just get two images that look the same, with no clear disparity).
I've seen open-source software for plenoptic images which might be able to generate a point cloud, but I've only gotten one good shot out of the Lytro, and it was similar to a shot I took with this crazy lens.
It isn’t streaming the ollama output, so it feels slow (~3 words/second on a 3090 with the defaults). Using ollama directly, output starts streaming within a second and you can kill it early. I don’t understand the UX of looping responses to the same question either. This does not feel like magic.
I wrote a pixel-based visual diffing algorithm long ago, intended for a CI tool that finds all of the UI changes in a PR. As an intern at Inkling I broke the layout of a page I didn’t even know existed, and I’ve had this idea in my head ever since.
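For context, the core of a pixel-based diff can be as simple as a tolerance-gated per-channel comparison plus a bounding box for reporting. This is a minimal sketch, not the original tool; the function names and the flat tuple-based image representation are my own:

```python
def pixel_diff(img_a, img_b, tolerance=0):
    """Return the set of (x, y) coordinates whose pixels differ.

    img_a, img_b: equally sized 2D lists of (r, g, b) tuples.
    tolerance: per-channel absolute difference to ignore
               (useful for anti-aliasing noise).
    """
    if len(img_a) != len(img_b) or any(
        len(ra) != len(rb) for ra, rb in zip(img_a, img_b)
    ):
        raise ValueError("images must have identical dimensions")
    changed = set()
    for y, (row_a, row_b) in enumerate(zip(img_a, img_b)):
        for x, (pa, pb) in enumerate(zip(row_a, row_b)):
            # Flag the pixel if any channel moved more than the tolerance.
            if any(abs(ca - cb) > tolerance for ca, cb in zip(pa, pb)):
                changed.add((x, y))
    return changed


def bounding_box(changed):
    """Collapse changed pixels into one (x0, y0, x1, y1) region for reporting."""
    xs = [x for x, _ in changed]
    ys = [y for _, y in changed]
    return min(xs), min(ys), max(xs), max(ys)
```

A real CI tool would render both branches, run something like this per page, and cluster the changed regions into human-reviewable screenshots.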
I recently unwrapped linktransformer to get access to some intermediate calculations and realized it was a pretty thin wrapper around SentenceTransformer and DBSCAN. It would have taken me much longer to get similar results without copying their defaults and I/O flow. It’s easy to take for granted code you didn’t have to develop from scratch. It would be interesting if there were a tool that inlined dependency calls and shook out unvisited branches automatically.
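That split is easy to picture: the SentenceTransformer step turns records into vectors, and DBSCAN groups whatever lands within an eps-ball of enough neighbors. As a rough illustration of the clustering half only, here is a toy pure-Python DBSCAN (a real pipeline would call scikit-learn's implementation on embedding vectors; this sketch takes a caller-supplied distance function and is not linktransformer's code):

```python
from collections import deque


def dbscan(points, eps, min_pts, dist):
    """Toy DBSCAN. Labels: -1 = noise, 0..k = cluster id."""
    labels = [None] * len(points)  # None = not yet visited
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neighbors = [j for j in range(len(points))
                     if dist(points[i], points[j]) <= eps]
        if len(neighbors) < min_pts:
            labels[i] = -1  # provisional noise; may become a border point
            continue
        labels[i] = cluster
        queue = deque(neighbors)
        while queue:
            j = queue.popleft()
            if labels[j] == -1:
                labels[j] = cluster  # noise reachable from a core point -> border
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_neighbors = [k for k in range(len(points))
                           if dist(points[j], points[k]) <= eps]
            if len(j_neighbors) >= min_pts:
                # Only core points expand the cluster further.
                queue.extend(j_neighbors)
        cluster += 1
    return labels
```

Swap the 1-D points for embedding vectors and the distance for cosine distance and you have, roughly, the shape of what the wrapper delegates.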