Is this better than what was previously achievable with classical structure from motion? It seems worse, given the extreme detail loss?

And if they're just plotting a smoothed but similar path through the scenery, didn't Microsoft do that in 2014 with Hyperlapse?

Structure from motion does not produce good visual fidelity on plants. I’m designing a farming robot and I want remote farmers to be able to view a 3D image of the plants to check for issues, so fidelity is very important. I’ve done a lot of experiments with photogrammetry, and NeRF, while still presenting a lot of technical challenges, seems far superior for this.

I get the sense that they are mostly using the smoothed views as an example of good results on long scenes. Ultimately the point of NeRF is novel/arbitrary view synthesis (sketched below), which you’re not going to get with Hyperlapse.

And NeRF over long tracks is exactly what we need to capture a long row of plants at the farm.
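
For anyone curious what "novel view synthesis" boils down to: NeRF renders each pixel by sampling densities and colors along the camera ray and alpha-compositing them (the quadrature from the original NeRF paper). A minimal NumPy sketch; the render_ray helper and the toy inputs are mine for illustration, not from this paper:

    import numpy as np

    def render_ray(sigmas, rgbs, deltas):
        # sigmas: (N,) volume densities at N samples along the ray
        # rgbs:   (N, 3) predicted colors at those samples
        # deltas: (N,) distances between adjacent samples
        # alpha_i = 1 - exp(-sigma_i * delta_i): opacity of segment i
        alphas = 1.0 - np.exp(-sigmas * deltas)
        # T_i = prod_{j<i} (1 - alpha_j): transmittance up to sample i
        trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
        weights = trans * alphas
        # pixel color = sum_i w_i * c_i
        return (weights[:, None] * rgbs).sum(axis=0)

    # toy usage: 64 made-up samples along a single ray
    rng = np.random.default_rng(0)
    n = 64
    print(render_ray(rng.uniform(0, 2, n),
                     rng.uniform(0, 1, (n, 3)),
                     np.full(n, 0.05)))

Once the network is trained, you can evaluate this along rays from any camera pose, which is what lets you fly a smooth virtual path the original footage never took.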


This algorithm constructs a 3D environment from the video data; they're just showcasing it with stabilization. Classical methods require better cameras and more metadata. Deep learning is an opportunity for more robust methods toward the same end, and it can also do things like estimate lighting and capture large scenes.


The renderings look a lot prettier, but the 3D structure doesn't seem all that good.

Most NeRF pipelines use classical bundle adjustment (e.g. COLMAP) as an initialization for the camera poses, but this one does not; the authors mention that they leave bundle adjustment for future work.
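
For reference, the initialization step being skipped here usually looks roughly like this with COLMAP's Python bindings (a sketch only; the paths are placeholders and the pycolmap API has shifted between versions):

    import os
    import pycolmap  # pip install pycolmap

    database_path = "scene.db"   # placeholder paths
    image_path = "frames/"
    output_path = "sparse/"
    os.makedirs(output_path, exist_ok=True)

    # SIFT feature extraction and exhaustive pairwise matching
    pycolmap.extract_features(database_path, image_path)
    pycolmap.match_exhaustive(database_path)

    # Incremental SfM, which bundle-adjusts camera poses and a sparse
    # point cloud; these poses are what NeRF training normally consumes.
    maps = pycolmap.incremental_mapping(database_path, image_path, output_path)
    for image_id, image in maps[0].images.items():
        print(image_id, image.name)

Doing without this step means the method has to recover poses on its own, which is presumably why the authors flag bundle adjustment as future work.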
