Hacker News
3D Reconstruction with Spatial Memory (hengyiwang.github.io)
95 points by smusamashah 7 months ago | 13 comments



Completely newbie questions from someone outside the field who hasn't been following closely:

Is the spatial memory a fixed size (how big?) or does it grow over time?

And is there a point at which it is saturated and future results decline?


> Is the spatial memory a fixed size (how big?) or does it grow over time?

Fixed number of parameters

> And is there a point at which it is saturated and future results decline?

The model is statistical, so precision will steadily improve, asymptotically approaching a maximum


From my understanding this is using the input images to render a new image from an arbitrary viewpoint. There may be a 3D reconstruction in the pipeline but this package produces a rendered image as the output. Anyone else see it this way? It’s very fast and cool for sure.


Very interesting stuff. I wonder how these one-camera (single-viewpoint), flat-image models perform in completely novel environments (not seen in the training data). I am also wondering if this model could be used with stereo cameras as is.


Two authors (so, one, lol)!

Truly impressive, congrats to the young grad :D.


> Two authors (so, one, lol)!

What do you mean?


First one did the work, second one is the professor of the lab they are in.


From the previews I'm guessing this isn't going to be any use for 3d scanning?


Can this result in a colmap dataset that can be used by Gaussian Splatting generation?


There would not be much point. Colmap is already very capable of reconstructing a 3D scene from images with unknown poses, provided you have the camera intrinsics.

Besides processing speed, this project's (and the underlying dust3r model's) strength is that it works with very few images. You basically just need two, and it can infer pseudo-intrinsics and matching extrinsics on its own.

I don't see why it could not be adapted to output gaussian splats instead. As a matter of fact, it's already been done with dust3r: https://github.com/nerlfield/wild-gaussian-splatting.
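For anyone outside the field, "camera intrinsics" here means the pinhole parameters (focal lengths and principal point) that turn pixel coordinates into 3D rays and back. A minimal sketch in plain Python, with illustrative values that are not taken from COLMAP or dust3r:

```python
# Pinhole intrinsics: focal lengths (in pixels) and principal point.
# Illustrative values for a 640x480 image; real values come from
# calibration, EXIF data, or (in dust3r's case) the network itself.
fx = fy = 500.0
cx, cy = 320.0, 240.0

def project(X, Y, Z):
    """Project a 3D point in camera coordinates to pixel coordinates."""
    return (fx * X / Z + cx, fy * Y / Z + cy)

def backproject(u, v, depth):
    """Invert the projection: pixel + depth -> 3D point. Intersecting
    these rays across views with known extrinsics is what triangulation
    (e.g. in Colmap) boils down to."""
    return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)

u, v = project(0.1, -0.2, 2.0)
X, Y, Z = backproject(u, v, 2.0)  # recovers (0.1, -0.2, 2.0)
```

Colmap normally gets these parameters from EXIF or calibration; the point above is that dust3r can regress usable pseudo-intrinsics directly from the images.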


Colmap is also very slow for large scenes. Replacing Colmap with something faster would be a huge improvement for 3DGS pipelines. But Spann3r isn't there yet imo


This relies on Dust3r underneath as part of its stack (I didn’t read carefully enough to tell you if it’s training or inference but I think it’s training), which outputs splats. What’s special about this is that it outputs really dense nice point clouds with arbitrary photos. We have a lot more tools that work well with point clouds than with splats, so this is nice work.
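On the tooling point: one reason dense point clouds are so convenient is that they drop straight into existing viewers (MeshLab, CloudCompare, Open3D) via the ASCII PLY format. An illustrative sketch of a minimal writer (the function name and layout are my own, not from Spann3r or dust3r):

```python
def write_ply(path, points):
    """Write an iterable of (x, y, z, r, g, b) tuples to an ASCII PLY file.

    x, y, z are floats in the reconstruction's (arbitrary) scale;
    r, g, b are 0-255 ints. Most point-cloud viewers accept this header.
    """
    points = list(points)
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("end_header\n")
        for x, y, z, r, g, b in points:
            f.write(f"{x} {y} {z} {r} {g} {b}\n")

write_ply("cloud.ply", [(0.0, 0.0, 1.0, 255, 0, 0),
                        (0.1, 0.0, 1.2, 0, 255, 0)])
```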


Interesting! How well does it handle scenes containing moving objects?





