Thank you :)


I hope so, but it'll be a good while before we can release anything. We have tight dependencies on other not-yet-open-source libraries, and until they're released, ours won't work either.


You're looking at something called a "neural radiance field" backed by a sparse, low-resolution voxel grid and a dense, high-resolution triplane grid. That's a bit of a word soup, but you can think of it like a glowing fog rendered with ray marching.
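To make the "glowing fog" picture concrete, here's a minimal volume-rendering sketch in JAX. It is not SMERF's actual renderer: density_and_color is a hypothetical stand-in for the voxel/triplane lookups, and all the numbers are made up.

    import jax
    import jax.numpy as jnp

    def density_and_color(points):
        # Hypothetical stand-in for querying the sparse voxel grid / triplanes:
        # returns a density and an RGB color for each 3D point.
        density = jnp.exp(-jnp.sum(points ** 2, axis=-1, keepdims=True))  # a fake blob of fog
        rgb = jax.nn.sigmoid(points)                                      # fake colors
        return density, rgb

    def render_ray(origin, direction, near=0.0, far=4.0, num_samples=64):
        # March along the ray, sampling the field at evenly spaced points.
        t = jnp.linspace(near, far, num_samples)
        points = origin[None, :] + t[:, None] * direction[None, :]
        density, rgb = density_and_color(points)

        # Standard volume-rendering quadrature: turn densities into alphas,
        # then alpha-composite the colors front to back.
        delta = (far - near) / num_samples
        alpha = 1.0 - jnp.exp(-density[:, 0] * delta)
        transmittance = jnp.concatenate([jnp.ones(1), jnp.cumprod(1.0 - alpha[:-1] + 1e-10)])
        weights = alpha * transmittance
        return jnp.sum(weights[:, None] * rgb, axis=0)

    # One pixel's worth of "fog", viewed from z = -2 looking down +z.
    pixel = render_ray(jnp.array([0.0, 0.0, -2.0]), jnp.array([0.0, 0.0, 1.0]))

The real system marches rays in the same spirit, just against baked grid/triplane features and with a lot of engineering to make it run in real time on ordinary devices.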

The benchmark details are a bit complicated. Check out the technical paper's experiment section for the nitty gritty details.


Hahah I'm in formal methods, not graphics, so I'm picturing a literal green glowing fog. But what I'm gathering is polygons partitioned into cube chunks. That of course leaves out the particulars of this impressive contribution, but I fear those are beyond my ken.


What's worked well for me: Find a way to put AI/ML on your critical path. Think of it like learning a new language: classes, lessons, and watching TV help, but nothing works like full-on immersion. In the context of AI/ML, that means finding a way to turn AI/ML into your full-time job or schooling. It's not easy! But if you do, you'll see endless returns.

If you don't have a solid enough footing to get a job in the field yet, the next best thing in my opinion: find a passion project and keep cooking up new ways to tackle it. On the way to solving your problem, you'll undoubtedly begin absorbing the tools of the trade.

Lastly, consider going back to school (a Bachelor's or Master's, perhaps?). It'll take far more than 1 hour/day, but I promise you'll see results far faster and far more concretely than with any other learning strategy.

Good luck!

Context: I've been a Researcher/Engineer at Google DeepMind (formerly Google Brain) for the last ~7 years. I studied AI/ML in my BS and MS, but burnt out of a PhD before publishing my first paper. Now I do AI/ML research as a day job.


Yes, I was leaning more towards the "personal project" idea as well, something around document understanding. I subscribe to the "learning by doing/immersion" philosophy as well (to a large extent).

The problem with projects is that one's understanding tends to become more and more specialised, while collaborating/connecting with other ML engineers sometimes requires a broader knowledge base.

Also, for giving advice and useful input to others on their projects, I feel a balanced knowledge base is useful.

Hence the question.


Greg Brockman's blog[1] has a few links on how he picked up ML. Another link at [2] describes the path Michal (the blog's author) followed (though it's framed as "how I got into ..."). Both blogs walk through how their authors got into the ML side of things, and they include a bunch of links (e.g. [3]).

I think it'll help if you can get a job at a company whose main focus is ML; you'll talk to folks who are doing research or solving problems using ML, and you'll learn. If not, I hope these links help, as the folks there (people way smarter than me, a SWE) had a similar question and documented the steps they took to close the gaps in their understanding.

[1] - https://blog.gregbrockman.com/how-i-became-a-machine-learnin... [2] - https://agentydragon.com/posts/2023-01-11-how-i-got-to-opena... [3] - https://github.com/jacobhilton/deep_learning_curriculum


Great resources. Brockman's blog especially makes the experience so much more relatable, knowing that even the top people had to struggle to get going in ML.


This should absolutely be possible! The hard part is making it look natural: NeRF models (including SMERF) have no explicit materials or lighting. That means that any character inserted into the game will look out of place.
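Mechanically, dropping a character in is straightforward; the sketch below is purely illustrative (every input and name here is hypothetical), and it's just a per-pixel depth test between a rasterized character and the radiance field's expected ray termination depth. What it can't do is match the character's shading to the scene, which is why the result looks pasted in.

    import jax.numpy as jnp

    def composite_character(nerf_rgb, nerf_depth, char_rgb, char_depth, char_alpha):
        # Per-pixel depth test: the character wins wherever it is closer to the
        # camera than the radiance field's expected termination depth.
        char_visible = (char_depth < nerf_depth) & (char_alpha > 0.5)
        return jnp.where(char_visible[..., None], char_rgb, nerf_rgb)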


Why bother to make it look natural when you can have a really awkward greenscreen-like effect for nostalgic and "artistic" purposes?


Oof, there's a lot of machinery here. It depends a lot on your academic background.

I'd recommend starting with a tutorial on neural radiance fields, aka NeRF (https://sites.google.com/berkeley.edu/nerf-tutorial/home), and an applied overview of deep learning with tools like PyTorch or JAX. This line of work is still "cutting edge" research, so a lot of the knowledge hasn't been rolled up into textbook or article form yet.
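For a taste of what's under the hood, here's the core of a vanilla NeRF field in JAX: a positional encoding feeding a small MLP that predicts density and color. This is a minimal sketch of the idea from the original NeRF paper, not code from the tutorial, and the layer sizes are arbitrary.

    import jax
    import jax.numpy as jnp

    def positional_encoding(x, num_freqs=6):
        # Map each coordinate to sines/cosines at increasing frequencies so the
        # MLP can represent high-frequency detail.
        freqs = 2.0 ** jnp.arange(num_freqs)
        angles = x[..., None] * freqs  # (..., 3, num_freqs)
        enc = jnp.concatenate([jnp.sin(angles), jnp.cos(angles)], axis=-1)
        return enc.reshape(*x.shape[:-1], -1)

    def init_mlp(key, sizes):
        # Plain dense layers; sizes are arbitrary, for illustration only.
        keys = jax.random.split(key, len(sizes) - 1)
        return [(jax.random.normal(k, (m, n)) * 0.1, jnp.zeros(n))
                for k, (m, n) in zip(keys, zip(sizes[:-1], sizes[1:]))]

    def nerf_field(params, xyz):
        # Query the field at 3D points: returns density (>= 0) and RGB in [0, 1].
        h = positional_encoding(xyz)
        for W, b in params[:-1]:
            h = jax.nn.relu(h @ W + b)
        W, b = params[-1]
        out = h @ W + b
        density = jax.nn.softplus(out[..., :1])
        rgb = jax.nn.sigmoid(out[..., 1:4])
        return density, rgb

    params = init_mlp(jax.random.PRNGKey(0), [36, 64, 64, 4])
    density, rgb = nerf_field(params, jnp.zeros((8, 3)))

Train it by rendering pixels with the usual volume-rendering quadrature and minimizing a photometric loss against the captured images; the tutorial covers that end to end.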


In terms of the live viewer, there's actually no limit on footprint size. 300 m^2 is simply the biggest indoor capture we had!


Thanks for the feedback!

I agree, we could do better with the movement UX. A challenge for another day.


Since the viewer is on GitHub, I’ll take it for a spin.

Are you accepting pull requests?


You're probably thinking of 3D Gaussian Splatting (3DGS), another fantastic approach to real-time novel view synthesis. There's tons of great work being built on 3DGS right now, and the dust has yet to settle on which method is "better". Right now, I can say that SMERF has slightly higher quality than 3DGS on small scenes and visibly higher quality on big scenes, and it runs on a wider variety of devices, but it takes much longer than 3DGS to train.

https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/


Block-NeRF is a predecessor work that helped inspire SMERF, in fact!

https://waymo.com/research/block-nerf/


Very cool. Thanks!

