Neural Renderer and Differentiable Rendering [video] (youtube.com)
86 points by abj on Dec 14, 2019 | 14 comments



Self-Plug: If you're interested in a Jupyter notebook tutorial on implementing a basic differentiable path tracer (a physically based rendering algorithm), I wrote up a tutorial here: https://blog.evjang.com/2019/11/jaxpt.html


Great notebooks! While looking at the "cornell_box_pt_numpy.ipynb" one I noticed a few odd things:

- The input colours of the walls are scaled by somewhat arbitrary values, and they don't seem to match the spectral reflectance distributions converted to, for example, sRGB:

    White
    -----
    Jupyter Notebook             : [255, 239, 196] / 255
    sRGB (Scene-Referred Linear) : [ 0.73825082  0.73601591  0.73883266]
    sRGB (Output-Referred)       : [ 0.87468844  0.87351472  0.87499367]

    Green
    -----
    Jupyter Notebook             : [.117, .4125, .115] * 1.5
    sRGB (Scene-Referred Linear) : [ 0.06816358  0.39867395  0.08645648]
    sRGB (Output-Referred)       : [ 0.28953442  0.66418933  0.32540958]

    Red
    ---
    Jupyter Notebook             : [.611, .0555, .062] * 1.5
    sRGB (Scene-Referred Linear) : [ 0.49389675  0.04880702  0.05120209]
    sRGB (Output-Referred)       : [ 0.73132279  0.24476901  0.2508128 ]

The correct values, assuming an sRGB working space, would be the middle rows, i.e. sRGB (Scene-Referred Linear).

- The final image does not seem to be encoded for output/display, i.e. there is no gamma correction, so it is not displayed correctly.
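For what it's worth, the scene-referred linear to output-referred values in the table above correspond to the standard piecewise sRGB transfer function, which can be sketched in a few lines (the numeric values below are taken from the "White" row above):

```python
def srgb_oetf(x):
    """Encode a scene-referred linear value to output-referred sRGB
    using the standard piecewise sRGB transfer function."""
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1 / 2.4) - 0.055

# The linear "White" value from the table maps to the
# output-referred value listed there:
linear_white = [0.73825082, 0.73601591, 0.73883266]
encoded = [srgb_oetf(c) for c in linear_white]
print(encoded)  # ≈ [0.8747, 0.8735, 0.8750]
```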


Thanks, you are correct :) I chose the spectral reflectance distributions by tuning the values manually until they looked correct. And yes, I didn't do gamma correction (I deliberately ignored photometry-related conversions), which could explain why my tuning resulted in odd values.


This would be very easy to fix using the above numbers and adding proper encoding; even a naive gamma 2.2 would help :)
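A naive gamma 2.2 encoding is just a power function applied to the linear image; a minimal numpy sketch, assuming the rendered image holds scene-referred linear values:

```python
import numpy as np

def gamma_encode(img, gamma=2.2):
    """Naive display encoding: clip to [0, 1] and raise linear
    values to 1/gamma. Close to, but not identical with, the
    piecewise sRGB curve."""
    return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)

# A mid-grey linear value brightens noticeably once encoded:
print(gamma_encode(np.array([0.18])))  # ≈ [0.459]
```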

Here is a Gist for the spectral computations: https://gist.github.com/KelSolaar/fb1e09b56b54d37d90e1886aef...

(It uses Colour: https://github.com/colour-science/colour)


Always appreciate your posts. A couple projects in my lab are in Jax because of your MAML post.


Thanks, your comment made my day :)


At one point I was considering trying to write a differentiable renderer for 2d (cartoons) look. It was a bit too much work for my skill level, but I wonder if it got done already?

I mean something that would be able to output a vector representation of something.


I think it's going to be possible for solo developers to create what we now consider to be AAA-quality games in the not too distant future.

It seems absurd, but not so much when you start looking at the implications of applying AI like this to every part of the development pipeline. Especially if the capability is eventually packaged up into commercial programs or integrated with existing engines.

Creating a detailed character model will no longer require a world-class artist working in ZBrush, but simply choosing a few images and asking the AI to interpolate, with the artist's input limited to weighting certain areas by hand to update the [ML] model.

There's already work on AI-based procedural animation systems. I'm sure AI rigging isn't far off either, if it doesn't already exist.

Procedural worlds are due for a quantum leap, in my opinion, far beyond the current state of the art.

Eventually voice actors are going to be obsolete as well. The current state of the art in speech synthesis has demonstrated that it's basically possible now using existing principles; it just needs a lot of refinement.

The hardest problem I can foresee is probably the automated creation of story, including NPC dialog and interactions. That's almost AGI-level hard, but it too is probably solvable by just evolving existing techniques.

People are going to be able to create their own version of The Matrix soon, and it won't require a $500M budget, let alone a $500k budget.


Without taking a position on future possibilities, I think your point misses half the truth: if solo developers can create what we now consider to be AAA-quality games, the bar will rise. Consider that the incredible tools at anyone's disposal for free* have not removed the need for a variety of talents to craft worlds and tell stories that make for compelling games.

One could draw comparisons to the availability of high-quality filmmaking equipment or incredibly realistic VST plugins for making orchestral scores. At its heart, the limitations of tools define the upper boundary of a medium or genre, but better tools do not lift all creators to some upper limit of goodness.

*until sales reach the point that royalties must be paid, in the case of Unreal Engine


>... I think your point misses half the truth: if solo developers can create what we now consider to be AAA-quality games, the bar will rise.

The historical comparisons you're drawing aren't really comparable to having AI at every stage of the pipeline. When pervasive and done correctly, AI is another animal entirely.

For your VST example, a more apt comparison might be AI that composes music to the level of, say, Hans Zimmer or John Williams. Without getting into the technical aspects, I believe doing so is merely an evolution of existing tech that we have today. It's hard, but it's achievable, especially when you have imagery from scored films to train against.

So, the day that technology exists, an artist's involvement there might be as little as changing the AI's RNG seed until the AI cooks up something they like. If the output is as good as Zimmer or Williams, then what will distinguish artists who actually are that talented?

I believe the answer is virtually nothing, at least in a technical sense. However, there are a lot of people today who like their food cooked by a real person instead of made by automation, even if the taste is just as good. The human touch will always remain relevant, but I think that both avenues will eventually hit what you refer to as an upper limit of goodness.


Also a self-plug: our differentiable renderer "redner" also has a bunch of Jupyter notebook tutorials: https://github.com/BachiLi/redner


What’s with the “includes paid promotion” label floating over the video? I don’t think I’ve seen that before.


I think it's Two Minute Papers disclosing that they've chosen to put a paid sponsor at the end of their video. [1]

[1] https://wersm.com/youtube-creators-includes-paid-promotion-d...


Got it - I thought it was something YouTube started labeling. It felt distracting and in the way, like those end-of-video links that always occlude the last bits of a video.

