The Principles of Diffusion Models (arxiv.org)
226 points by Anon84 1 day ago | hide | past | favorite | 28 comments




If you're more into videos, be sure to check out Stefano Ermon's CS236 Deep Generative Models [1]. All lectures are available on YouTube [2].

[1] https://deepgenerativemodels.github.io/

[2] https://m.youtube.com/playlist?list=PLoROMvodv4rPOWA-omMM6ST...


I wish Stanford still offered CS236, but they haven't run it in two years :(

hn question: how is this not a dupe of my days-old submission (https://news.ycombinator.com/item?id=45743810)?

It is, but dupes are allowed in some cases:

“Are reposts ok?

If a story has not had significant attention in the last year or so, a small number of reposts is ok. Otherwise we bury reposts as duplicates.”

https://news.ycombinator.com/newsfaq.html

Also, from the guidelines: “Please don't post on HN to ask or tell us something. Send it to hn@ycombinator.com.”


I presume that email address is for when you want to ask something of Hacker News, not to ask something about Hacker News.

For example, they probably didn't want posts like "Hey Hacker News, why don't you call for the revival of emacs and the elimination of all vi users?" and would rather you email them so they can ignore it. But they also don't want email messages asking "How do I italicize text in a Hacker News comment? Seriously, I can't remember, and I would have done so earlier in this comment if I could" and would rather you ask the community, who can answer it without bothering anyone working at Y Combinator.


Are you saying this based on experience, or are you projecting? In my experience (tho not asking how to italicize text using * characters), Dang and tomhow are happy to answer all sorts of questions. Sometimes they do get bogged down by the reality of running a site of this size manually, as it were, but I can't remember a question that didn't eventually get answered. I'll even tell them I vouched for this bunch of dead comments, was that the right thing to do? And one of them will write back saying mostly yes, but just fyi comment xyz was more flamebaity than ideal, but thank you for asking and working on calibrating your vouch-o-meter.

in other words - "it is lol, also go pound sand"

What's the problem? Someone submitted it for people to read but it didn't catch on, now it's resubmitted and people can read it after all. Everyone happy. Don't be so attached to imaginary internet points.

That's not what I said, but okay.

CTRL-F: "Fokker-Planck"

> 97 matches

Ok I'll read it :)


why am I only getting 26 matches? where's the threshold then? :D

It's all about the en dashes and Fokker-Planck vs Fokker–Planck.

PDF files often break up sentences in ways that the find utility can't follow, so even if they all have the same dash, it might not find every occurrence. At least those names are uncommon enough that you could search for just one of them.
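To illustrate the dash mismatch: a hyphen-minus and an en dash are distinct Unicode code points, so a plain substring search for one never matches the other. A minimal sketch (the sample string is made up for illustration):

```python
# U+002D HYPHEN-MINUS vs U+2013 EN DASH: different characters,
# so naive substring search treats them as unrelated.
pdf_text = "the Fokker–Planck equation"  # en dash, as often typeset in PDFs

print("Fokker-Planck" in pdf_text)  # hyphen-minus: no match
print("Fokker–Planck" in pdf_text)  # en dash: match

# Searching for a single uncommon surname sidesteps the problem.
print("Fokker" in pdf_text)
```

This is why two readers searching the same PDF can report wildly different match counts.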

AI is definitely related to dashes!!

Is there something equivalent in scope and comprehensiveness for transformers?

Reading this reinforces that a lot of what makes up current "AI" is brute forcing and not actually intelligent or thoughtful. Although I suppose our meat-minds could also be brute-forcing everything throughout our entire lives, and consciousness is like a chat prompt sitting on top of the machinery of the mind. But artificial intelligence will always be just as soulless and unfulfilling as artificial flavors.

Guessing you’re a physicist based on the name. You don’t think automatically doing RG flow in reverse has beauty to it?

There’s a lot of “force” in statistics, but that force relies on pretty deep structures and choices.


Are you familiar with the "Bitter Lesson" by recent Turing Award winner Rich Sutton? http://www.incompleteideas.net/IncIdeas/BitterLesson.html

Always is a long time. It may get better.

Intelligence is the manifold that these brute-force algorithms learn.

Of course we don’t brute-force this in our lifetime. Evolution encoded the coarse structure of the manifold over billions of years. And then encoded a hyper-compressed meta-learning algorithm into primates across millions of years.


Learning a manifold is not intelligence as it lacks the reasoning part.

Learning the manifold is understanding. Reasoning, which takes place on the manifold, is applying that understanding.

I am not sure what your definition of "understanding" is that you apply here.

I mean understanding physics and the universe of natural possibilities; what can happen. Then comes why.

Fitting a manifold to a bunch of samples does not allow you to understand what can happen in the universe. For example, if you train a regular diffusion model on correct sudokus, it will produce sudokus with errors because it does not understand the rules.

I'm scared by the maths

Are you sure you're not scared?

470 pages?!?!?!? FML! :-D


