
False positives do happen, but the style of the article raises some "crank" red flags.

- Doesn't get to the point until about halfway through

- Repeatedly mentions Einstein

- Appeals to quantum mechanics out of nowhere

The style is particularly hard to parse (or I'm particularly dense, though I'm generally comfortable reading papers on variational inference, neural networks, etc.). At the same time, a lot of it rings true-ish... Does anyone get where they're going with this?




Just to be abundantly clear: the ladder network [1] outscores this by a moderate margin, and is also the actual SOTA without data augmentation. In my mind at least, adding data rotations in the latent space is still different from a fully connected model with no data augmentation at all.
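For anyone unsure what "data augmentation" means in this context, here is a minimal sketch of input-space rotation augmentation. It's my own illustration, assuming MNIST digits arrive as 28x28 NumPy arrays and using scipy for the rotation; it is not the procedure from either paper.

    # Minimal sketch of input-space rotation augmentation, assuming MNIST digits
    # come as 28x28 NumPy arrays (illustrative only; not the procedure from
    # either paper).
    import numpy as np
    from scipy.ndimage import rotate

    def augment_with_rotations(images, max_angle=15.0, copies=4, seed=0):
        # Return the originals plus `copies` randomly rotated versions of each.
        rng = np.random.default_rng(seed)
        batches = [images]
        for _ in range(copies):
            angles = rng.uniform(-max_angle, max_angle, size=len(images))
            batches.append(np.stack([
                rotate(img, a, reshape=False, order=1, mode="nearest")
                for img, a in zip(images, angles)
            ]))
        return np.concatenate(batches, axis=0)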

It could do with a few more recent citations on generative modeling; the author doesn't seem fully aware of some of the most recent work in the area.

That said, the ideas presented are interesting and seem complementary to lots of existing approaches; I will be looking into this paper further.

[1] http://arxiv.org/abs/1507.02672


Guy, the only thing abundantly clear is that you are full of dung. Your shameless self-promotion may piss off the police chief here, murbard2, so be more careful. Ten digits better classified, out of 10000? With a structure more complicated than human DNA versus two lines of code? Congratulations! Oh, and how are your "stairways-to-heaven" even remotely universal? Show us something these networks have generated, like the VAE or Gibbs/ACE papers do. Perhaps you can show some density estimation results as in http://arxiv.org/abs/1502.04623 or http://arxiv.org/abs/1508.06585? Oja and Hyvarinen are great guys and have left their names in the pantheon of neural nets. But it is time for you and the other 12 people who live there to shake off the legacy of ICA and the obsession with orthogonality: Andrew Ng and company showed years ago that it is not needed and is in fact detrimental: http://ai.stanford.edu/~quocle/LeKarpenkoNgiamNg.pdf. Read it! Also, too much dung smells bad in the arctic summer; murbard2 here prefers comics and won't be reading your installments of 25+ pages of spaghetti any time soon. At least Oja and Hyvarinen know how to write.
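For anyone who skips the link: the gist of that paper, as I read it, is that the hard orthonormality constraint in ICA can be swapped for a soft reconstruction penalty plus the usual sparsity term. A rough sketch of that objective follows; it's my paraphrase of the idea, not the authors' code, and the lambda and smoothing values are illustrative.

    # Rough sketch of reconstruction ICA (RICA): instead of enforcing W W^T = I,
    # penalize the reconstruction error W^T W x - x alongside a (smoothed) L1
    # sparsity term on the features W x. Values of lam/eps are illustrative.
    import numpy as np

    def rica_objective(W, X, lam=0.1, eps=1e-8):
        # W: (n_features, n_inputs) filters; X: (n_inputs, n_examples) data.
        features = W @ X                      # latent responses W x
        recon = W.T @ features                # reconstruction through W^T W
        recon_cost = np.sum((recon - X) ** 2) / X.shape[1]
        sparsity = np.sum(np.sqrt(features ** 2 + eps)) / X.shape[1]
        return recon_cost + lam * sparsity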


2. He is explaining stochastic nets as non-equilibrium statistical systems, in analogy with the theory of fluctuations, which Einstein allegedly originated.

3. Drawing analogies with quantum mechanics (wave function = conditional density) can open the floodgates for applying a number of quantum techniques to nets.


Are you the original author? Could you detail a little more clearly the structure of the network and the training procedure?


Nah, it doesn't seem crankish to me. Also, https://scholar.google.com/citations?user=9UJmm_AAAAAJ&hl=en...


There's a bunch of phrasing that makes me want to give it the stinkeye, but nothing horrible. This isn't my field, but from part 1 it looks like he's trying to replace a Gaussian distribution with a Laplacian one and exploit some kind of underlying symmetry; that doesn't really mesh with his "conclusions" in part 5, though.
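To spell out what that swap amounts to, under my reading of part 1 (an assumption about the paper, not its stated model): the negative log-density goes from a quadratic (L2) penalty to an absolute-value (L1) one, which is what makes the Laplacian sparsity-inducing.

    # Negative log-densities of the two distributions, showing the L2-vs-L1
    # difference the swap amounts to (illustrative only; not the paper's model).
    import numpy as np

    def gaussian_nll(x, mu=0.0, sigma=1.0):
        # -log N(x | mu, sigma^2): quadratic in (x - mu)
        return 0.5 * ((x - mu) / sigma) ** 2 + np.log(sigma * np.sqrt(2 * np.pi))

    def laplace_nll(x, mu=0.0, b=1.0):
        # -log Laplace(x | mu, b): linear in |x - mu|
        return np.abs(x - mu) / b + np.log(2 * b)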


Replace which Gaussian density from which model exactly?


The MNIST dataset is not a difficult one, but it's a good start.


> MNIST dataset is not a difficult dataset

oh how far we have come



