
Could be. Also, as you imply, they'd have to loosen the regularization penalty on θ, and it may be hard to loosen it without making the model too prone to overfitting.

Maybe their current setup of keeping θ "dumb" pushes the neural network into the role of the "algorithm", while the higher-variance, puzzle-specific information is encoded in z, though this separation seems fuzzy to me.
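To make the tradeoff concrete, here's a minimal sketch (hypothetical names and penalty weights, not the paper's actual setup) of the kind of split regularization being discussed: a strong L2 penalty on the shared parameters θ and a weaker one on the per-puzzle latent z, added to the task loss during training.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=(16, 16))  # shared network weights (the "algorithm")
z = rng.normal(size=(4, 16))       # per-puzzle latent codes

lam_theta = 1e-2  # strong penalty keeps theta "dumb"
lam_z = 1e-4      # weaker penalty lets z absorb puzzle-specific variance

def penalty(theta, z, lam_theta, lam_z):
    # separate L2 penalties; this term would be added to the task loss
    return lam_theta * np.sum(theta ** 2) + lam_z * np.sum(z ** 2)

reg = penalty(theta, z, lam_theta, lam_z)
```

Loosening the θ penalty here (lowering lam_theta) shifts capacity from z into the shared network, which is exactly where the overfitting worry comes in.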
