
That's interesting. What makes the new network more "real" than the one trained with backpropagation?



Rather, the first one was simply a single-node linear perceptron (a linear function) that I trained with backprop because I could, even though better techniques exist for fitting a linear function. Now that it's a "real" network, backprop is the appropriate tool.
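To make the point concrete, here's a minimal sketch (the data and variable names are hypothetical, not from the thread): a single linear node y = w*x + b fit with gradient descent, which is the degenerate backprop case described above, followed by the closed-form least-squares fit that would be the better-suited technique for a purely linear function.

    import numpy as np

    # Hypothetical noisy linear data to recover.
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 100)
    y = 3.0 * x + 0.5 + rng.normal(0, 0.1, 100)

    # Single-node "network": fit w and b by gradient descent on
    # mean squared error. Backprop here is just the chain rule
    # applied to one linear node.
    w, b, lr = 0.0, 0.0, 0.1
    for _ in range(500):
        err = (w * x + b) - y
        w -= lr * 2 * np.mean(err * x)  # dMSE/dw
        b -= lr * 2 * np.mean(err)      # dMSE/db

    # The better technique for a linear function: closed-form
    # least squares gives the same answer without iterating.
    w_ls, b_ls = np.polyfit(x, y, 1)
    print(w, b)        # ~3.0, ~0.5 after many gradient steps
    print(w_ls, b_ls)  # same values in one shot

Both approaches converge to the same line; gradient descent only becomes the reasonable default once the model has nonlinearities and multiple layers, where no closed-form solution exists.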


Makes sense. Thanks.



