Hacker News new | past | comments | ask | show | jobs | submit login

I'm going out on a limb here, but I'm guessing that if this explanation makes sense to you, you don't need to be told the difference between deep learning and conventional machine learning. Could be wrong!



It sounds like you're implying, but don't want to state, that the explanation is not clear enough for outsiders (such as yourself?)

If I have read your comment correctly, I'll say it for you: as an outsider, I read this carefully until I gave up because it was too technical, which happened right at the very top, in the third paragraph:

>Those hidden layers normally have some sort of sigmoid activation function (log-sigmoid or the hyperbolic tangent etc.). For example, think of a log-sigmoid unit in our network as a logistic regression unit that returns continuous-valued outputs in the range 0-1

All this implies I know all about multi-layer perceptrons - and I don't. I can't follow the instructions to "think of a log-sigmoid unit in our network as a logistic regression unit" because I don't know what those terms mean.
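For anyone else stuck on the same terms: a "log-sigmoid unit" is just a weighted sum of inputs squashed through the sigmoid function, so its output always lands between 0 and 1. Here is a minimal sketch (the function names `log_sigmoid` and `unit` are my own, not from the article):

```python
import math

def log_sigmoid(z):
    """Squash any real number into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def unit(inputs, weights, bias):
    """One 'unit': weighted sum of inputs plus a bias, then the sigmoid."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return log_sigmoid(z)

print(log_sigmoid(0.0))                      # 0.5 -- the midpoint of the curve
print(unit([1.0, 2.0], [0.5, -0.25], 0.0))   # 0.5*1 + (-0.25)*2 = 0, sigmoid(0) = 0.5
```

That's the whole "logistic regression unit" idea: large positive sums push the output toward 1, large negative sums toward 0.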

Just as I would give up on a recipe if I got to an instruction I didn't understand. For example, if I read:

>Glaze the meringue with a blow torch, or briefly in a hot oven.

Yeah, uh, no... I don't even know what glazing means, or what is "briefly in a hot oven". So I just stop reading. When I'm instructed to do something I can't, I go look at something else unless I'm feeling very adventurous.[1]

This blog post isn't written at my level.

--

[1] As a last hurrah I'll open a tab and Google https://www.google.com/search?q=what+is+glazing - likewise I tried https://www.google.com/search?q=what+is+a+multilayer+percept... but decided after reading the Wikipedia link that it was too "deep" for me.


A month ago I was in the same place: I would start reading a short blog post on RNNs/ConvNets/etc., and within 2-3 paragraphs my eyes would glaze over from the math and other foreign terminology. Frustrating. To try to fix this I am "auditing" the Stanford course on ConvNets: http://cs231n.stanford.edu/syllabus.html

I'm about 2/3 done with the homework, and I understand this stuff now. I'll never be a data scientist, but I know enough to implement these networks on my own, and to understand blog posts like this. It's a lot of work for one course, much more than I remember from my own undergrad years. I had to revisit Calculus & Linear Algebra too. But if you're genuinely interested in this stuff, you can pick it up.


"I had to revisit Calculus & Linear Algebra too" - what resource would you recommend for this? after being a web developer for a couple of years i find myself rusty and unable to find good resources for this. Trying to get into machine learning but i've forgotten most of the math


Nope. I have a 1990s understanding of perceptrons; this helped show me a bit of what the current rage is about.


Not true. I know quite a bit about three-layer networks and backprop but had been puzzled about how people were training networks with more layers. This article was helpful.


To me anyway, this short explanation was extremely useful. I had played with simple neural nets just once in the past.



