
> Machine learning often boils down to the art of developing an intuition for where something went wrong (or could work better) when there are many dimensions of things that could go wrong (or work better).

I'm not a practitioner, but I always thought this was the main challenge. Uses of ML are rarely "right" or "wrong" per se; instead, practitioners rely on intuition to get a model that "works" in a practical sense.

There is no royal road to machine learning: you can't decide you're going to build an algorithm that detects bad comments (as determined by human consensus) and then simply write an implementation you can reason out to be correct, the way you could prove a graph algorithm correct. Trial and error and hard-to-transcribe intuition are baked into the process.
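
To make that concrete, here is a minimal sketch of the loop in Python with scikit-learn, using a made-up toy dataset of labeled comments (everything here is illustrative, not anyone's actual method): each choice of features, model, and metric is a judgment call you validate empirically on held-out data rather than prove correct.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    # Hypothetical toy data: comments labeled 1 ("bad") or 0 ("ok") by human consensus.
    comments = [
        "great point, thanks for the link",
        "you are an idiot and so is your post",
        "interesting paper, I learned a lot from it",
        "this thread is garbage, go away",
        "could you share a source for that claim?",
        "nobody cares about your worthless opinion",
        "nice write-up, the benchmarks were helpful",
        "what a stupid thing to say",
    ]
    labels = [0, 1, 0, 1, 0, 1, 0, 1]

    train_x, test_x, train_y, test_y = train_test_split(
        comments, labels, test_size=0.25, stratify=labels, random_state=0
    )

    # Arbitrary modeling choices: bag-of-words features and a linear classifier.
    # Nothing here is provably "correct"; you judge it by held-out metrics.
    vectorizer = TfidfVectorizer(ngram_range=(1, 2))
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vectorizer.fit_transform(train_x), train_y)

    preds = clf.predict(vectorizer.transform(test_x))
    print("held-out F1:", f1_score(test_y, preds))
    # If the score is unsatisfying, the loop starts again: different features,
    # a different model, more data, a different threshold, guided by intuition.

If the held-out score disappoints, nothing in the code tells you which of the many dimensions (data, features, model, labels) is at fault; that diagnosis is exactly the intuition the quoted passage is describing.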

(I'd love to get some insider insight on this comment!)



