
TLDR:

Because it gives more weight to one big error than to multiple small ones with the same sum.

We want the errors to be noise, not systematic bias. Noise usually has a Gaussian distribution, and under a Gaussian distribution several small deviations are far more likely than one big one.
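
Not part of the original comment, but a minimal Python sketch of that last point, assuming i.i.d. standard normal noise: the residuals (3, 0, 0) and (1, 1, 1) have the same total absolute error, yet the "many small errors" case is far more likely.

    import math

    def normal_pdf(x):
        # Density of a standard normal distribution at x
        return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

    def likelihood(residuals):
        # Joint density of the residuals under i.i.d. standard normal noise
        p = 1.0
        for r in residuals:
            p *= normal_pdf(r)
        return p

    print(likelihood([3.0, 0.0, 0.0]))  # ~0.0007
    print(likelihood([1.0, 1.0, 1.0]))  # ~0.0142, about 20x more likely
    # The ratio is exp((9 - 3) / 2): only the sums of *squared* residuals matter.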




An example:

Imagine these two predictors:

    Reality: 1 1 1 1 1 9 1 1 1 1
    Predic1: 2 2 2 2 2 2 2 2 2 2
    Predic2: 3 3 3 3 3 6 3 3 3 3

    SumOfErrors(Predic1) is 16
    SumOfErrors(Predic2) is 21
So was Predic1 better than Predic2? No. Correctly predicting the one outlier shows more predictive power than staying close to the average. Therefore we use SumOfSquaredErrors:

    SumOfSquaredErrors(Predic1) is 58
    SumOfSquaredErrors(Predic2) is 45
This shows that Predic2 is "better" and we are happy :)
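
(Not from the original post, just the same arithmetic reproduced as a quick Python sketch:)

    reality = [1, 1, 1, 1, 1, 9, 1, 1, 1, 1]
    predic1 = [2] * 10
    predic2 = [3, 3, 3, 3, 3, 6, 3, 3, 3, 3]

    def sum_of_errors(pred, truth):
        # Sum of absolute errors
        return sum(abs(p - t) for p, t in zip(pred, truth))

    def sum_of_squared_errors(pred, truth):
        return sum((p - t) ** 2 for p, t in zip(pred, truth))

    print(sum_of_errors(predic1, reality), sum_of_errors(predic2, reality))                  # 16 21
    print(sum_of_squared_errors(predic1, reality), sum_of_squared_errors(predic2, reality))  # 58 45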


It should be 16 & 21 and 58 & 45.


True. Fixed. Thanks.


But what I've never understood is: if your objective is to magnify errors, why not cube them? Why not raise them to a greater power still? If the other benefit is that all negative values raised to an even power become positive, then why not take the absolute value of the cube? No matter what, the degree to which we magnify errors strikes me as arbitrary.
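
For concreteness (a sketch that is not part of the original comment), here is what happens to the example above if we swap the square for a general exponent p on the absolute error:

    reality = [1, 1, 1, 1, 1, 9, 1, 1, 1, 1]
    predic1 = [2] * 10
    predic2 = [3, 3, 3, 3, 3, 6, 3, 3, 3, 3]

    def penalty(pred, truth, p):
        # Sum of |error|**p; p=1 is absolute error, p=2 is squared error
        return sum(abs(a - b) ** p for a, b in zip(pred, truth))

    for p in (1, 2, 3):
        print(p, penalty(predic1, reality, p), penalty(predic2, reality, p))
    # p=1:  16  21  -> Predic1 ranked better
    # p=2:  58  45  -> Predic2 ranked better
    # p=3: 352  99  -> Predic2 ranked better by an even larger margin

The exponent is indeed a modelling choice; p = 2 is the one that lines up with the Gaussian-noise argument in the top comment.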


> Noise usually has a Gaussian distribution

This belief is often a good indicator that a data scientist is divorced from reality.





