It's still BS. Outliers are a signal that you don't have a simple, nicely decaying distribution.
The right way to deal with outliers is to use a method that acknowledges their existence, not to ignore them. For example, if outliers destroy your OLS linear regression, it's because your error is not normal. That means you need to do Bayesian linear regression with a non-normal error term, not just throw them away.
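To make that concrete, here's a minimal sketch of the kind of thing I mean in Python, assuming PyMC and ArviZ are installed; the synthetic data, the priors, and the Exponential prior on nu are my own illustrative choices, not anything canonical. The Student-t likelihood lets heavy-tailed points widen the tails instead of dragging the slope around the way they would under a normal likelihood.

    import arviz as az
    import numpy as np
    import pymc as pm

    # Synthetic data: a clean linear trend plus a few gross outliers.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 100)
    y = 2.0 * x + 1.0 + rng.normal(0, 1, size=x.size)
    y[::17] += 25  # inject some outliers

    with pm.Model():
        intercept = pm.Normal("intercept", mu=0, sigma=10)
        slope = pm.Normal("slope", mu=0, sigma=10)
        sigma = pm.HalfNormal("sigma", sigma=5)
        # Degrees of freedom: small nu = heavy tails, large nu ~ normal.
        nu = pm.Exponential("nu", lam=1 / 30)
        mu = intercept + slope * x
        # Non-normal (Student-t) error term, so outliers don't wreck the fit.
        pm.StudentT("obs", nu=nu, mu=mu, sigma=sigma, observed=y)
        idata = pm.sample(1000, tune=1000)

    print(az.summary(idata, var_names=["intercept", "slope", "nu"]))

The posterior for nu also tells you something: if it comes out small, the data itself is saying the tails are heavy, which is more informative than silently deleting the points that made them heavy.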
Depends. Throwing outliers out without thinking is obviously wrong. But in many instances outliers can be just invalid measurements, in which case you should ignore them.
> In many instances outliers can be just invalid measurements, in which case you should ignore them.
signal[i] = value[i] + noise[i].
If you know that value[i] == NaN, then by all means throw out signal[i]. If value[i] != NaN, then you're better off modeling noise[i], and using that model to give you information about value[i], as yummyfajitas suggests.
This is trivial to see if noise[i] == 0, but for some reason becomes progressively harder for people as noise[i] increases.
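A tiny sketch of that distinction in Python, assuming you have some out-of-band validity flag from the instrument (the sensor_offline array here is hypothetical):

    import numpy as np

    # signal[i] = value[i] + noise[i]
    signal = np.array([1.1, 2.3, 0.0, 3.9, 40.0, 5.2])
    # Hypothetical instrument flag: True where the measurement never
    # happened, i.e. value[i] is effectively NaN, not merely noisy.
    sensor_offline = np.array([False, False, True, False, False, False])

    # Fine to drop these: there is no value[i] to recover.
    clean = signal[~sensor_offline]

    # The 40.0 survives: it's an outlier, but it's still value + noise.
    # Don't delete it; model the noise (e.g. heavy-tailed errors as above).
    print(clean)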