Your question is certainly interesting in an epistemological sense; in a way, it is the basis of relativism.

Having read Larry Laudan recently, however, I'm a big fan of his pragmatism, which trumps the question a bit: just use whatever works.

In this case, we don't all need to be aware of loss aversion, and I suspect barely anyone was using it in a practical sense anyway. A pragmatist might say that it was only ever a theory, and that this is why it thrived in largely theoretical exercises (or, in the case of economics, in a context where the tangible consequences were extremely far removed from the application of the theory). But to me loss aversion still seems like an after-the-fact observation, though I did believe it at the time, and those are the worst kind of observations ;)

In short, from where I stand there is no such thing as "correctness", only what has been successfully applied and in what context; temporary applicability, if you will. Any further interpretation is usually a case of extrapolating knowledge from a vastly incomplete picture. As a psychology student, I look at the field's early history as a tragic example of why this is counterproductive. Some habits are hard to break, though.




So what if the "certain topic" is something with no immediately obvious thing that "works", e.g. climate change?


I'm afraid I don't know enough to talk about it, but to me it seems more a collection of observations than a "theory" in the speculative sense. As far as I'm aware, though, we have observed that cities with lower greenhouse gas levels tend to be colder, so in that sense lowering CO2 levels "works" to reduce temperature, though we can't speak of global "correctness" until we manage the same with the global temperature.

This is a special case, however, in that we have to assume correctness because otherwise we'll all be much worse off, and the possible costs of reducing pollution are slim in comparison. But if anything, I think this supports my point that correctness itself doesn't matter, only the material consequences do.


But if only potential consequences matter, regardless of the odds, then you run into the problem of Pascal's Wager [0]: it's best to believe God exists, because the consequence of not believing and being wrong (going to hell) is far worse than the consequence of believing and being wrong (a little wasted devotion).

[0] https://en.wikipedia.org/wiki/Pascal%27s_Wager
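
To make the expected-value structure explicit, here is a minimal sketch in Python; the probability and costs are made-up placeholders, purely to show the shape of the argument:

    # Pascal's Wager as naive expected value: any nonzero probability
    # of an infinite payoff swamps every finite consideration.
    p_god = 1e-9                    # hypothetical tiny probability God exists
    cost_of_belief = 1.0            # finite cost of believing (time, devotion)

    ev_believe = p_god * float("inf") - (1 - p_god) * cost_of_belief
    ev_disbelieve = p_god * float("-inf")  # hell if you guessed wrong

    print(ev_believe, ev_disbelieve)  # inf -inf: believing "wins" for any p > 0

Once an infinite term enters the calculation, the odds stop mattering at all, which is exactly the problem.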


Pascal's Wager is mainly faulty because it relies on a complete lack of information, unlike climate change, where we have some. If we had any clues at all that an existing god was benevolent, believing would most definitely be the right choice from a pragmatic perspective. Circling back to climate change: it's hard to be certain, but we do have a lot of clues telling us we should do something.

I assume you're not headed into religious-debate territory; in any case, I don't imagine pragmatists focus much on metaphysical matters (I know I don't).


Right, so how about AI then? There's a very small risk that AI becomes extremely malicious and wipes out the entire human race. In other words, the potential cost is infinite. Since there is some (albeit very little) evidence that this might happen, should we spend all our resources on preventing it?
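
To spell out the arithmetic behind this (a sketch with made-up numbers; the point is the structure, not the estimates):

    # A minuscule probability of an unbounded loss still dominates any
    # finite cost of prevention in a naive expected-value calculation.
    p_doom = 1e-12                           # hypothetical tiny probability of AI doom
    cost_of_prevention = 1e15                # enormous, but finite

    ev_do_nothing = p_doom * float("-inf")   # -inf: extinction treated as infinite loss
    ev_prevent = -cost_of_prevention         # astronomically bad, but finite

    # Naive expected value says prevent at any cost, however small p_doom is.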


I don't follow?



