
This is an internal debate I've had with myself since reading Bernard Williams's objections to utilitarianism [0].

I've never actually had to face the ethical dilemma of developing weapons, but what if the work improved the precision of a missile? Setting aside the question of whether the missile itself is ethical, a better guidance system would limit collateral damage, but it could also raise the "comfort level" of the people who decide to use the weapon, thereby increasing overall death and destruction. Utilitarianism is hard because taking all factors into account is impossible. Kind of like machine learning.

I will never scoff at someone who turns down work for ethical objections, but some people are more pragmatic than others.

Both of Williams's examples are really hard to wrap your head around if you accept the situations as presented. They are similar to Sophie's Choice [1].

[0]: http://plato.stanford.edu/entries/williams-bernard/#Day [1]: http://en.wikipedia.org/wiki/Sophie%27s_Choice_(novel)




This is a tangent, but the argument seems to ignore the value of propagating the meme that building armament systems is unethical, which is exactly what refusing to participate does.





