
> But back to the topic, if one side is using ML to ultimately kill fewer civilians, then this is a bad example against using ML.

Depends on how that ML was trained, and on how well its engineers can explain and understand how its outputs are derived from its inputs. LLMs are notoriously hard to trace and explain.




I don't disagree.


Also, the military is notorious for ignoring requests from scientists, for example the petition not to use the nuclear bomb as a weapon of war.

https://en.m.wikipedia.org/wiki/Szil%C3%A1rd_petition

So the developers may program the AI to be careful, but the military has the final word on whether the AI is set for safety or aggressiveness.



