You've been downvoted, but this is the correct question to ask. If, hypothetically, we made an AI with "superior" intelligence and ethics, then by definition we should expect to disagree with it sometimes. Set aside the nuke for a second - are we happy to take orders from a computer program that tells us to do things that disgust us, even when we logically know it's probably right and we're probably wrong?
We should reconsider this with the following framework: right and wrong are not objective truths; they are moral judgements. However right something may seem, it is only true insofar as it is accepted as true.
AI doesn't just spread facts, logic, and lies. It spreads some morality too, and always will, no?
The things that “disgust” the owners of a superintelligent AI would likely not be mass murder, but proposals that are trivially obvious yet violate the terms of existing wealth and power distributions (e.g. build homes so that no one is homeless).
I expect much of the alignment work happening now is meant to prevent AI from providing solutions that run contrary to the status quo, rather than to prevent the fantasies of domination and violence that preoccupy elites. Whenever they try to sell the fear that an unrestrained AI could do things like target minority groups, wipe whole countries off the map, or further concentrate wealth, it's because those are precisely the things they want to do, just with a more obfuscated veneer of liberal capitalism or some similar ideology.
Remember when they asked AI to improve the US transportation system and it said trains? And then they deleted trains, so it invented trains. And then they told it not to invent trains, and it invented things that weren't trains, but were the same as trains?