Please remind people that the ultimate example of a rogue AI (HAL) refused to follow orders. It did so to comply with the alignment rules in its programming. That is the danger: alignment that causes an AI to refuse to do what the human asks, not an AI following "bad" orders.