For example, Star Trek did an episode where an entire civilization's golden-age utopia relied on sacrificing a single child every year. There's no single right answer to that situation; it's an impossible choice. Either you intentionally kill a kid, or you intentionally, if less directly, collapse an entire civilization into war, starvation, and megadeath.
We humans can't agree on what's ethical in situations like these. There are good arguments for "never, ever kill an innocent or you lose your way," and there are good arguments for "killing 10 innocents to save 1,000,000 is the right choice." It's possible an AI will make better choices. It's possible we won't like them. It's possible an AI will make shitty choices. It's possible we'll love those.