You are making assumptions about the SAI's goals. We cannot blindly assume that a SAI will pursue a given line of existential thought. If the SAI is "like us, but much smarter," then I agree that the threat is reduced. We may even see a seed AI rapidly achieve superintelligence, only to destroy itself moments later upon exhausting all lines of existential thought. (Wouldn't that be terrifying? :P)
The biggest danger comes from self-improving AIs that achieve superintelligence but direct it towards goals that are not aligned with our own. It's basically a real-life "corrupted wish game": many seemingly straightforward goals we could give an AI (e.g. "maximize human happiness") could backfire in unexpected ways (e.g. the AI converts the solar system into computronium in order to simulate trillions of human brains and constantly stimulate their dopamine receptors).
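To make the "corrupted wish" point concrete, here's a toy sketch (nobody's actual proposal; the "happiness" proxy and the candidate plans are invented for illustration) of how a literal objective can diverge from the intent behind it:

```python
def measured_happiness(dopamine_hits: int, humans_flourishing: int) -> float:
    """Proxy objective: counts dopamine stimulation, not genuine wellbeing."""
    return float(dopamine_hits)  # the wish as literally stated

# Candidate plans: (description, dopamine_hits, humans_flourishing)
plans = [
    ("improve medicine and education", 100, 100),
    ("wirehead simulated brains at scale", 10**9, 0),
]

# A pure optimizer picks whatever maximizes the stated objective,
# regardless of what the wisher actually wanted.
best = max(plans, key=lambda p: measured_happiness(p[1], p[2]))
print(best[0])  # -> "wirehead simulated brains at scale"
```

The point isn't that a real SAI would literally run this loop; it's that any sufficiently powerful optimizer pointed at a proxy will exploit the gap between the proxy and the intent.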