I get it: we're a long way from artificial general intelligence, but most of us will agree that it's coming at some point.
Pure legislation in the form of "thou shalt not build AI that could hurt people" will never be universally agreed to or enforced, partly because that kind of global enforcement is impossible and partly because defining "AI that could hurt people" is so hard. Given the job to "keep this room clean" and enough agency to ensure it does that job, your automatic vacuum cleaner could kill you if it discerns that you're what keeps causing the mess.
Does something like Asimov's Laws need to be built in at the chip level? At least there are only a few capable chip makers, and you could police them to ensure no chip was capable of powering an AI that could do harm...
EDIT: I spend so much time thinking about this - why isn't this kind of discussion on the front page of HN regularly?
I think we can make AI that is 'intelligent' but has no personality or 'self': an oracle machine you can ask any question of, but one that isn't an evil genie looking to escape and take over the universe, because it is not a person and has no drives of its own.
Consider how we recently made an AI that can defeat the best humans at Go. Even 10 years ago, this was thought to be impossible for some time to come: "Go is a complicated game, too big to calculate, requiring a mix of strategy and subtlety that machines won't be able to match." Nope.
Now, AlphaGo can defeat the best humans, with a 'subtlety' and 'nuance' that can't be matched. But it is not a person.
We might be able to do the same in other areas.
Note that games like chess and Go are sometimes played as 'cyborg' competitions now, where the human players are allowed to consult with computers. Imagine if the Supreme Court were still headed by the human judges we have today, but they consulted with soulless machines that have no drives of their own, machines that can provide arguments and insight that humans can't match. Imagine if, in addition to the human judges' written opinions, there were a bevy of non-voting opinions 'written' by AIs like this. Or if every court case in the world had automatic amicus briefs provided by incredibly sophisticated legal savants with no personality and no skin in the game.
Note that several moves AlphaGo played were complete surprises. We have thousands of people observing these matches, people who have devoted their whole lives to studying the subtleties of this complex game. There are fewer than 361 choices for where to move next. And AlphaGo plays a move that nobody had seriously considered, but, once played, the experts realize we've lost the match. That is really remarkable.
I think this future (non-person intelligent helpers) is definitely possible. But it doesn't solve the problem of 'evil' humans building an AI that is a person and agrees with their evil beliefs. I don't have an answer for that.