On the other hand, Google has been on the record saying they won't develop military AI, that an autonomous weapons race is dangerous and unwanted, and that they want to organize the world's information to keep it accessible to anyone.
Microsoft employees knew what they signed up for and have the moral authority to make that decision. Google employees thought they signed up to "do no evil", only to have their efforts used by a secret project that violates all but one of their safe AI guidelines (the one about delivering technical excellence).
I myself am a bit more agnostic/apathetic about creating technology that could be used for bad: I just want to focus on creating the best possible version of whatever it is (gun turret, data mining, missile tech), something that delivers on what it set out to do with the utmost accuracy and robustness. That's why I find it really important to work for companies and governments that I can trust not to abuse my technology once it is out of my hands, and that are transparent about its usage. But I don't feel any damage to my soul if the decision makers choose an evil use: that is fully on them. I am not going to handicap myself or refuse to work on something interesting because their morals are out of whack.