Didn't OpenAI start with these same goals in mind?



Yes, it would be nice to see what organizational roadblocks they're putting in place to avoid an OpenAI repeat. OpenAI took a pretty decent swing at a believable setup, better than I would have expected, and it failed when it was called upon.

I don't want to pre-judge before seeing what they'll come up with, but the announcement doesn't fill me with a lot of hope, given that it already starts from the idea that anything getting in the way of raw research output is useless overhead. That's great until somebody has to make the call that one route to safety isn't going to work and they need to start over with a less-favored approach, sunk costs be damned. Then you're immediately back in monkey-brain land.

Put another way: judging from the announcement alone, I would conclude that the eventual success of the safety portion of the mission depends entirely on everyone hired being in 100% agreement with the founders' principles and values with respect to AI and safety. People around here typically respond to plans like that with "great, but it ain't gonna scale."



