This implementation sounds similar to something Ilya Sutskever said a few months ago, though I may be misunderstanding both. I think the idea is that robots could learn how to move, and which facial expressions to use, by watching millions of hours of video of humans, a sort of LLM of human behavior. I'm not a scientist, so I may have this wrong.


Not that controversial. You just need to map it to the controls correctly. The experience of others can show what a human would do; there still needs to be a layer that figures out how to achieve that outcome with whatever tools are on hand.
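
For a rough sense of what that mapping layer could look like, here's a minimal behavior-cloning sketch in PyTorch. Everything in it (the network shapes, the retarget head, the fake training batch) is a hypothetical illustration under the assumption of supervised (video clip -> robot action) pairs, not anyone's actual pipeline:

    import torch
    import torch.nn as nn

    class VideoToControl(nn.Module):
        # Hypothetical: encode a short clip of a human acting, then
        # "retarget" the inferred behavior to this robot's joints.
        def __init__(self, n_joints=7, latent=256):
            super().__init__()
            # Toy encoder: flatten the frames and project to a latent
            # "what is the human doing" vector. A real system would use
            # a pretrained video model here instead.
            self.encoder = nn.Sequential(
                nn.Flatten(),
                nn.Linear(8 * 3 * 64 * 64, latent),
                nn.ReLU(),
            )
            # The mapping layer: latent human behavior -> the commands
            # this particular robot's hardware understands.
            self.retarget_head = nn.Linear(latent, n_joints)

        def forward(self, clip):
            return self.retarget_head(self.encoder(clip))

    model = VideoToControl()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    # Fake batch: 4 clips of 8 RGB frames at 64x64, with target joint
    # commands (in practice these would come from teleoperation or
    # retargeted human pose data).
    clips = torch.randn(4, 8, 3, 64, 64)
    target_actions = torch.randn(4, 7)

    pred = model(clips)
    loss = nn.functional.mse_loss(pred, target_actions)
    loss.backward()
    opt.step()

The point is just that the video encoder and the per-robot control mapping are separate pieces, so the same learned representation of human behavior could in principle drive different hardware by swapping the retarget head.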



