
This is awesome! What kind of bias would we expect to see in the training set?


For example, if it consumes lots of data that uses the pronoun "he" for doctors, it's much more likely to spit out "he" in a medical context.
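To make that concrete, here's a rough sketch (the toy corpus and function names are mine, not anything from the model being discussed) of how skewed pronoun/occupation co-occurrence counts in training text look; a model fit on data like this will tend to reproduce those frequencies at generation time:

    # Toy illustration: count gendered pronouns in sentences that mention an occupation.
    from collections import Counter

    corpus = [
        "The doctor said he would review the chart.",
        "The doctor finished his rounds before noon.",
        "The nurse said she would check on the patient.",
        "The doctor noted that she updated the prescription.",
    ]

    pronouns = {"he", "his", "him", "she", "her", "hers"}

    def pronoun_counts(sentences, occupation):
        """Count gendered pronouns appearing alongside the occupation word."""
        counts = Counter()
        for sentence in sentences:
            words = [w.strip(".,").lower() for w in sentence.split()]
            if occupation in words:
                counts.update(w for w in words if w in pronouns)
        return counts

    print(pronoun_counts(corpus, "doctor"))  # e.g. Counter({'he': 1, 'his': 1, 'she': 1})

If the real training set skews heavily toward "he" next to "doctor", the model's output will skew the same way.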

Since the model lacks world knowledge about new words, it also sometimes ascribes a word to a specific cultural origin or group that is completely wrong.




