
> It doesn't _want_ anything, but humans want to anthropomorphise it.

I fully agree with you on anthropomorphization, but it's the humans who will deploy it into positions of power that I am worried about: ChatGPT may not want anything, but being autocomplete-on-steroids, it produces its best approximation of a human, and that fiction may end up exhibiting some very human characteristics[1] (PRNG + weights from the training data). I don't think there can ever be enough guardrails to completely stamp out the human fallibility that seeps into the model from the training data.

A system is what it does: it doesn't need to really feel jealousy, rage, pettiness, grudges, or guilt in order to exhibit a simulacrum of those behaviors. The bright side is that it will be humans who will (or will not) put AI systems in positions to give effect to their dictates; the downside is that I strongly suspect humans (and companies) will do so to make a bit more money.

1. Never mind hallucinations, which I guess are the fictional human dreamed up by the machine having mini psychotic breaks. It sounds very Lovecraftian, with AI standing in for the Old Ones.