
> Health issues aren’t addressed with accurate information, they’re addressed by understanding the needs of the individual. Even if GPT-4 could guarantee accuracy when discussing self-harm, that would not necessarily be the right answer from the perspective of ensuring GPT-4 does the most amount of good.

That argument could just as well justify removing most health information from the internet, restricting books on the topic to people with a medical license, etc.

I agree that ideally any chatbot built on top of GPT-4 should do more, like asking further questions, following up in later conversations, etc. And as others have pointed out, GPT itself could suggest safer ways to satisfy the expressed immediate need (ice cubes instead of cutting). But saying "Sorry Dave, I can't do that. Ask someone else." doesn't sound like the right approach.



