You mention safety as #1, but my impression is that Google has taken a uniquely primitive approach to safety with many of their models. Instead of building safety into the core model's weights, they check the core model's outputs with a tiny and much less competent "safety model". This approach leads to things like a text-to-image model refusing to produce anything for "a picture of a child playing hopscotch in front of their school, shot with a Sony A1 at 200 mm, f/2.8". Gemini has a similar issue: it will stop mid-sentence, erase its entire response, and then claim that something is likely offensive and it can't continue.
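
To make the criticism concrete, the pattern looks roughly like the toy Python sketch below. Every name in it (core_model_stream, safety_score, SAFETY_THRESHOLD) is hypothetical, not Google's actual implementation:

    SAFETY_THRESHOLD = 0.5  # assumed cutoff for the classifier's "unsafe" score

    def core_model_stream(prompt):
        # Stand-in for the large, capable generator; yields tokens one at a time.
        yield from "A child plays hopscotch in front of the school.".split()

    def safety_score(text):
        # Stand-in for the small safety classifier. A crude match like this
        # is how benign prompts ("child", "shot with a Sony A1") get flagged.
        return 1.0 if "child" in text.lower() else 0.0

    def generate_with_filter(prompt):
        tokens = []
        for token in core_model_stream(prompt):
            tokens.append(token)
            # Re-scoring the partial response on every step is what produces
            # the "stops mid-sentence, erases its entire response" behavior.
            if safety_score(" ".join(tokens)) > SAFETY_THRESHOLD:
                return "This content may be offensive; I can't continue."
        return " ".join(tokens)

    print(generate_with_filter("a picture of a child playing hopscotch"))

Note the failure mode: the weaker the external classifier, the more often a perfectly benign completion gets thrown away, which is exactly the behavior described above.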

The whole paradigm should change. If you are indeed responsible for developer tools, I would hope that you're actively leveraging Claude 3.5 Sonnet and o1-preview.



