
> I was happy to assume it was good faith until Altman started trying to engage in regulatory capture to protect his competitive moat

I've not seen him do that; I've just seen a lot of people asserting that it's the only thing he could have meant.

What I've seen in his comments, in the original transcripts, is basically "regulate GPT-4 and better, don't bother regulating anything smaller or simpler than that, don't regulate open source models".

> You are correct in saying we should pursue lawfare against OpenAI to make sure such blatant and widescale theft can't happen again. I believe you are incorrect in saying it's a bad idea to produce more models.

That's a contradictory position. Producing more models based on one you characterise as "theft" is making it happen again.

Making new models from the output of their models is one of three ways I can see of doing this, along with a court ordering that their models be published (rather than destroyed), or some other company retraining from scratch on its own crawl of the web. This third option is also why I think anyone using the "moat" metaphor with regard to OpenAI's models needs to stop and think about whether they're acting like a stochastic parrot.




Oh, just to be clear, I have no problems with models built on OSINT. My problem is someone doing that, building a commercial product on it, and then telling others they can't do it to them, while also using the money they get from the endeavor to make the world worse for everyone else.



