Hacker News

You don't need to use an open source LLM for the approach I described. You can still send the prompts to OpenAI's GPT-4 or any other LLM which is available as a service.



What other LLM competes with GPT-4 Turbo (+ V)? At best you're hedging against Anthropic releasing a "Claude 2 Turbo (+ V)": is complicating your setup to that degree, versus the "zero effort" option, worth it for that?

If things change down the line the fact you invested 5 minutes into writing a prompt isn't going to be such a huge loss anyways; there's absolutely no reason to roll your own here.


> If things change down the line the fact you invested 5 minutes into writing a prompt isn't going to be such a huge loss anyways

If things change down the road such that your tool (or a major potential downstream market for your tool) falls outside OpenAI's usage policies, then the few developer-weeks you invested in combining existing open source tooling will be a win. That investment lets you run your workload either against OpenAI's models or against a toolchain of one or more open source models (including a multimodal toolchain tied into image generation models, if that's your thing) with RAG, etc.
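In practice, keeping that optionality can be cheap, because several open source serving stacks (e.g. vLLM and the llama.cpp server) expose OpenAI-compatible chat endpoints, so switching backends is mostly a config change rather than a rewrite. A minimal sketch of that idea, where the URLs, model names, and helper names are illustrative assumptions, not anything from this thread:

```python
from dataclasses import dataclass, field

@dataclass
class LLMBackend:
    """Any OpenAI-compatible chat endpoint: api.openai.com, or a
    self-hosted server (vLLM, llama.cpp) serving the same API shape."""
    base_url: str
    model: str
    api_key: str = ""  # hosted services need one; a local server may not

def build_request(backend: LLMBackend, prompt: str) -> dict:
    """Build the HTTP request for a chat completion. The same payload
    shape works against hosted and self-hosted backends, which is what
    makes swapping providers a one-line config change."""
    headers = {"Content-Type": "application/json"}
    if backend.api_key:
        headers["Authorization"] = f"Bearer {backend.api_key}"
    return {
        "url": f"{backend.base_url}/v1/chat/completions",
        "headers": headers,
        "body": {
            "model": backend.model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# Two interchangeable configs (names/URLs are placeholders):
hosted = LLMBackend("https://api.openai.com", "gpt-4-turbo", api_key="sk-...")
local = LLMBackend("http://localhost:8000", "mistral-7b-instruct")
```

The prompt-writing effort transfers unchanged; only the `LLMBackend` config differs per provider.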

If it doesn't, maybe it's a wasted-effort loss, but there are lots of other ways it could be a win, too.


Let's go back to my very first comment.

> If you want to be independent for academic/personal reasons, sure you can.

If your goal is to be in business, or to get some sort of reach, or anything other than "have fun tinkering"... wasting developer weeks on "could be a win" is how to fail.





