I think we’re in platform reliance mode for quite a while. I think of all of these SaaS companies as different icons on an abstract AWS console that doesn’t exist yet.
We wouldn’t bat an eye at using S3, EC2, and RDS over a host-your-own setup. The only difference here is that startups are moving faster than incumbents.
FWIW that’s one reason why Steamship (disclaimer: I’m the founder) aggregates all AI services under a single API key and interface. It’s to deal with the insane glue-code hassle of running this stuff on your own.
It doesn't seem to be a major barrier for most (it absolutely is for me). Is there enough of a want/market for tools that integrate with common OSS LLM APIs like oobabooga/Kobold/Novel/vLLM, etc.? One idea I've been toying with is a BYO-model tool with support for these APIs, e.g. for IDE integration, brainstorming, etc. Seems like a good approach to me, but how many people would actually bother standing up these LLM engines with APIs to use it?
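The glue for a BYO-model tool can be thin, since several of these engines (vLLM, and text-generation-webui/KoboldCpp via their OpenAI-compatible modes) can serve an OpenAI-style completions endpoint. A minimal sketch, assuming a local engine on port 8000; the base URL and model name are placeholders for whatever you run:

```python
import json
from urllib import request

# Point this at whatever local engine you stood up (assumption: it
# speaks the OpenAI-compatible /v1/completions protocol).
BASE_URL = "http://localhost:8000/v1"

def build_payload(prompt: str, model: str = "local-model", max_tokens: int = 256) -> dict:
    """Assemble an OpenAI-style completion request body."""
    return {"model": model, "prompt": prompt, "max_tokens": max_tokens}

def complete(prompt: str) -> str:
    """POST the prompt to the local engine and return the first completion."""
    body = json.dumps(build_payload(prompt)).encode()
    req = request.Request(
        f"{BASE_URL}/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]
```

The point is that one configurable base URL covers several engines, so an IDE integration doesn't need per-engine adapters.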
I don't get why I wouldn't just keep an open tab with ChatGPT on specific topics. They will very soon have big enough context windows. Why build another UI, deployment pipes, and all that jazz?
For me, it would be cost and alignment. If I own the software, I can choose whatever alignment suits me, or none at all. And ChatGPT is $20/month (assuming you want GPT4, and I do).
But there's still a good argument for a hybrid solution. Buy GPT4 access through the API and get a native UI to query it. Much cheaper to pay as you go, and someone else is still handling the heavy lifting. But if you want an uncensored model, you're out of luck.
ChatGPT is amazing but ultimately unwieldy for directed, long running relationships more nuanced than general themed chit chat.
We’re in the nascent stages but I think there will probably always be a community of folks who want to add more nuance to the communication, whether it’s reveries that enact a mood or goal, tie-ins to other services, etc.
E.g., imagine wanting your ChatGPT DnD master to also keep some kind of score. It may ultimately be easiest to put a wrapper around a themed GPT window that imposes a predictable way to do that, rather than requiring everyone to figure out how to prompt it correctly.
It's roughly comparable to how you can use a spreadsheet to do a lot of different things, but it improves the UX (with some trade-offs) to have more custom-designed UIs instead of using a spreadsheet directly.
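The DnD score-keeping wrapper above can be sketched in a few lines: the wrapper, not the user, appends a fixed instruction so every reply ends in a machine-readable score line, then parses it back out. The `SCORE:` tag format is an assumption of this sketch, not anything ChatGPT provides natively:

```python
import re

# Hidden instruction the wrapper appends to every player message.
SCORE_INSTRUCTION = (
    "After your reply, add a final line of the form SCORE: <integer> "
    "with the party's current score."
)
SCORE_RE = re.compile(r"^SCORE:\s*(-?\d+)\s*$", re.MULTILINE)

def wrap_prompt(user_message: str) -> str:
    """Combine the player's message with the hidden scoring instruction."""
    return f"{user_message}\n\n{SCORE_INSTRUCTION}"

def parse_reply(reply: str):
    """Split a model reply into (narrative text, score or None)."""
    m = SCORE_RE.search(reply)
    if not m:
        return reply, None
    return reply[:m.start()].rstrip(), int(m.group(1))
```

The user just chats; the wrapper handles the prompt engineering and keeps the running score in its own UI.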
As someone working in a large non-software org, that statement does not hold. Spreadsheets are the underlying infrastructure to a mildly terrifying amount of modern civilization.
Privacy is a big one. ChatGPT (the website) gives OpenAI the right to use your conversations for model training (unless you turn conversation history off, but that feature is rather important to the UX).
Anything going through the API, on the other hand, comes with a commitment not to do this and to purge the history after a month.
Tangentially related, how much would you guys be willing to pay for a company that could deliver a local model implementation that could run on high tier consumer grade hardware at a reduced ability?
I feel like even if there were some severe restrictions (model wasn't open source, DRM, etc.) I'd still be willing to fork out for it.
I think there's tremendous concern among business execs about letting data exfiltrate to any AI SaaS offering. If you could offer an easy-to-use experience minus the cloud, with compliance/logging features, I think you'd find success in industries that are reticent about sharing customer data (e.g., banking, government) but would benefit greatly from the NLP workflows that LLMs enable.