
Dropbox also launched into a market that hadn't yet consolidated around the big players. OP has a cool product, but the comment is right: how wide is their moat here?

The biggest lesson to learn from Dropbox is that if you don't build a sufficiently unique product, larger companies will eat your lunch. Microsoft already has Copilot; if Second ever threatens them, they have full control over the economics of GPT. Their business is built on the goodwill of a direct competitor.



OpenAI has the GPT models, not Microsoft. Also, open-source models like BLOOM are getting better over time; if OpenAI ever pulls the rug, maybe they'll be competitive.


> OpenAI has the GPT models, not Microsoft.

Haha, no. There is no future where OpenAI operates independently of Microsoft. OpenAI is functionally beholden to MS. The 49% stake is a paper contrivance to keep up the facade that OpenAI is still in control. They're not. The tail is wagging the dog, and in this case the tail is a trillion-dollar dragon that OpenAI lowered the drawbridge for and invited inside.


Why do you say that?


The only way an MS decision doesn't go through is if every other shareholder is unanimously against it.

It seems like a stretch to say that would ever happen.


Because it is true.

At this point, Microsoft has partially acquired OpenAI and is extending exclusivity deals to use its offerings.

But the only way to compete against them is with open-source AI models that are good enough substitutes for OpenAI's offerings.

Open-sourcing Stable Diffusion completely ruined the monetisation options for DALL-E 2 and threatened OpenAI to the point where they rushed out ChatGPT.

The same thing happened with GPT-3, and it will happen to ChatGPT too: we'll quickly see an open-source equivalent.


I've been looking around at open-source offerings that can run on "affordable" hardware. Right now you can run something like GPT-J on four RTX 3090s, which will cost you anywhere between $6k and $8k, not including the rest of the PC.
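
For reference, here's a minimal sketch of what that multi-GPU setup looks like, assuming the Hugging Face transformers and accelerate libraries (the model repo name is real; the prompt is just an example):

    # Sketch: shard GPT-J across whatever GPUs are visible.
    # Assumes `pip install transformers accelerate` and CUDA-capable cards.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
    model = AutoModelForCausalLM.from_pretrained(
        "EleutherAI/gpt-j-6B",
        torch_dtype=torch.float16,  # fp16 halves memory vs. fp32
        device_map="auto",          # split layers across available GPUs
    )

    prompt = "Open-source language models are"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(output[0], skip_special_tokens=True))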

Once functional models get small enough, I think we'll see an absolute explosion in AI as anyone with a laptop will be able to freely tinker with their models.


You can run GPT-J on an M1 or a Pi if you want; you'll just have to settle for a pruned model. No self-hosted option (besides maybe Facebook's leaked thing) can stand up to ChatGPT's consistency or size. I've also been playing with this and have had great results for non-interactive use on even the smallest models (125M params, ~2 GB RAM). The problem as I see it won't be inference time/acceleration so much as having enough memory to load the model, and settling for lower-quality answers. ChatGPT is already pretty delusional, and pruning doesn't make a model any smarter. You can practically feel the missing connections in the quality of the responses.
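
To make the small end concrete, a minimal sketch assuming Hugging Face transformers (gpt-neo-125M is just one example of a model at roughly that size):

    # Sketch: run a ~125M-parameter model entirely on CPU (~2 GB RAM).
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="EleutherAI/gpt-neo-125M",
        device=-1,  # -1 = CPU; fine for non-interactive use
    )
    result = generator("Self-hosted language models are", max_new_tokens=40)
    print(result[0]["generated_text"])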

So, the big takeaway: AI text generation is legible and quick with smaller models, but "intelligence" à la ChatGPT seems to scale directly with memory.


Could we use two models at different levels of abstraction? One for "ideas" and one for compressing the ideas into words? I've been thinking that smaller specialized networks might boost space efficiency.

I've no clue how these would be wired together, but I have some ideas; a naive sketch is below.
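
One naive way to wire them up (purely hypothetical: both model names are stand-ins and the prompts are made up) is to have a small "idea" model draft a terse outline and a second model expand it into prose:

    # Hypothetical two-stage sketch: an "idea" model drafts bullet points,
    # then a "prose" model expands them. Both models are placeholders.
    from transformers import pipeline

    idea_model = pipeline("text-generation", model="EleutherAI/gpt-neo-125M")
    prose_model = pipeline("text-generation", model="EleutherAI/gpt-neo-125M")

    topic = "why open-source models matter"
    outline = idea_model(f"Key points on {topic}:\n-",
                         max_new_tokens=30)[0]["generated_text"]
    prose = prose_model(f"Expand these notes into a paragraph:\n{outline}\n\nParagraph:",
                        max_new_tokens=80)[0]["generated_text"]
    print(prose)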


Have you seen the latest Llama and Alpaca models?


Fingers crossed. I don't have much hope, though. It feels like everyone agrees there's plenty of incentive to knock Nvidia's block off with this AI business, but we've yet to see any real competition.

My guess is that OpenAI's margins on ChatGPT are adjustable enough to compete with any SOTA rival. Even if a better model crops up, out-pricing OpenAI will be a struggle; it might even require a hosting partnership with AWS or Azure.
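
As a back-of-envelope illustration (both numbers below are assumptions, not quoted prices): compare a hosted API priced per token against amortizing the ~$7k multi-GPU box mentioned upthread.

    # Back-of-envelope only; both numbers are illustrative assumptions.
    api_price_per_1k_tokens = 0.002  # assumed hosted rate in $/1K tokens
    hardware_cost = 7000.0           # midpoint of the $6k-$8k estimate above
    breakeven_tokens = hardware_cost / api_price_per_1k_tokens * 1000
    print(f"~{breakeven_tokens / 1e9:.1f}B tokens before the hardware alone pays off")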



