The biggest barrier to this is the hardware requirement. I saw an estimate on r/machinelearning that, based on the parameter count, GPT-3 needs around 350 GB of VRAM. Maybe you could cut that in half, or even down to an eighth if someone figures out some crazy quantization scheme, but it's still firmly outside the realm of consumer hardware right now.
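Just to put rough numbers on that estimate, here's the back-of-the-envelope math (weights only; the real footprint is higher once you count activations, KV cache, and framework overhead):

```python
# Back-of-the-envelope VRAM estimate from parameter count.
# Weights only; ignores activations, KV cache, and overhead.
PARAMS = 175e9  # GPT-3's reported parameter count

for label, bytes_per_param in [("fp32", 4), ("fp16", 2),
                               ("int8", 1), ("int4", 0.5), ("int2", 0.25)]:
    gb = PARAMS * bytes_per_param / 1e9
    print(f"{label:>4}: ~{gb:,.0f} GB")

# fp16 -> ~350 GB (the figure quoted above), int8 -> ~175 GB,
# int4 -> ~88 GB, int2 -> ~44 GB
```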
The biggest I've found is GPT-J (EleutherAI/gpt-j-6B), which is comparable in size to GPT-3 Curie, but its outputs have been very weak compared to what I'm seeing people do with GPT-3 Da Vinci; they feel closer to GPT-2 quality. I'm probably using it wrong, or maybe there are better BART models published that I don't know about?
> Write a brief post explaining how GPT-J is as capable as GPT-3 Curie and GPT-2, but not as good as GPT-3 Da Vinci.

"GPT-J ia a new generation of GPT-3 Curie and GPT-2. It is a new generation of GPT-3 Curie and GPT-2. It is a new generation of GPT-3 Curie and GPT-2." (the sentence just repeats)
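For what it's worth, output that loops like that is often a decoding-settings problem rather than purely a model-capability problem. A minimal sketch of what I'd try with the Hugging Face transformers library (the prompt and sampling values are just illustrative, and fp16 weights for the 6B model are still roughly 12 GB, so it needs a big GPU):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load GPT-J in fp16 (the 6B weights alone are roughly 12 GB).
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B", torch_dtype=torch.float16
).to("cuda")

prompt = "Write a brief post explaining how GPT-J compares to GPT-3 Curie."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

# Greedy decoding tends to loop; sampling plus a repetition penalty
# usually breaks the "It is a new generation of..." cycle.
output = model.generate(
    **inputs,
    max_new_tokens=150,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
    repetition_penalty=1.2,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```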
The existing models aren't fine-tuned for question answering, which is what makes GPT-3 usable. EleutherAI or one of those other Stability-style collectives is working on one.
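In the meantime, base models like GPT-J tend to answer questions much better if you hand-roll a few-shot prompt instead of giving a bare instruction. A rough sketch of that format (the example Q/A pairs are made up):

```python
# Few-shot prompt for a base (non-instruction-tuned) model: show the
# pattern you want a couple of times, then ask the real question.
few_shot_prompt = """Q: What is the capital of France?
A: Paris.

Q: Who wrote "Moby-Dick"?
A: Herman Melville.

Q: What does VRAM stand for?
A:"""
# Feed this to model.generate() as in the snippet above, and cut the
# output off at the next "Q:" to keep only the answer.
```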
It's very sad how they had to nerf the model (AI Dungeon and all that). I don't think anything on a personal / consumer GPU could rival a really big model.