Ditto, this seems much more coherent than previous GPT-3 output. Have people gotten better at prompting / selecting output? Or is this a fake GPT-3 output written by a human?
I have been trying out different prompts for a week now, and this seems plausibly written by GPT-3. There's quite a bit of luck involved, since submitting the same prompt can produce a different response each time. Some prompts produce garbage, and even good prompts produce a range of outputs from lame to amazing. There's quite a bit of cherry-picking and selection bias involved: no one publishes the uninteresting responses, and you don't read or comment on them. Still, I think this is all quite amazing, and it seems close to ready for commercialization.
Quick, someone train an AI on labeled GPT-3 outputs. They can be labeled as "good" "ok" "bad" "bad grammar" "convincing argument" etc...
There's a website, scribophile.com, for crowdsourced literary criticism. We should make something similar but for AI training. The key difference is that the critique of AI output would be structured (labels applied to words, sentences, or paragraphs that you highlight) rather than free-form text.
Another website, more structured than scribophile but for crowdsourced photo critiques, is photofeeler.com. One interesting thing they do: if someone is consistently "harsh" or "lenient" on photos, as measured by their deviation from the average rating, their feedback is adjusted accordingly. My email is in my bio for collabs.
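Photofeeler's actual algorithm isn't public, but the basic idea can be sketched as a simple bias correction: shift each rater's scores by the difference between their personal average and the pool average (one could also rescale by each rater's spread, per the standard-deviation idea above). The function name and data shape here are made up for illustration.

```python
from statistics import mean

def adjusted_ratings(ratings_by_rater):
    """Correct for harsh/lenient raters by shifting each rater's
    scores toward the pool average.

    ratings_by_rater: dict mapping rater name -> list of raw scores.
    Returns a dict mapping rater name -> list of adjusted scores.
    Illustrative sketch only; photofeeler's real method is not public.
    """
    all_scores = [s for scores in ratings_by_rater.values() for s in scores]
    pool_mean = mean(all_scores)
    adjusted = {}
    for rater, scores in ratings_by_rater.items():
        # A consistently harsh rater has a mean below the pool's,
        # so their offset is negative and their scores get shifted up.
        offset = mean(scores) - pool_mean
        adjusted[rater] = [s - offset for s in scores]
    return adjusted
```

After adjustment, a harsh rater's 3/10 and a lenient rater's 8/10 can end up expressing the same relative opinion.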
Note: No one should dare submit GPT-3 content to scribophile. It is a beautiful, sacred, and fragile place for humans only.