OpenAI's classifier currently struggles to identify my past Hacker News comments as human-generated: 19/20 were labeled "unclear" and 1/20 "possibly". The classifier also required a minimum of 1,000 characters, and it's reasonable to assume that anything much shorter would be impossible to classify confidently, regardless of the classifier model.
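For intuition on why the length floor matters, here's a toy simulation, not OpenAI's actual classifier; the per-token signal and noise values are made up. It models a detector whose evidence accumulates token by token, so the separation between the human and AI score distributions grows only like sqrt(n):

```python
import math
import random

# Illustrative sketch (hypothetical numbers, not any real classifier):
# assume every token contributes a small, noisy amount of evidence that
# the text is AI-generated. Summed evidence over n tokens is Gaussian,
# so achievable accuracy scales with SIGNAL * sqrt(n) -- short texts
# simply don't carry enough tokens to separate the two distributions.

SIGNAL = 0.05  # assumed mean per-token evidence (made-up value)
NOISE = 1.0    # assumed per-token standard deviation (made-up value)

def detector_accuracy(n_tokens: int, trials: int = 50_000) -> float:
    """Monte Carlo estimate of a zero-threshold detector's accuracy."""
    correct = 0
    for _ in range(trials):
        # The sum of n iid Gaussians collapses to a single Gaussian draw.
        ai = random.gauss(n_tokens * SIGNAL, NOISE * math.sqrt(n_tokens))
        human = random.gauss(-n_tokens * SIGNAL, NOISE * math.sqrt(n_tokens))
        correct += (ai > 0) + (human <= 0)
    return correct / (2 * trials)

for n in (50, 250, 1000, 4000):  # a short comment up to a long post
    print(f"{n:>5} tokens -> ~{detector_accuracy(n):.0%} detector accuracy")
```

Under these assumptions a 50-token comment barely beats a coin flip while a few thousand tokens approach reliable detection, which is consistent with a hard character minimum.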
OpenAI has shared its intent to watermark the output, but that makes no difference if you paraphrase or reword the text. I don't think Google will be able to filter AI-generated content even if it wanted to; search results still struggle with regular spam. Google will also likely start using AI-generated output in its own rich search results/cards.
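For context on why paraphrasing defeats watermarking, here's a rough sketch of one publicly proposed scheme (a green-list approach in the style of Kirchenbauer et al., not necessarily whatever OpenAI would ship). The sampler is biased toward a pseudorandom "green" half of the vocabulary keyed on the previous token; detection counts green tokens. Everything below is illustrative:

```python
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandom 50/50 split of the vocabulary, keyed on the previous
    token, so the green list changes at every position."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    """Fraction of tokens that land in the green list. A watermarking
    sampler pushes this well above 0.5; ordinary text hovers near 0.5."""
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

text = "the quick brown fox jumps over the lazy dog".split()
print(f"green fraction: {green_fraction(text):.2f}")
```

The detection statistic depends on the exact token sequence: swap in synonyms or reorder clauses and each bigram re-rolls its green/red assignment, so a paraphrase drags the count back to the ~50% baseline.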
Nope. The ChatGPT you get to use is one thing; the ChatGPT being sold to devs is a completely different story. You can train it on books to argue like a person, not like a General Polite and Admiral Know-it-all.
And there's the paradox that you need a certain amount of domain knowledge to be able to recognize false information in the first place. The other parameters of an LLM can be set almost freely with API access for 20 bucks a month.
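For example, with the 0.x-era openai Python library (the interface current at the time of writing), the sampling knobs are plain keyword arguments; the prompts below are just placeholders:

```python
import openai  # pip install openai (0.x-era interface)

openai.api_key = "sk-..."  # placeholder, not a real key

# The web ChatGPT UI fixes these sampling parameters; API callers can
# tune them to push output away from the default polite register.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Argue your position bluntly, like a forum regular."},
        {"role": "user", "content": "Is detecting AI text even feasible?"},
    ],
    temperature=1.2,        # more varied word choice than the default 1.0
    top_p=0.95,             # nucleus sampling cutoff
    presence_penalty=0.6,   # discourage revisiting the same points
    frequency_penalty=0.3,  # discourage repeating the same tokens
)
print(response["choices"][0]["message"]["content"])
```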
That's the "fine tuning". No chance for a LLM to recognize an other LLM. It may work out with a general AI or with an AI that can hook up and check on facts.
https://platform.openai.com/ai-text-classifier