Hacker News

...and that's just GPT-2_medium.



I actually thought snazz had forgotten to include the generated snippet. However, I didn't understand the reasoning in the second generated paragraph and stopped reading. At that point, I still thought they were human (and wrong).

I suppose my question is why it matters who wrote it. I've always been taught that an argument should be judged on its own merits, and from that perspective nothing changes.


Well, if the blog post actually had been written by GPT-2, its very existence would be a counterexample to one of its main claims, that GPT-2 isn’t really good enough to generate convincing long-form nonfiction.

Also, that part of the argument is not purely a priori; it rests on a range of factual claims about specific limitations of GPT-2. If the post had been written by GPT-2, those claims would probably be false, since GPT-2 is not designed to differentiate truth from plausible-sounding fiction, and false claims would invalidate the whole argument. Assuming a human author, on the other hand, the claims are probably true. They could be false if the author were either misinformed or lying, but those possibilities are subjectively unlikely.


> Assuming a human author, on the other hand, the claims are probably true.

Maybe this is where we differ? I don't agree. People are mistaken all the time. Your hypothetical even assumes an untrustworthy human is directing the algorithm.

To me, the most convincing point in favour of the truth of the claims in the article is that nobody has contested them. The claims appear to be easily falsifiable, so the more scrutiny they withstand, the more trust they deserve.


Funny, your question is kinda addressed by the generated text itself:

'..."what it would feel like to be" a human being is no more human than a machine trying to "feel" what it is like.'



