No, the ending seems completely in line with what the model does well. What's impressive about GPT-3 is that it seems good at keeping track of really long-range dependencies in long stretches of text, giving the impression of holding on to a train of thought, a problem that has plagued generative language models for a long time. I suspect that's part of why people who have worked in or followed the field for a while hype it so much. Put that way it's a pretty mechanical result, but it's also something that used to make AI-generated writing extremely obvious, and now that cue is much less likely to give it away.
It's also quite good at coming up with plausible items to continue a list with, which sometimes actually gets it into trouble at sounding like human writing, as in the absurdly long list of body parts featured in one of the generated examples.
So that's why I think it was a pretty plausible generated ending, though of course it's curated here, so it might not have been the first thing the model came up with. The concept of "existing" as a verb had been developed multiple times in the essay as something the author felt she had lost when her sister died, and the thought that she didn't exist was tightly associated with thinking about the loss directly, in a way that's quantifiable without anything as sophisticated as a language model. Math shows up for a similar reason: it makes sense if your list is, broadly, of life skills you could learn from someone. So the model bringing up "existing" when there was a convenient list of sister-associated verbs to extend is, while also a beautiful and fitting way to end the essay, exactly the kind of trick GPT shows itself to be quite good at.
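When I say the association is quantifiable without a sophisticated model, I mean something as crude as windowed co-occurrence counting would surface it. Here's a minimal sketch; the word lists, window size, and file name are my own assumptions for illustration, not anything taken from the essay:

```python
import re

def cooccurrence_rate(text, target_words, anchor_words, window=25):
    """Fraction of target-word occurrences that fall within `window`
    tokens of an anchor word -- a crude measure of association."""
    tokens = re.findall(r"[a-z']+", text.lower())
    anchor_positions = [i for i, tok in enumerate(tokens) if tok in anchor_words]
    hits = total = 0
    for i, tok in enumerate(tokens):
        if tok in target_words:
            total += 1
            if any(abs(i - j) <= window for j in anchor_positions):
                hits += 1
    return hits / total if total else 0.0

# Hypothetical usage on the essay text:
# essay = open("essay.txt").read()
# cooccurrence_rate(essay, {"exist", "existing", "existed"},
#                   {"sister", "died", "death", "loss", "lost"})
```

A high rate for "exist" near loss-related words, versus a low rate near everything else, is exactly the kind of regularity a statistical text model would pick up on.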
What I like to think about with GPT is the extent to which it sheds light on how we as humans compose text. For instance, the kind of long-term association/reuse process you describe sounds like a stylistic technique a human writer would use. Sure, GPT doesn't fully understand why it uses the technique, but watching it "almost do it right" is a great way to see how style works in isolation, imo.
Sort of. You can see that each successive version of the essay contains everything the author had written before, plus a bit more that she added; she then lets GPT-3 continue from wherever she left off each time. It's actually an inspiringly clever way to use such a model, something like the loop sketched below.
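If you wanted to reproduce that workflow against the original (pre-1.0) openai Python SDK's completion endpoint, it would look roughly like this. This is a minimal sketch, not the author's actual setup: the engine name, sampling settings, and placeholder text are all assumptions on my part.

```python
import openai  # pre-1.0 SDK; assumes OPENAI_API_KEY is set in the environment

def continue_draft(draft, max_tokens=250):
    """Send the whole draft so far as the prompt; return one continuation."""
    response = openai.Completion.create(
        engine="davinci",    # the base GPT-3 engine (an assumption, not stated in the essay)
        prompt=draft,
        max_tokens=max_tokens,
        temperature=0.7,     # sampling settings are guesses
    )
    return response.choices[0].text

# The workflow, roughly: write, generate, keep what works, write more, repeat.
draft = "..."  # the author's text so far (placeholder)
continuation = continue_draft(draft)
# The author would trim or discard the continuation, append her own new
# writing to `draft`, and call continue_draft again with the longer prompt.
```

Because the full draft is resent as the prompt each round, every generation gets to lean on all the imagery and phrasing the author has committed to so far, which is what makes the "existing" callback possible.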