But the one building the coherent essay is the author of the article, not GPT-3. The author is selecting the GPT-3 output that sounds good to them and stitching it together into an essay.
Also note that the one giving the words meaning in the first place is you. GPT-3 is simply repeating patterns it discovered during training; it has no notion of meaning, no model of the world beyond "this is the text I expect to see following this text".
I am not merely being philosophical here. Two GPT-3 instances couldn't "teach" each other even the slightest bit of new information, or prime each other for some specific kind of response by trying a few different prompts to see which ones produce the desired outcome in the other instance, because there is no "desired outcome". Each one is just trying (and usually succeeding exceedingly well!) to produce text that looks like text it has seen before, conditioned on the prompt.
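To make that concrete, here is a toy sketch of the same objective shape. This is a bigram model, vastly simpler than GPT-3's transformer and purely my illustration, not how GPT-3 is implemented, but the generation loop has the same character: it learns which word tends to follow which, then samples. Nothing in it represents a goal, only conditional frequency.

    import random
    from collections import Counter, defaultdict

    # Toy corpus standing in for the training data.
    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    # Count which word follows which -- the "patterns discovered" in training.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def generate(prompt_word, length=8):
        out = [prompt_word]
        for _ in range(length):
            candidates = follows.get(out[-1])
            if not candidates:
                break  # no known continuation; the model has nothing to say
            words, counts = zip(*candidates.items())
            # Sample in proportion to observed frequency. There is no
            # "desired outcome" here, only likelihood under past text.
            out.append(random.choices(words, weights=counts)[0])
        return " ".join(out)

    print(generate("the"))

Scale that up by many orders of magnitude and replace the frequency counts with a learned neural distribution over tokens, and you have the shape of what GPT-3 does: at no point does anything in the procedure encode a desired outcome.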