It doesn't explain why the model only refers to the documentation, though.
Basically what they are doing is giving GPT-3 a prompt that includes the (semantically relevant) pieces of the documentation.
But I don't see why a user can't ask about something that is only tangentially relevant ("Who is Bill Gates?") and get an answer that really comes from GPT-3's pre-existing knowledge.
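For context, here's a minimal sketch of the retrieve-then-prompt pattern they're describing. Everything in it is my own illustrative assumption (the chunk text, the `embed`/`build_prompt` helpers, the instruction wording), with a toy bag-of-words retriever standing in for real embeddings:

```python
# Sketch of "retrieve semantically relevant chunks, stuff them in the prompt".
# All names and text here are illustrative, not from the article.
from collections import Counter
import math

DOC_CHUNKS = [
    "To reset your password, open Settings and click 'Reset password'.",
    "Billing runs on the first of each month.",
    "API keys can be rotated from the developer dashboard.",
]

def embed(text: str) -> Counter:
    # Toy embedding: word counts. A real system would use dense vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_prompt(question: str, k: int = 2) -> str:
    q = embed(question)
    top = sorted(DOC_CHUNKS, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
    context = "\n".join(top)
    # The "only refers to the documentation" behavior usually comes from
    # this instruction, a soft constraint rather than a hard one.
    return (
        "Answer using only the context below. If the answer is not in "
        f"the context, say so.\n\nContext:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(build_prompt("How do I reset my password?"))
# The resulting string is what gets sent to GPT-3 as the completion prompt.
```

Note that the "answer only from the context" line is just an instruction in the prompt, which is exactly why a "Who is Bill Gates?" question could still get answered from the model's pre-trained knowledge.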
And a very clever way for these guys to go from "hey, we found this cool problem" to "well, did you notice that it ends up super complex and slow? Well well well, we could make it so much better with... wait for it... a data pipeline!"
("...and don't we just happen to sell data pipeline software! What a coincidence!")
lol.
Great read, thank you for the recommendation.