Do LLMs only need to see a piece of information once to be able to answer questions about it? My understanding was that they need to see many examples of the same thing to answer about it correctly (i.e., to generate text about it correctly).
If so, shouldn't that mean it is safe to ask GPTs about something proprietary that appeared only once in training, since such rare examples should just get washed out in the weights by everything else? And wouldn't it also mean that even GPT4 can't answer queries about obscure or rare knowledge it has seen in its training dataset?