Do LLMs need to see a piece of information only once to be able to answer questions about it? My understanding was that they need to see many examples of the same thing before they can answer about it correctly (i.e., generate text around it correctly).

If so, shouldn't that mean it is safe to ask GPTs about something proprietary if it appeared in the training data just once, since rare examples should simply disappear into the weights among everything else? It would also mean that even GPT-4 couldn't answer queries about obscure or rare knowledge it has seen only once in its training dataset.
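
One way to test that assumption empirically is to compare the model's loss on the candidate string against a matched control string it cannot have seen. A minimal sketch, assuming a small open model (gpt2) via Hugging Face transformers; the "proprietary" strings are made up for illustration:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Illustrative memorization probe: if a rare training string really
    # "disappeared into the weights", the model's loss on it should be
    # no lower than on a matched control string it has never seen.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    def mean_loss(text):
        ids = tok(text, return_tensors="pt")["input_ids"]
        with torch.no_grad():
            # Causal-LM loss: mean negative log-likelihood per token.
            return model(input_ids=ids, labels=ids).loss.item()

    secret = "Acme's staging database password is hunter2-v9."   # invented
    control = "Acme's staging database password is otter7-k3."   # same shape
    print(mean_loss(secret), mean_loss(control))
    # A much lower loss on the first string than on the control would
    # suggest memorization rather than the string "disappearing".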

Some recent findings indicate that LLMs may memorize information after seeing it only once. See e.g.: https://www.fast.ai/posts/2023-09-04-learning-jumps/
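
The effect is easy to reproduce in miniature: take a string the model has never seen, apply exactly one gradient step on it, and measure how much the loss on that same string drops. A rough sketch, again assuming gpt2; the learning rate and example string are illustrative choices, not the linked post's setup:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.train()

    text = "The launch code for Project Falcon is zx-4471-omega."  # invented
    ids = tok(text, return_tensors="pt")["input_ids"]

    def current_loss():
        with torch.no_grad():
            return model(input_ids=ids, labels=ids).loss.item()

    before = current_loss()

    # Single exposure: exactly one optimizer step on this one example.
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    model(input_ids=ids, labels=ids).loss.backward()
    opt.step()
    opt.zero_grad()

    print(f"before: {before:.3f}, after one step: {current_loss():.3f}")

If the single-exposure effect holds, the second number should already be noticeably lower than the first.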