Yeah, sorry, I didn’t mention the important difference, which is what surprised me. One-shot learning in traditional AI is (to use modern terms) one-shot training. The AI doesn’t just do the thing on the first try; it also learns from that inference and does the thing even better on the second try.
I guess attentional context (in-context learning) is almost this, but LLMs don’t update their base model’s weights after a one-shot inference.
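To make the contrast concrete, here's a toy sketch of what "one-shot training" means: the model's own parameters change after absorbing a single example, rather than the example just sitting in a context window. Nearest-centroid classification is my own illustrative choice here, not anything from the discussion.

```python
import numpy as np

class OneShotClassifier:
    """Nearest-centroid classifier that updates itself from each example it sees."""

    def __init__(self):
        self.centroids = {}  # label -> running mean of feature vectors
        self.counts = {}     # label -> number of examples absorbed per label

    def predict(self, x):
        if not self.centroids:
            return None
        # Pick the label whose stored centroid is closest to x.
        return min(self.centroids,
                   key=lambda label: np.linalg.norm(x - self.centroids[label]))

    def learn(self, x, label):
        # Fold the single example into the stored centroid: the model
        # itself changes after one example, unlike a frozen LLM that only
        # holds the example in its attentional context.
        x = np.asarray(x, dtype=float)
        if label not in self.centroids:
            self.centroids[label] = x.copy()
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            self.centroids[label] += (x - self.centroids[label]) / self.counts[label]

clf = OneShotClassifier()
clf.learn([0.0, 0.0], "a")    # one example is enough to shift the model
clf.learn([10.0, 10.0], "b")
print(clf.predict(np.array([1.0, 1.0])))  # → a
```

The point of the sketch is just that `learn` mutates state the next `predict` depends on, which is exactly what a base LLM doesn't do between prompts.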