GPT-3 is the first language model where you need to read for a few minutes to tell fake from real. I used to play with LSTM language models, and they barely made sense ten words at a time; the difference is a huge leap. My wish list for GPT-4:

- Multimodal: learn from text, images, video, and other modalities.
- Multitask: cultivate as many of its skills as possible; maybe it learns to combine skills in new ways.
- Longer sequences and additional memory, maybe even the ability to use search/retrieval for augmentation.
- Recursive calls: the ability to call itself in a loop and solve sub-problems.

And I hope that in the future a pre-trained GPT-n variant will be a standard chip on the motherboard.
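The "recursive calls" wish can be pictured as a simple control loop: ask the model, and if it decides the problem is too big, split it and recurse on the pieces. This is a minimal toy sketch of that loop; `toy_model` and `solve` are hypothetical names, and the stand-in "model" just adds numbers so the example runs on its own, with no real language model involved.

```python
# Toy sketch of recursive self-calls: the "model" either answers a problem
# directly or asks to decompose it into sub-problems, which we recurse on.

def toy_model(numbers):
    """Hypothetical stand-in for a model call: answers directly only for
    small inputs, otherwise asks the caller to split the problem in half."""
    if len(numbers) <= 2:
        return {"action": "answer", "value": sum(numbers)}
    mid = len(numbers) // 2
    return {"action": "decompose", "subproblems": [numbers[:mid], numbers[mid:]]}

def solve(problem):
    """Outer loop: call the model, recursing whenever it asks to decompose."""
    step = toy_model(problem)
    if step["action"] == "answer":
        return step["value"]
    # Solve each sub-problem recursively, then combine the partial answers.
    return sum(solve(sub) for sub in step["subproblems"])

print(solve([1, 2, 3, 4, 5, 6, 7, 8]))  # → 36
```

In a real system the combine step would itself be another model call rather than a fixed `sum`, but the shape of the loop is the same.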