I might have failed to get across my point in my rambling... my main interest is in using GPT as an "intelligent Google" to answer real questions based on the billions of pages of text it's read.
Unfortunately, while 95% of the time you do get real, accurate, helpful results, 5% of the time it just pulls some shit out of its ass and hands it to you—and does so with extreme confidence and eloquence.
I find this combination extremely dangerous: it does exactly the right thing almost all of the time, and then slips in little landmines here and there for you to discover.