
GPT-4 tells me that on a daily basis because of its 2021 data cutoff.



So it never hallucinates APIs, theorems, biographies or places?


Well, I did at one point compare the outputs for biographies of two not-very-well-known people: 3.5 made up half of the details, while 4 just said it didn't know anyone by that name, for both. I don't think I ever tried asking about places or theorems specifically.

As for APIs, I try to always provide adequate context with docs; otherwise it may still occasionally make up a parameter that doesn't exist, or confuse the library with another one of the same name. Half the time it's really my fault for asking for something that just isn't possible, in a last-ditch effort to see if it can be done in some convoluted way. It seems to assume "the customer is always right" even when that contradicts what it knows, I guess. It usually gets things right when what I'm asking for is at least mostly straightforward to implement.
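The doc-as-context practice described above can be sketched roughly like this; the message shape follows the common chat-completion convention, and the `build_prompt` helper, the `connect(...)` doc snippet, and the question are all hypothetical illustrations, not anything from the thread:

```python
def build_prompt(question: str, docs: str) -> list[dict]:
    """Assemble a chat prompt that includes API docs as grounding context,
    so the model answers from real parameters instead of inventing them."""
    return [
        {
            "role": "system",
            # Instruct the model to stay within the pasted docs rather
            # than guess at parameters that might not exist.
            "content": (
                "Answer using only the API documented below. "
                "If the docs don't cover it, say so instead of guessing.\n\n"
                + docs
            ),
        },
        {"role": "user", "content": question},
    ]

# Hypothetical doc snippet for some made-up library
docs = "connect(host: str, port: int = 5432, timeout: float = 30.0)"
messages = build_prompt("How do I set a connection timeout?", docs)
print(messages[0]["content"])
```

The resulting `messages` list would then be handed to whatever chat-completion API you use; the point is just that the real signature travels with the question.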


It's much better at avoiding hallucination than the 3.x generation was.

The first derivative is all that matters. ML gets better over time; we don't.




