I followed your advice and tried with temperature 0.0, and rather than paste more walls of text I edited what I got. Sorry if that's bad form. I also upped the max tokens to 100 to better compare with the fb can model. Interestingly, while the davinci-003 model's output changed and was still subpar IMHO, the davinci-002 output didn't change at all. I wonder if it's cached internally.
If I am not mistaken, the temperature parameter controls the amount of randomness in the output. A temperature of 0 should always produce the same output, so as far as I know it is not caching.
Yes, T=0 means no randomness, and given sufficient max tokens the output should always be the same (in that case, whether they cache internally is just a matter of hit ratio, I'd guess).
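To make the "T=0 means no randomness" point concrete, here's a minimal sketch of softmax sampling with temperature, assuming the common convention that temperature 0 is treated as greedy decoding (argmax). The function name and logits are illustrative, not any particular API:

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index from logits at the given temperature.

    temperature == 0 is treated as greedy decoding: the highest-logit
    token always wins, so the result is fully deterministic.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax over temperature-scaled logits (max-subtracted for stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index from the resulting distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(logits) - 1

logits = [2.0, 1.0, 0.5]
rng = random.Random(42)

greedy = {sample_token(logits, 0, rng) for _ in range(100)}
sampled = {sample_token(logits, 1.0, rng) for _ in range(100)}
print(greedy)   # always the single argmax token: {0}
print(sampled)  # at T=1, multiple distinct tokens appear
```

Every greedy draw lands on the same token regardless of the RNG, while at T=1 the lower-probability tokens show up too, which matches the thread: identical output at T=0 doesn't imply the model is cached.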