Not my experience at all. Are you counting the entire answer in your time?
If so, consider adding one of the “just get to the point” prompts. GPT-4’s defaults have been geared toward public acceptance through long-windedness, which is imo entirely unnecessary when you're using it for functional things like scp-ing a file.
> LOL, it’s not just for “public acceptance”. Look up Chain of Thought. Asking it to get to the point typically reduces the accuracy.
Just trying to provide helpful feedback for you: this would have been a great comment, except for the "LOL" at the beginning, which was unnecessary and demeaning.
Yeah, I would say this is a prompting problem and not a model problem. In a product area we're building out right now with GPT-4, our prompt (more or less) tells it to provide exactly 3 values and it does that and only that. It's quite fast.
Also, it's a use-case thing. It's very likely that for certain coding use cases, Phind will always be faster because it's not designed to be general purpose.
It lets you specify verbosity from 1 to 5 (e.g. "V=1" in the prompt). Sometimes the model will just ignore that, but it actually does work most of the time. I use a verbosity of 1 or 2 when I just want a quick answer.
I've asked it how to scp a file on Windows 11 and it'll take a minute to tell me all the options possible.
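For the record, the whole answer boils down to one line, since Windows 10 and 11 ship with a built-in OpenSSH client (hostname and paths here are placeholders):

```shell
# Copy a local file to a remote host using the OpenSSH scp client bundled with Windows 11
scp C:\path\to\file.txt user@example.com:/remote/path/
```

That single line is the kind of to-the-point answer a low-verbosity prompt should produce, instead of a tour of every flag.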
If this takes 1/5th the time for equivalent questions, I'd consider switching.