I generally use it for boilerplate tasks like “here’s some code, write unit tests” or “here’s a JSON object, write a model class and parser function”.
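(To be concrete, this is roughly the shape of thing I mean; the `User` fields below are invented purely for the example, not from any real project:)

```python
import json
from dataclasses import dataclass
from typing import Optional

# Hypothetical payload shape, just to illustrate the kind of
# boilerplate I ask for: a model class plus a parser function.
@dataclass
class User:
    id: int
    name: str
    email: Optional[str] = None

def parse_user(data: dict) -> User:
    # Tolerate a missing optional field instead of raising KeyError.
    return User(
        id=int(data["id"]),
        name=data["name"],
        email=data.get("email"),
    )

if __name__ == "__main__":
    raw = '{"id": 1, "name": "Ada"}'
    print(parse_user(json.loads(raw)))
```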
Claude is significantly faster, so even if it takes a couple more prompt iterations than GPT4 does, I still get the result I need sooner.
GPT4 also recently developed this annoying tendency to only give you one or two examples of what you asked for, then say “you can write the rest on your own based on this template”. I can’t overstate how annoying this was.
> GPT4 also recently developed this annoying tendency to only give you one or two examples of what you asked for, then say “you can write the rest on your own based on this template”. I can’t overstate how annoying this was.
The last model "update" has really ruined GPT-4 in this regard.
When was this? I noticed ChatGPT becoming succinct almost to the point of being standoffish about a week or two ago. Probably exacerbated by my having some custom instructions to tame its prior prolixity.
About a week ago. I noticed it, tweeted about it, and a bunch of people said that /r/chatGPT and other forums had noticed the same really poor context-awareness around the same time.
It might be better on average, but I don’t think it’s better for every task.
All the others are only going to get better too.