
Not my experience at all. Are you counting the entire answer in your time?

If so, consider adding one of the “just get to the point” prompts. GPT-4’s defaults are geared towards public acceptance through long-windedness, which is imo entirely unnecessary when you're using it for functional things like scp'ing a file.
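
For example, something along these lines with the OpenAI Python client (a rough sketch; the system prompt wording is just one "get to the point" variant, not an official setting):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # One possible "just get to the point" system prompt; wording is illustrative.
    CONCISE = (
        "Answer with the command or code only. "
        "No preamble, no explanation, no caveats unless asked."
    )

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": CONCISE},
            {"role": "user", "content": "scp a local file to /tmp on host example.com"},
        ],
    )
    print(response.choices[0].message.content)
    # e.g. something close to: scp ./file user@example.com:/tmp/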




LOL, it’s not just for “public acceptance”. Look up Chain of Thought. Asking it to get to the point typically reduces the accuracy.


> LOL, it’s not just for “public acceptance”. Look up Chain of Thought. Asking it to get to the point typically reduces the accuracy.

Just trying to provide helpful feedback: this would have been a great comment, except for the "LOL" at the beginning, which was unnecessary and demeaning.


You are being snarky, but you're right. I have scripts set up to auto-summarise expansive answers. I wish I could build this into the ChatGPT UI though.
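
Roughly this shape, if anyone wants to try it (a minimal sketch using the OpenAI Python client, not my exact script):

    import sys
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def summarise(answer: str) -> str:
        """Feed a long answer back through the API and return a short summary."""
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": (
                    "Summarise the following answer in at most three sentences. "
                    "Keep any commands or code verbatim."
                )},
                {"role": "user", "content": answer},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        # e.g. pbpaste | python summarise.py
        print(summarise(sys.stdin.read()))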


I know this is silly, but I've had great success asking chatgpt to summarise chatgpt's answers.


Try the custom instructions feature


The words "briefly" or "without explanation" work well.

By keeping the prompt short, it starts generating output quicker too.
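
If you want to measure that, streaming makes it easy to time the first token (a quick sketch with the OpenAI Python client; the prompt wording is just an example):

    import time
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    start = time.monotonic()
    stream = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Briefly: scp a file to a remote host."}],
        stream=True,
    )

    first_token_at = None
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content if chunk.choices else None
        if delta:
            if first_token_at is None:
                first_token_at = time.monotonic() - start
            parts.append(delta)

    print(f"time to first token: {first_token_at:.2f}s")
    print("".join(parts))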


Yeah, I would say this is a prompting problem and not a model problem. In a product area we're building out right now with GPT-4, our prompt (more or less) tells it to provide exactly 3 values and it does that and only that. It's quite fast.

Also, it's partly a use-case thing: for certain coding use cases, Phind will very likely always be faster because it isn't designed to be general purpose.
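
A rough illustration of that style of constrained prompt (hypothetical task, not our actual product prompt):

    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    prompt = (
        "Suggest exactly 3 one-line commit messages for the diff below. "
        "Respond with a JSON array of 3 strings and nothing else.\n\n"
        "<diff goes here>"
    )

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )

    values = json.loads(response.choices[0].message.content)
    assert len(values) == 3  # in practice the model tends to return just the three values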




