Hacker News

I'm curious, are you all still writing custom prompts regularly?

I was deeply involved in prompt engineering and writing custom prompts because they yielded significantly better results.

However it became tedious especially since each update seemed to alter the way to effectively direct ChatGPT’s attention.

Nowadays I occasionally use a custom ChatGPT but I mostly stick with stock ChatGPT.

I feel the difference in quality has diminished.

The results are sufficiently good and, more importantly, the response time with larger prompts has increased so much that I prefer quicker ‘good enough’ responses over slower superior ones.

It’s easier to ask a follow-up question if the initial result isn’t quite there yet, rather than striving for a perfect response in a single attempt.




I think we need to shift from “prompt engineering” to “prompt vibing”— there is an astonishing lack of actual prompt engineering (eg A/B tests with evaluations) — and it usually isn’t the right frame of mind. People need to develop intuition for chatGPT — and use their theory of mind to consider what chatGPT needs for better performance.
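For what it's worth, a bare-bones version of the kind of A/B evaluation mentioned above doesn't take much code. This is a toy sketch, not anyone's production harness: `call_model` is a stub standing in for a real chat API call, and the dataset and scoring rule are made up for illustration.

```python
from statistics import mean

# Toy prompt A/B harness: score two system-prompt variants on a fixed
# question set. `call_model` is a hypothetical stand-in for a real API call;
# here it deterministically "answers" so the example is self-contained.

def call_model(system_prompt: str, question: str) -> str:
    # A real implementation would call a chat completion endpoint.
    if "step by step" in system_prompt and question == "What is 12 * 12?":
        return "12 * 12 = 144"
    return "I'm not sure."

def evaluate(system_prompt: str, dataset: list[tuple[str, str]]) -> float:
    # Score 1.0 if the expected keyword appears in the answer, else 0.0.
    scores = [1.0 if expected in call_model(system_prompt, q) else 0.0
              for q, expected in dataset]
    return mean(scores)

dataset = [("What is 12 * 12?", "144")]
score_a = evaluate("You are helpful.", dataset)
score_b = evaluate("You are helpful. Think step by step.", dataset)
```

Swap the stub for a real API client and a larger dataset, and you have the minimum viable alternative to vibing.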

Most people can get good with chatGPT if they know how to edit their prompts (it’s basically a hidden feature—and still not available in the app). Also, I recommend a stiff cocktail or a spliff — sobriety is not the best frame of mind for learning to engage with AI.

Obviously I need some controlled experiments to back up that last claim, but our human subjects board is such a pain in the ass about it…


I'm very interested to know more about what "editing prompts" is! And where to find/use it?


Just hover over your last prompt and an edit icon should pop up under the prompt


I may be misrepresenting, as I have used the feature only a couple of times, and not recently.

But, if you edit your prompt (or subsequent prompt), you're creating a branch in the conversation and you can switch between branches.
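The behavior described is consistent with conversations being stored as a tree rather than a flat list. This is only a guess at the data model, not OpenAI's actual implementation, but a minimal sketch looks like:

```python
from dataclasses import dataclass, field

# Hypothetical data model: editing a message forks a sibling branch under
# the same parent, so branches share all history above the edit point.

@dataclass
class Node:
    text: str
    parent: "Node | None" = None
    children: list["Node"] = field(default_factory=list)

    def reply(self, text: str) -> "Node":
        child = Node(text, parent=self)
        self.children.append(child)
        return child

    def edit(self, new_text: str) -> "Node":
        # An edit creates a new branch, not an in-place change.
        assert self.parent is not None, "cannot edit the root"
        return self.parent.reply(new_text)

root = Node("system")
first = root.reply("Summarize this article.")
edited = first.edit("Summarize this article in one sentence.")
# root now has two children: the original prompt and the edited variant.
```

The "< 2/2 >" switcher in the UI would then just be selecting among a node's children.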


This will be fun to play around with. Thank you!


I have the same experience. I'm sure they are constantly finetuning the model on real user chats, and it is starting to understand low-effort "on the go" prompts better and better.


Interesting that you see a slower response time with a large input - I don't see any speed degradation at all. Is that maybe just on the free tier of ChatGPT?


I'm on paid (rich, I know) and the performance is all over the place. Sometimes it'll spit out a whole paragraph almost instantly and other times it's like I'm back to my 2400bps modem.

I haven't noticed prompt size having an impact but I'll test that.


This reflects my experience. Sometimes I'll provide a single sentence (to GPT-4 with the largest context window) and it will slowly type out 3 or so words every 5 seconds, and in other cases I'll give it a massive prompt and it returns data extremely fast. This is also true of smaller context window models. There seems to be no way to predict the performance.


Oh hey... keep an eye on your CPU load. The problem might be on the near end. In my case, on a slower machine, it slows down if you're dealing with a very long chat.

(DO report this as a bug if so)


I think that's not the issue here but I do notice the browser going crazy after a while of chatting with ChatGPT. The tab seems to consume a baseline CPU while doing nothing. I just brush it off and close it... bad JavaScript maybe. I should look into this and report as a bug, thanks for the advice.


This is basically how I respond to requests myself. Sometimes a single short sentence will cause me to slowly spit out a few words. Other times I can respond instantly to paragraphs of technical information with high accuracy and detailed explanations. There seems to be no way to predict my performance.


Early on, I noticed that if I ask ChatGPT a unique question that might not have been asked before, it'll spit out a response slowly, but repeating the same question would result in a much quicker response.

Is it possible that you have a caching system too so that you are able to respond instantly with paragraphs of technical information to some types of requests that you have seen before?


Yes, search for LLM caching and semantic searches. They must be using something like that.
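Whether OpenAI actually does this is speculation, but the general idea of a semantic cache is simple: embed the incoming prompt, and if a previously answered prompt is close enough in embedding space, return its cached answer instead of running the model. A toy sketch with pretend embedding vectors (a real system would use an embedding model and a vector index):

```python
import math

# Toy semantic cache keyed on cosine similarity of embedding vectors.
# The vectors here are made up; a real system would embed the prompt text.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

class SemanticCache:
    def __init__(self, threshold=0.95):
        self.entries = []  # list of (embedding, answer) pairs
        self.threshold = threshold

    def lookup(self, embedding):
        for cached_emb, answer in self.entries:
            if cosine(cached_emb, embedding) >= self.threshold:
                return answer  # cache hit: skip the model entirely
        return None

    def store(self, embedding, answer):
        self.entries.append((embedding, answer))

cache = SemanticCache()
cache.store([1.0, 0.0, 0.2], "Paris is the capital of France.")
hit = cache.lookup([0.98, 0.02, 0.21])   # near-duplicate question
miss = cache.lookup([0.0, 1.0, 0.0])     # unrelated question
```

A hit returns instantly regardless of answer length, which would explain the repeated-question speedup.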


I cannot tell if this comment was made in jest or in earnest.

As far as I understand, the earlier GPT generations required a fixed amount of compute per token inferred.

But given the tremendous load on their systems, I wouldn’t be surprised if OpenAI is playing games with running a smaller model when they predict they can get away with it. (Is there evidence for this?)
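The "fixed compute per token" point can be made concrete with a back-of-envelope estimate. All numbers below are illustrative, not OpenAI's; the common rule of thumb is roughly 2 FLOPs per model parameter per generated token, and this ignores memory bandwidth, batching, and the attention/KV cost that grows with context length.

```python
# Back-of-envelope decoding throughput, using the ~2 FLOPs per parameter
# per token rule of thumb. All figures are illustrative assumptions.

params = 175e9               # GPT-3-scale parameter count (assumed)
flops_per_token = 2 * params # ~350 GFLOPs per generated token

hardware_flops = 312e12      # one A100's peak BF16 throughput (illustrative)
tokens_per_sec = hardware_flops / flops_per_token
```

On these assumptions throughput per token is constant, so visible speed swings would have to come from load, batching, routing, or a different model, not from the math itself.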


I'm guessing there are so many other impacts on the model that the size of the prompt probably gets lost. I can see a future where people are forecasting updates to ChatGPT like we do with the weather.


Yeah. It has so many moving parts that I doubt anyone can make a science out of it, but people will try for sure. Just like with most psychology/social experiments and SEO. I'm flooded with prompt engineering course spam these days.


I typically notice the character by character issue with complex prompts centered around programming or logic. It feels kind of like the model is thinking, but my guess is that the prompt is being dispatched to an expert model that is larger and slower.


If you mean the “analyzing” behavior, the indicator can be clicked on to show what it’s doing. It’s still going character-by-character, but writing code that it executes (or attempts to) to get the contents of a file, the solution for an equation, etc. Possibly an expert model but it seems like it’s just using an “expert prompt” or whatever you want to call it.


Interesting, no, I'm on the pro tier as well. So you're telling me you never get the character-by-character experience?

Edit: What prompt sizes are we talking about?

Even with small prompts I occasionally get rather slow responses but it becomes unbearable at 2000-3000 characters (the upper limit of custom instructions), at least for me it does.


Canceled my account after they made it impossible to stop ChatGPT-4 from reaching out to Bing.


> for this thread, let's make a key rule: do not use your browsing tool (internet) to get info that is not included in your corpus. we only want to use info in corpus up until dec 2023. if you feel you need to use browsing to answer a question, instead just state that the required info is beyond your scope. the only exception would be an explicit request to use the browsing tool -- ok?


That doesn't mean it follows that instruction. And even if it does today, that doesn't mean it will tomorrow.


"As I craft this prompt, I am mindful to stay within the bounds of your extensive training and knowledge as of April 2023. My inquiry does not seek current events or real-time updates but rather delves into the wealth of information and creative potential you possess. I am not inquiring about highly specific, localized, or recent events. Instead, I am interested in exploring topics rooted in historical, scientific, literary, or hypothetical realms. Whether it is a question about general knowledge, a creative scenario, a theoretical discussion, or technical explanations in fields like science, technology, or the arts, I trust in your ability to provide insightful and comprehensive responses based solely on the information you've been trained on."

Tried this prompt, given to me by ChatGPT-4, and it went out to Bing on my first attempt. So yeah. No.


You can use their "ChatGPT Classic" GPT for that - or build your own, I made one called "Just GPT-4".


If it's discoverable in app, someone please fill in the details. But googling for the "Classic ChatGPT" leads to this link[1] which I have no way to verify was actually created by OpenAI and is described as "The latest version of GPT-4 with no additional capabilities."

So, buyer beware, but posting this link in case it does help someone.

[1] https://chat.openai.com/g/g-YyyyMT9XH-chatgpt-classic


Yess, that's it - you can confirm it's "official" by browsing for it in the GPT directory, screenshot here: https://gist.github.com/simonw/dc9757fc8f8382414677badfefc43...


Thanks, but I'm still going to pass.


>I was deeply involved in prompt engineering and writing custom prompts because they yielded significantly better results.

No, no you weren't. Prompt engineering never was, is not currently, and never will be, a thing.


The term has become a staple in the vocabulary of LLM users/enthusiasts.

Would you prefer if I used ‘iterative prompt design’ potentially leaving people confused about what exactly I meant?


In what world is this type of response ideal?


Who cares? If you have a comment to make, at least back it up with something interesting.


That's a pretty absolute statement


Why comment this?


Oooh, got 'em, big guy. /s



