Hacker News new | past | comments | ask | show | jobs | submit login

Ha, we tried that! Didn't make a noticeable difference in our benchmarks, even though I've heard the same sentiment in a bunch of places. I'm guessing whether this helps or not is task-dependent.



Agreed. I ran a few tests and observed similarly that threats didn't outperform other types of "incentives" I think it might some sort of urban legend in the community.

Or these prompts might cause wild variations based on the model and any study you do is basically useless for the near future as the models evolve by themselves.


Yeah, the fact that different models might react differently to such tricks makes it hard. We're experimenting with Claude right now and I'm really hoping something like https://github.com/stanfordnlp/dspy can help here.


I hoped it was too good to be just a joke. Still, I will try it on my eval set…


I wouldn't be surprised to see it help, along with the "you'll get $200 if you answer this right" trick and a bunch of others :) They're definitely worth trying.




Consider applying for YC's Spring batch! Applications are open till Feb 11.

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: