Ha, we tried that! Didn't make a noticeable difference in our benchmarks, even though I've heard the same sentiment in a bunch of places. I'm guessing whether this helps or not is task-dependent.
Agreed. I ran a few tests and similarly observed that threats didn't outperform other types of "incentives." I think it might be some sort of urban legend in the community.
Or these prompts might cause wild variation depending on the model, so any study you run is basically obsolete in the near future as the models themselves evolve.
Yeah, the fact that different models might react differently to such tricks makes it hard. We're experimenting with Claude right now and I'm really hoping something like https://github.com/stanfordnlp/dspy can help here.
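For anyone who hasn't looked at it, the idea is roughly this (a minimal sketch, assuming the current dspy API; the Claude model name, the toy trainset, and the metric are just placeholders I made up for illustration):

    import dspy
    from dspy.teleprompt import BootstrapFewShot

    # Model name is just an example; swap in whichever Claude variant you're testing
    dspy.configure(lm=dspy.LM("anthropic/claude-3-5-sonnet-20240620"))

    # Declare the task ("question -> answer") instead of hand-tuning prompt wording
    qa = dspy.ChainOfThought("question -> answer")

    # Tiny toy trainset and metric purely for illustration
    trainset = [
        dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question"),
        dspy.Example(question="Capital of France?", answer="Paris").with_inputs("question"),
    ]

    def metric(example, pred, trace=None):
        return example.answer.lower() in pred.answer.lower()

    # The optimizer searches for good few-shot demonstrations against the metric,
    # instead of us guessing whether threats or $200 tips move the needle
    compiled_qa = BootstrapFewShot(metric=metric).compile(qa, trainset=trainset)
    print(compiled_qa(question="What is 3 + 5?").answer)

The appeal is that you define the task and a metric once, and the optimizer does the prompt fiddling per model, which is exactly the part that seems to shift as models change.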
I wouldn't be surprised to see it help, along with the "you'll get $200 if you answer this right" trick and a bunch of others :) They're definitely worth trying.