> refactoring larger existing queries to a very different approach
Huh, that's really interesting. I've found LLMs (mostly Claude) to be pretty bad at writing SQL (they love cross joins for some reason), so it's interesting that others are getting good results. What models are you using? Do you do any particular prompt engineering, or anything else different?
I usually start by pasting in a simple (correct, executing) stub query, prefacing it with "this query does <general thing I'm trying to do>", and then go step by step: "Filter for x", "now add a join to this other table <definition>", and so on, building the larger query iteratively.
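Concretely, a session might look like the sketch below. The tables and columns (orders, customers, etc.) are made up for illustration; the point is that each prompt adds one small, checkable change:

    -- Step 1: the stub I paste in, with "this query lists orders per customer"
    SELECT o.customer_id, o.order_id
    FROM orders o;

    -- Step 2: "Filter for shipped orders"
    SELECT o.customer_id, o.order_id
    FROM orders o
    WHERE o.status = 'shipped';

    -- Step 3: "Now add a join to customers (customer_id PK, name, region)"
    SELECT c.name, c.region, o.order_id
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id
    WHERE o.status = 'shipped';

Because every step starts from a query that already runs, it's easy to spot the exact prompt where the model goes off the rails (and to catch it reaching for a cross join).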
Another pattern I've found pretty useful: "This isn't working. Build a small sample dataset to exercise your query, and I'll paste the results back to you so we can both see what might be going wrong."
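The sample dataset usually comes back as inline rows, so both of us are looking at the same data. Something like this (the VALUES-as-table syntax here is Postgres-flavored; other dialects spell it differently, e.g. a UNION ALL of SELECTs):

    -- Throwaway test data as CTEs, so the query under debug runs against known rows
    WITH orders AS (
        SELECT * FROM (VALUES
            (1, 101, 'shipped'),
            (2, 101, 'pending'),
            (3, 202, 'shipped')
        ) AS t(order_id, customer_id, status)
    ),
    customers AS (
        SELECT * FROM (VALUES
            (101, 'Ada',   'EU'),
            (202, 'Grace', 'US')
        ) AS t(customer_id, name, region)
    )
    SELECT c.name, c.region, o.order_id
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id
    WHERE o.status = 'shipped';

I run it, paste the output back, and the model can compare expected vs. actual rows instead of guessing at my schema.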
Basically, I treat it as an intern, not as an oracle of truth.
Huh, that's helpful, thanks! To be fair, I mostly only need it for new dialects, but my successful attempts with other tools seem to follow the same pattern.