
The answer you get in 10 seconds is worthless, though, because you need to know what SQL the ORM is actually generating, not what it might reasonably generate.



You're thinking in too binary a way. It's about getting insights/signals. Life is full of uncertainty; nothing is for sure. To be as successful as you can be, you have to weigh probabilities in your decisions instead of treating everything as either 100% or 0%. Nothing is 100%.


But it is a meaningless signal! It does not tell you anything new about your problem; it is not evidence!

I mean, I could consult my Tarot cards for insight on how to proceed with debugging the problem, and that would not be useless; same for Oblique Strategies. But in this case I already know how to debug the problem, which is to change the logging settings on the ORM so it shows the SQL it actually runs.
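That's ORM-side logging; as a rough equivalent, if the ORM's logging isn't handy, the database's own general query log shows the same statements. A minimal sketch, assuming MySQL and the privileges to change globals (Postgres has log_statement = 'all' for the same idea):

    -- Turn on the general query log and route it to a table.
    SET GLOBAL general_log = 'ON';
    SET GLOBAL log_output = 'TABLE';

    -- Run the ORM code, then look at what was actually executed:
    SELECT event_time, argument
    FROM mysql.general_log
    ORDER BY event_time DESC
    LIMIT 20;

    -- Turn it back off; the general log is too noisy to leave on.
    SET GLOBAL general_log = 'OFF';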


Well, in my experience it does really, really well with SQL and things like that. I've been using it for most of my complicated SQL queries, the kind that used to mean 5-15 minutes of Googling, or even longer, browsing different approaches on Stack Overflow and possibly ending up with something that wasn't even an optimal solution.

But now it's easy to get queries from GPT tailored exactly to my use case. And it's not just SQL queries; it's anything data-querying related, like Google Sheets or Excel formulas. There are so many niche use cases it handles well.

And I work with different SQL implementations like Postgres and MySQL, and it handles the nuances between them well, and there are plenty of nuances between MySQL and Postgres in certain cases. I could never match that productivity on my own.
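One concrete example of that kind of nuance (hypothetical page_views table, purely for illustration) is upsert syntax, which differs between the two:

    -- Assumes page_id is the primary key or has a unique constraint.

    -- MySQL
    INSERT INTO page_views (page_id, views) VALUES (42, 1)
    ON DUPLICATE KEY UPDATE views = views + 1;

    -- PostgreSQL
    INSERT INTO page_views (page_id, views) VALUES (42, 1)
    ON CONFLICT (page_id) DO UPDATE SET views = page_views.views + 1;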

So I trust it to understand SQL fairly well, and I can immediately verify that the query works as I expect; I can also tell intuitively when it's wrong. But in practice I haven't seen it get SQL really wrong; it's always been me writing a bad prompt.

Previously, when I had a more complicated query, the typical experience went like this:

1. I tried to Google examples of what others had done.

2. Found some answers/solutions, but they were missing the one bit I needed, or some part was slightly different and I couldn't extrapolate to my case.

3. I ended up with bad, poorly performing queries and logic because I couldn't figure out how to solve it in SQL, so I made more queries and wrote more code instead (a query of the kind sketched below).
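To give a flavour of what I mean by a more complicated query (hypothetical orders table, purely illustrative), the classic "latest row per group" problem is the kind of thing that used to send me digging through Stack Overflow:

    -- Latest order per customer, using a window function
    -- (works on Postgres and MySQL 8+).
    SELECT id, customer_id, created_at, total
    FROM (
        SELECT o.*,
               ROW_NUMBER() OVER (
                   PARTITION BY customer_id
                   ORDER BY created_at DESC
               ) AS rn
        FROM orders o
    ) ranked
    WHERE rn = 1;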


This is for a performance issue, and the Laravel code base is straightforward to map to SQL. The goal is to get a rough idea of the joins and filters to see whether an index is potentially missing.

This is low-hanging fruit. ChatGPT can do this, and it's also easy to verify that it got it right.
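As a rough sketch of what that check can look like (hypothetical tables, not the actual code base): run the query it produces through EXPLAIN and see whether the join and filter columns are covered by an index:

    -- Hypothetical join + filter of the kind an ORM would generate.
    EXPLAIN
    SELECT o.id, o.total
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE c.country = 'FI'
      AND o.created_at >= '2024-01-01';

    -- If the plan shows a full scan on orders, the usual low-hanging fix:
    CREATE INDEX idx_orders_customer_created
        ON orders (customer_id, created_at);

EXPLAIN output looks different on MySQL and Postgres, but either will show whether the filter is hitting an index or scanning the table.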



