Unfortunately, LLMs are still prone to making facts up, and very persuasively. In fact, most of the non-trivial topics I've tried have required double- or triple-checking, so it's sometimes not really productive to use ChatGPT.
You are correct that I made an error in my previous response.
I apologize for the confusion I may have caused in my previous response.
I appreciate you bringing this to my attention.
I apologize, thank you for your attention to detail!
I asked it to explain how to use a certain Vue feature the other day that wasn't working the way I hoped. It explained it incorrectly, and when I drilled down, it started using React syntax disguised with Vue keywords. I definitely could have tried harder to get it to figure out what was going on, but it kept repeating its mistakes even when I pointed them out explicitly.
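To give a sense of what "React syntax disguised with Vue keywords" looks like in practice, here's a representative (made-up) example, not the exact code from that session:

```ts
import { ref } from 'vue'

// The kind of thing the model produces: React's useState-style
// destructuring, dressed up with Vue's `ref`. This doesn't work;
// `ref` returns a single reactive object, not a [value, setter] pair.
// const [count, setCount] = ref(0)
// setCount(count + 1)

// Idiomatic Vue 3: one ref object, read and write through `.value`.
const count = ref(0)
count.value++
```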
Part of why I don't use ChatGPT very much for work is that I don't want to feed significant amounts of proprietary code into it. It could be the one thing that actually gets me in trouble at work, and it seems risky regardless. How is it that you're comfortable doing so? (Not asking in a judgmental way, just curious. I would like to have an LLM assistant that understood my whole codebase, because I'm stumped on a bug today.)
I'm not doing it right now; I'm more imagining a near-term product designed for this (maybe even with the option to self-host). Current LLMs probably couldn't hold enough context to analyze a whole codebase anyway, just one file at a time (which could still be useful, if more limited).
- Explanation/research ("how does this work?")
- Code analysis ("tell me if you think you see any bugs, refactoring suggestions, etc. in this sprawling legacy codebase"; a rough sketch of what that might look like follows below)
Things that feed into the developer's thought process instead of crudely trying to execute on what it thinks you want.
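To make the "one file at a time" code-analysis idea concrete, here's a rough sketch of the shape I'm imagining, pointed at a self-hosted, OpenAI-compatible endpoint so proprietary code never leaves the network; the endpoint URL, model name, and file path are all placeholders:

```ts
import { readFile } from 'node:fs/promises'
import OpenAI from 'openai'

// Sketch only: a self-hosted, OpenAI-compatible server (e.g. vLLM or
// llama.cpp's server) keeps the code in-house. URL and model are placeholders.
const client = new OpenAI({
  baseURL: 'http://localhost:8000/v1',
  apiKey: 'not-needed-for-local',
})

async function reviewFile(path: string): Promise<void> {
  // One file at a time, within the model's context window.
  const source = await readFile(path, 'utf8')
  const completion = await client.chat.completions.create({
    model: 'local-code-model',
    messages: [
      {
        role: 'system',
        content:
          'Review this file. Point out likely bugs and refactoring ' +
          'suggestions; do not rewrite the whole file.',
      },
      { role: 'user', content: source },
    ],
  })
  console.log(completion.choices[0].message.content)
}

// Hypothetical path into the legacy codebase.
reviewFile('src/legacy/billing.ts')
```

The explanation/research case would look the same, just with a different system prompt.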