Part of why I don't use ChatGPT very much for work is that I don't want to feed significant amounts of proprietary code into it. It could be the one thing that actually gets me in trouble at work, and it seems risky regardless. How is it you're comfortable doing so? (Not asking in a judgmental way, just curious. I would like to have an LLM assistant that understood my whole codebase, because I'm stumped on a bug today.)
I'm not doing it right now; I'm more imagining a near-term product designed for this (maybe even with the option to self-host). Current LLMs probably couldn't hold enough context to analyze a whole codebase anyway, just one file at a time (which could still be useful, but limited).