
It uses RAG, so your whole repo is indexed locally, and only the relevant chunks are put into the context window to get results. In that sense it's no different from any other LLM solution, and the models you use are pluggable. You can even provide your own OpenAI/Anthropic/etc. API keys.
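The index-locally-then-retrieve flow described above can be sketched roughly like this. This is a toy illustration, not the tool's actual implementation: the bag-of-words "embeddings" and the example repo chunks are stand-ins for a real embedding model and real code, but the retrieval shape (index everything, send only top matches to the model) is the same.

```python
# Toy sketch of RAG retrieval: index the whole repo locally,
# then put only the most relevant chunks into the prompt.
# Bag-of-words vectors stand in for a real embedding model.
import math
import re
from collections import Counter

def embed(text):
    # Hypothetical embedding: token counts over lowercase words.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_index(chunks):
    # One vector per chunk; the full repo stays on the local machine.
    return [(chunk, embed(chunk)) for chunk in chunks]

def retrieve(index, query, k=2):
    # Only the top-k relevant chunks reach the model's context window.
    q = embed(query)
    ranked = sorted(index, key=lambda cv: cosine(q, cv[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

# Example repo chunks (invented for illustration).
repo_chunks = [
    "def parse_config(path): load yaml settings",
    "def train_model(data): fit the classifier",
    "def render_html(page): template the output",
]
index = build_index(repo_chunks)
context = retrieve(index, "where is the yaml config loaded", k=1)
prompt = "Answer using this code:\n" + "\n".join(context)
```

Only `context` (the retrieved chunks), not the whole index, gets shipped to whichever pluggable model you configured.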

I honestly think the "oh no, enterprises are scared about where our code goes" concern is overblown. Companies already host tons of infrastructure on AWS, GCP, Azure, etc. And why would a company trust GitHub Enterprise's guarantees (GitHub being a Microsoft subsidiary) but not trust OpenAI's (basically a Microsoft subsidiary at this point)?



