Launch HN: Greptile (YC W24) - RAG on codebases that actually works
253 points by dakshgupta on March 5, 2024 | 166 comments
Hi HN, we're the co-founders of Greptile, a tool that can accurately answer questions about complex codebases. Developers use us to spend less time wrestling with codebases and more time actually writing code. Here's a demo: https://youtu.be/qI24eKO1YX0. You can try it on 100 popular repos here: https://app.greptile.com/repo, and on your own repo (if you give permission - more on that below) here: https://app.greptile.com.

We are far from the first people to try "RAG on your codebase". We focus on full codebase comprehension: using LLMs to accurately answer difficult questions with full context of large, complex, and even multi-repo codebases.

Simple RAG alone is not sufficient for this task. Codebases aren’t like most PDFs, docs, or other common data types: they are graphs, complex puzzles where each piece is interlinked. So Greptile does a few things beyond simple RAG:

(1) Instead of directly embedding code, we parse the AST of the codebase, recursively generate docstrings for each node in the tree, and then embed the docstrings (a rough sketch follows after this list).

(2) Alongside vector similarity search and keyword search, we do “agentic search”, where an agent reviews the relevance of the search results and scans the source code to follow references that might lead to something important, then returns the relevant sources (a rough sketch of this loop follows after the examples below).
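To make step (1) concrete, here is a minimal sketch of the idea, assuming a tree-sitter-style node with children, start_byte, and end_byte; summarize, embed, and store are illustrative stand-ins for the LLM, the embedding model, and the vector index, not our actual internals:

    # Post-order walk of the AST: each node's docstring is generated from its
    # own source plus its children's docstrings, and the docstring (not the
    # raw code) is what gets embedded and stored.
    def index_node(node, source: bytes, path: str, summarize, embed, store) -> str:
        # children first, so the parent's summary can build on theirs
        child_docs = [index_node(c, source, path, summarize, embed, store)
                      for c in node.children]
        snippet = source[node.start_byte:node.end_byte].decode("utf-8", "ignore")
        docstring = summarize(snippet, child_docs)   # LLM call
        store.add(embed(docstring),                  # embedding call
                  {"path": path, "start": node.start_byte, "end": node.end_byte})
        return docstring

The important part is the ordering: leaves are summarized before their parents, so higher-level docstrings can describe what a file or module does rather than how.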

For example, here are a couple of questions that this system is able to answer in our test repo that simple RAG couldn’t (in our experience):

“Where are the auth providers configured?” (They are in an array inside an options.ts file, where looking at the file it’s not obvious that it’s an auth-related file. However, because that array is imported into the auth/route.ts file, Greptile’s agent traces the import and finds it.)

“How would I add a postgres connector?” (The best way to answer this is to see how the Redis connector is set up and mirror it. Simple RAG sometimes retrieves some of the code for the Redis connector, but Greptile’s agent follows the connections to retrieve all the code that the Redis connector touches, and uses that to write instructions.)
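At a high level, the agentic search loop from (2) looks something like the sketch below; vector_search, keyword_search, judge_relevance, extract_references, and fetch_source are illustrative names, not our real interfaces:

    # Start from vector + keyword hits, let an agent judge relevance, and
    # follow imports/references from promising results until nothing new
    # turns up or a hop budget runs out.
    def agentic_search(query, vector_search, keyword_search, judge_relevance,
                       extract_references, fetch_source, max_hops=3):
        frontier = vector_search(query) + keyword_search(query)
        seen, relevant = set(), []
        for _ in range(max_hops):
            next_frontier = []
            for result in frontier:
                if result.path in seen:
                    continue
                seen.add(result.path)
                code = fetch_source(result.path)
                if judge_relevance(query, code):   # LLM-as-judge call
                    relevant.append(result)
                    # follow imports / call sites that might matter
                    next_frontier.extend(extract_references(code))
            if not next_frontier:
                break
            frontier = next_frontier
        return relevant

This is how the options.ts example above gets found: the array isn’t obviously auth-related on its own, but following the import from auth/route.ts surfaces it.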

Developers (including at Stripe and Microsoft) are using Greptile for things like:

Debugging—you can paste in an error message and it does a pretty good job of diagnosing the root cause and suggesting fixes.

Grokking OSS repos—for example, if you're forking a repo, modifying it for your use case, or just integrating it, Greptile lets you add multiple repos and dependencies in the same chat session so it has full context.

Parsing legacy code at work—especially if original engineers have left the company.

Since we're accessing your private code, we're very careful with security. We don't store any code on our servers after initial processing, and just pull snippets as needed from the GitHub API.
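To be concrete, "pull snippets as needed" means on-demand reads through GitHub's REST contents endpoint, roughly like the sketch below (illustrative, not our exact code; auth handling is simplified):

    # Fetch one file's contents on demand via the GitHub REST API.
    import base64
    import requests

    def fetch_file(owner: str, repo: str, path: str, token: str, ref: str = "main") -> str:
        resp = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/contents/{path}",
            params={"ref": ref},
            headers={"Authorization": f"Bearer {token}",
                     "Accept": "application/vnd.github+json"},
            timeout=30,
        )
        resp.raise_for_status()
        return base64.b64decode(resp.json()["content"]).decode("utf-8")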

Quick note: when you sign in with GH, it might ask for permission to "act on your behalf". This is a quirk of GitHub's wording—our permissions are read-only and the only thing we do "on your behalf" is read code, so we can index the repo.

We came up with this idea while working at AWS—the codebase was super complicated, the docs were sparse and out of date, and our team was remote so it was slow to get answers to questions. We picked "greptile" because of "grep" and also we just wanted a somewhat silly name.

Try it out! It's a work in progress, so any feedback is appreciated. Here are the links again: for popular open source repos see https://app.greptile.com/repo, and to get it working on your own repo, start at https://app.greptile.com.

If you have experience working with a complex codebase at work or for a project, I’d love to hear about it. It really helps inform our product direction. Looking forward to comments!

Edit: For those who want to try this on large or private repos, here is a promo code for a free month: HACKERNEWS100




Works well. Today I was looking into how Rails handles BigDecimals, so (knowing the answer) I asked:

"When using "as_json" in a controller to return the JSON of a model, how are BigDecimal's encoded?"

Answer: "When using as_json in a controller to return the JSON of a model, BigDecimal values are encoded as strings. This behavior is defined in the active_support/core_ext/object/json.rb file, specifically in the BigDecimal class extension for JSON encoding. The rationale behind this approach is that most..." which is exactly the case as I learnt through various PR's, Issues and code review.

This would have saved me about 30mins of work. I wonder if it takes into account the metadata, such as authors, related comments, issues and PRs?


Thanks for checking it out! Currently no metadata, just code. We're adding commit messages and PRs next. Issues and comments make a lot of sense; adding those to the list.


We don't do direct lookup/indexing for authors (although the authors file is usually somewhere in the repo for larger projects), comments, issues, or PRs just yet, but that is definitely something we are looking to add.


Ran it on a "real" OSS project of mine (https://github.com/dvx/lofi/), and it was stuck at 99% loading for about 30 minutes. Then, when it finally parsed the codebase, every question returned "Error: Internal error while locating sources." Specifically, I wanted to see if it could context-switch between TypeScript (used for the front-end), Objective-C (used for a few Mac features), C++ (used for Windows volume features), and GLSL (used for visualizations). But alas.

At one point, this random prompt popped up: https://imgur.com/a/mYeluaU —what's "Onboard?" Is this some kind of weird LLM leakage/hallucination?

With all respect, this is like a pre-MVP quality product. The codebase isn't even particularly large and the experience is extremely sub-par. Charging for something like this is honestly highway robbery.


Hey, sorry to hear that. A couple of things:

- Processing is usually stuck at 99% because when we order the components of the repository by file-directory dependency and AST dependency, there are a lot, LOT more leaves than internal nodes plus the root. Since we need the results of each dependency before we can move on, walking up the dependency chain with LLM calls takes a while. This is even more pronounced when nearing the root. We are working on optimizing this flow, as it is very annoying for us as well.

- "Error: Internal error while locating sources": this is embarrassing but we did experience a database outage today (I wonder why) some repositories have a faulty status. It should be back up now and we are working to recover/reprocess the repos that have failed during the outage (including dvx/lofi)

- "Onboard" that was our previous name, this has slipped through the cracks, thanks for pointing it out!

We are trying to parse most of the popular open source repos (we have processed repos like Python, VS Code, etc.). We are hoping to fully process the Linux kernel soon as well (a personal benchmark of mine).


Just wanted you to know that the formatting of your quotes is messed up. You probably need to add some more linebreaks.


> processing is usually stuck at 99%

Why call it 99% then? Call it 49%.


Noted - I think Soohoon meant to use the word "often" and not "usually". That said, you're right, counting completed nodes would be a more accurate way of measuring progress.


Thanks for reporting the errors. You're right that it's far from perfect. I will say that part of this is due to higher than expected traffic today. We're working hard on stability in general, so hopefully next time you use it you will have a smoother experience.


Loved the idea behind the product, @dakshgupta. Hiccups do happen, don't worry much about them. Keep iterating!


Thank you!


Got the same error....


You’re going to want to define the acronym RAG before you use it a dozen times in your marketing copy.

Presumably it’s great news that I can RAG on my codebase. But I’m not sure whether I’ve ever ragged anything in my career or whether I’ll want to now.

If you told us what it meant, we could probably understand what your thing does.


Thanks Jason, you're right. We spend so much time around RAG that we forget it's a niche term.

RAG -> retrieval-augmented generation: given a query, find the parts of the codebase most relevant to it and supply them to the LLM so it can answer. The typical way to do this is semantic similarity: chunk and embed the corpus being searched, embed the query, and find the k most cosine-similar chunks.
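In code, that typical approach is roughly the following sketch, where embed is whatever embedding model you use:

    # Bare-bones semantic retrieval: embed the chunks, embed the query, and
    # return the k most cosine-similar chunks.
    import numpy as np

    def top_k_chunks(query, chunks, embed, k=5):
        chunk_vecs = np.array([embed(c) for c in chunks])
        q = np.array(embed(query))
        sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
        return [chunks[i] for i in np.argsort(sims)[::-1][:k]]

Those top-k chunks then get pasted into the LLM prompt alongside the question.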


RAG vs fine-tuning makes a difference to you, but not to your users, who don't need to understand how they're not the same thing in order to use your product.


Thanks. Even that is pretty dense for someone who hasn’t been living and breathing LLMs for the last year.

https://xkcd.com/2501/

You’ll want to find a balance between telling people what your thing does with no jargon or acronyms whatsoever, while signaling to people who know as much as you about this stuff (all four of them) that it’s got those cool ingredients.

All the best.


A simpler way to put it: it's a technique to get around an LLM's context-limit/attention issues by injecting the context that is most relevant to the user's query.


Haven't seen that xkcd before, that's really funny. Good advice on calibrating the jargon level. Thanks!


I like clever project names :)

This looks great - I just tried to generate sample code in the react repo and was pleasantly surprised. Do you have a sense of whether this works well for generating code in general, e.g. generating an API route to return X data that works similarly to the other API routes?


We love the name as well.

We haven't built it for code gen in particular, but a lot of our users seem to be using it for that. We really want to nail helping people understand what is going on in the codebase itself.


Thanks! This should work. The closest thing I tried was with quary.dev, where I had it generate a postgres connector and it referred to existing db connectors to generate the changes.


"We don't store any code on our servers after initial processing"

Are you storing the embedding vectors you've calculated from the code? If so, those are likely quite easily reversible - so I would still consider that source code stored on your servers from the point of view of a security audit.

As a result, I might actually prefer to have copies of my code stored on your servers if it resulted in faster performance.


For now we are storing the embeddings of the generated docstrings; the intuition behind this is kind of like how HyDE works. I don't think the actual code itself is reversible from the embeddings, but yes, we do want to store the actual code at some point and we will need the proper security measures for that. We are also actively developing a self-hostable version for enterprise.
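For anyone unfamiliar with HyDE, the rough idea being referenced: retrieve by matching in "docstring space" rather than raw-code space. A toy illustration of that intuition, not Greptile's actual pipeline (llm, embed, and docstring_index are stand-ins):

    # Write a hypothetical docstring for the query with an LLM, embed it, and
    # search the stored docstring embeddings with that vector.
    def retrieve(query, llm, embed, docstring_index, k=5):
        hypothetical_doc = llm(
            f"Write a short docstring for code that would answer: {query}")
        return docstring_index.search(embed(hypothetical_doc), k=k)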


> embedding vectors you've calculated from the code? If so, those are likely quite easily reversible

I don't think embeddings are generally reversible... you're usually projecting onto a lower dimensional space, and therefore losing information.


You might be interested in "Text Embeddings Reveal (Almost) As Much As Text":

> We train our model to decode text embeddings from two state-of-the-art embedding models, and also show that our model can recover important personal information (full names) from a dataset of clinical notes.

https://arxiv.org/pdf/2310.06816.pdf

There's certainly information loss, but there is also a lot of information still present.


Yeah, that paper is what I was thinking about. https://simonwillison.net/2024/Jan/8/text-embeddings-reveal-...

“a multi-step method that iteratively corrects and re-embeds text is able to recover 92% of 32-token text inputs exactly”.


"Quite easily" isn't true in most cases, but embeddings are sometimes reversible. We know this because programs like Stable Diffusion sometimes output near-perfect copies of training data when given the correct prompt, and generation of that image is based on word and image embeddings alone.


I’ve never heard of reversible embeddings in practice.

In theory if you know the model being used you could reverse them.


I'd love to try it, but pretty much all my repos are >10mb. It's not because there is that much code, but because I am doing bioinformatics and the test files (for the unit tests) inflate the repo size. It would be great if there was a way to test it on just 1 large repo for perhaps a week or something, because I balk at the idea of spending $20 a month on something that I don't even know works well.

This is important because I'm not deeply familiar with public projects, so I can't accurately assess if the tool is worthwhile. Whereas with one of my repos, I'd be able to tell quality pretty quickly.


Just made a checkout code so you can use it for free:

HACKERNEWS100


Do I still need to provide credit card info to use the promo code? No bueno.


Use privacy.com for a single use credit card number


Nice, appreciate the suggestion.


As an aside, I suggest using DVC. It stores only metadata for large files in the git repository while maintaining a separate data repository. The data repository can also be synced with cloud storage like AWS. It really helps manage code better without raw data.


Never heard of it, thanks for sharing. Will look into this today.


pretty easy to fork your repo and remove all the test files.


Maybe we need a `.greptileignore` file...


That's funny, we actually do ignore those files, but I'm realizing now that we don't account for that in our calculation of codebase size, since we just get that via the GH API.
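One way to close that gap is to compute size from the files that would actually be indexed, e.g. by summing blob sizes from the recursive git tree instead of trusting the repo-level size field. A sketch, with is_ignored standing in for whatever ignore rules apply:

    # Measure codebase size from the files we'd actually index, rather than
    # the repo-level size the GitHub API reports.
    import requests

    def indexed_size_bytes(owner, repo, ref, token, is_ignored):
        # ref: a branch, tag, or tree SHA
        resp = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/git/trees/{ref}",
            params={"recursive": "1"},
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        resp.raise_for_status()
        return sum(entry.get("size", 0)
                   for entry in resp.json()["tree"]
                   if entry["type"] == "blob" and not is_ignored(entry["path"]))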


This comes from a place of love and empathy -- I've been here, am here, and will be here again -- but this realization reads like the first step in the long, long road from prototype to product.


You are right


How does it compare with something like Bloop, which also uses a combination of a syntax tree, embeddings, FTS, and LLMs?


Haven’t tried it personally :/


“Where we going we don’t need docs”. That scares me… docs are, among other things, there to provide context and info that isn't clear from the code, like why certain choices were made or not made… no way your AI is going to guess that I put that restriction in because of an explicit request from product, despite it looking wrong…


Cofounder of Greptile here. Good documentation is not going anywhere. What we do hate is going through bad documentation to find the tiny bit of information we need, which more often than not is outdated anyway. We are also looking for ways to integrate information not explicitly written in the code to help understand codebases.


Indeed. Code describes how something is done, and possibly what is done, but it seldom fully describes why something is done, or why code exists in the first place.

The latter is typically the realm of requirements, design documentation, and possibly test plans.

Generating such documentation from code seems quite impossible, or at least wrong.

It is unfortunate that most open source software is lacking in such documentation, giving off bad signals to junior developers.

Edit: not to take away from the product being discussed here, which seems very useful! I am merely supporting the parent here.


> The latter is typically the realm of requirements, design documentation, and possibly test plans.

You forgot one of the biggest spots for "why" documentation... git commit messages, where the point is to say why you made this change. Maybe taking into account the commit messages around the code in question would help.


"Initial commit" is my goto commit message.

Still a great idea, though. Ingesting all data from JIRA may be beneficial as well.


Yep... and the why of that message is apparent and makes perfect sense for the first commit. If you only ever start projects and never work on them again, that is all you need.


If you don't use proper commit messages you also probably don't have proper docs


We already do that and more at Maru.ai - reach out if you're interested.


This is a great point, and something we think about. I think to an extent a super smart version of our product could infer intent from the code, but there would definitely be something amiss if the author's intent was just never available.


Yep, agreed. As a founder of another YC company that's done a bunch of RAG on code and docs, the accuracy boost you get from having good docs makes a massive difference. You really do need a human interpreter to show you "how to use" a product.


This sort of tech will just spur developers to put proper comments in their code, which they should have been doing in the first place.


I hope so! Unfortunately it will definitely massively increase the ratio of generated documentation to intentional documentation, so we'll have to see whether it increases documentation usability or not.


Didn't even think about that, hopefully it turns out to be true.


Could even make it an explicit feature, to consume jsdoc/javadoc/etc comments


I can imagine the founder shouting “Where we going we don’t need docs” whilst shoveling VC money into the steam engine.


I've tried it on my own C++ codebase. It's fun, and I'm impressed that it could tell me which C++ standard is used (a question which is often difficult to find an answer to on random codebases), but it's really bad at analyzing templates. The answers it gives me are always incomplete and usually at least partly or mostly incorrect. I'm surprised by this in some cases, because my questions are answered by comments in the source code.

https://app.greptile.com/share/4953cbff-13ec-4427-b0af-02889...


Examining now, and it looks like we’ve been heavily discounting the comments in our search. Going to look into fixing it this week.

Thanks for pointing that out!


It's cool to see tools like this. I ran into some issues though:

1. "We will email you"... "once the repositories have finished processing" Not sure you're supposed to do that without consent, when the intent was just to connect GitHub! Email use is supposed to be opt-in.

2. My tiny repo (https://github.com/lukestanley/ChillTranslator) won't load.

3. The UI for selecting a GitHub repo is hard to find and fiddly to use.

4. I couldn't see where to put the promo code.


1. Good point, didn’t think of that. We could make it a checkbox

2. Sadly, AWS just went down, absolute scramble at the office

3. Fiddly is the right word, we just changed it a couple days ago. Will make it cleaner.

4. You should only need that if you’re planning to upgrade; you can click on the upgrade button and you should see an “Add promotion code” option on the left of checkout.


Thanks for the response. AWS going down is really bad luck!


I like to use different email addresses for different services. Please just ask me for an email address to use instead of using the one on github.


That makes sense, we should change that. Thanks for pointing it out.


don't over index on corner cases.


Not a single repo I've tried works. A lot of them seem not to have finished processing, but even the ones that have finished don't work.


This might have been related to temporary DB issues we had. Could you try again and let us know if you're still running into issues?


Love this idea and am just signed up. Thanks for the promo code! Also, I really like your blog post about shipping faster: https://greptile.com/blog/ship-faster. Shipping code is so fun that we should all be looking for ways to do more of it.


Btw, your service might be down right now. It just ingested my private repo but won't answer any questions.


Thank you for the kind words! Yes, AWS is experiencing an outage and by extension, so are we :( Should be a bit more stable now though!


How's this different from Adrenaline or Cursor or Bloop


Great question, it's definitely similar to Adrenaline and Bloop in that it is designed to do full codebase context.

Cursor is different because it's really focused on augmenting the code writing experience with code gen, while we're working on code comprehension.

A good way to look at it is - Cursor replaces your IDE, Greptile replaces/augments your internal docs.


Apparently I’m the only one here who doesn’t know this but: What is RAG?


Yeah, even on HN that's not an acronym you should use without explaining.

My first thought for RAG is "Red, Amber, Green" as used on risks etc. with project management.


Retrieval Augmented Generation; basically, find relevant bits from a lot of data, and load the relevant bits into context.


Great answer


It's the buzzword for "make an LLM work on custom data" such as a codebase.


I only get 'Error: Internal error while processing request.' when I try to run queries. I tested three different repos, same error message appeared for each repo.


Hi! Sorry about that, the HN launch put way more stress on our system than we expected. Working hard to get back to reliability.


Cool, will check it out

Does it integrate with Visual Studio, does it provide code suggestions?

Been doing a lot of back and forth iteration with ChatGPT to build a python project from scratch

It’s been a really good experience although frustratingly slow at times (from going back and forth between the browser and code and having to wait for gpt’s answers)

Can more documentation be automatically added? For example, it might be useful in a Rails project to be able to get answers about the Ruby and Rails documentation.


We have a VS Code extension in the store (also called "greptile"), feel free to check it out!

Docs are a good idea, but we haven't found a reliable enough scraper to add docs from a website. We were thinking it would be cool if you could drop in something like "docs.stripe.com" and it automatically adds the Stripe docs to the context.


A JetBrains plugin might be worth looking at too. They are pretty popular in several language communities and still resisting the juggernaut that is VS Code.


Definitely, we currently have that on our roadmap for April, been cautious because IMO a bad IDE extension is way worse than no IDE extension.


After giving permission, it asked to:

"Link Your Code Hosting Providers Connect your accounts for seamless integration, and to access private repositories."

What does this mean?


If you install the GitHub app and authorize access to private repositories, Greptile will have access to them. If you don't want to do that, you can still navigate to the home page to start chatting.


Super cool! btw I love the name "Greptile" :)


Thank you!


I just keep getting: "Error: Internal error while locating sources." when trying to talk to a repo that is green and "up to date"


Hi! This is due to an AWS outage, sorry about that. We should be more stable now.


I've been looking for something like this, but local-only. Any plans to let people self-host and point at local repositories?


We’ve been grappling with making it local, but two considerations:

1. We haven’t figured out what should trigger updates. Every commit sounds crazy; every save would be perfect but also too intensive.

2. The LLMs will not be self-hosted. We haven’t been able to get the same results with any smaller LLMs.

To answer your question: we definitely want to make this available for local repos and ideally self-hosted; on a technical level that’s probably several months away.


Every save would work. You just ("just") need good caching. Cursor nails this. https://twitter.com/amanrsanger/status/1750023209733464559
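For example, even something as simple as keying work by content hash so unchanged files are never reprocessed goes a long way; a toy sketch:

    # On save, only files whose contents actually changed get re-parsed
    # and re-embedded.
    import hashlib

    def reindex_on_save(changed_paths, read_file, reindex_file, cache):
        for path in changed_paths:
            digest = hashlib.sha256(read_file(path)).hexdigest()
            if cache.get(path) != digest:
                reindex_file(path)
                cache[path] = digest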


Any way to sign up for newsletter updates for when that happens? I’d be very keen to check this out once that feature lands.


We don’t have a newsletter but there should be a link to join our discord on app.greptile.com

You can also email me at daksh [at] greptile.com and I’ll set a reminder to let you know when it’s out


> Haven’t figured out what should trigger updates.

What triggers updates when it is not local?


When a commit is pushed to that remote branch.


OK, then why would that be crazy for local when it is the trigger for non-local?


Just because people commit locally more often than they push.

That said, I think you're right and we might be overestimating the lift there. Will look into it more.


Thanks, some interesting challenges there.


Also looking for something local but I feel like Apple will probably eventually release local LLMs for your entire filesystem.


[not part of greptile]

I've given it a fairly serious shot and my conclusion is that building IDE hooks is seriously depressing. IntelliJ deprecates APIs constantly and the docs don't really keep up.

And then there's the total dumpster fire that is trying to support GPU acceleration on multiple platforms.

Unfortunately it's easily 10x the work to do this kind of thing local only.


Does it use tree-sitter for all the AST parsing?


Yup
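For anyone who hasn't used it, a minimal tree-sitter walk in Python looks like this (using the tree_sitter_languages convenience package; not necessarily what Greptile runs internally):

    # Parse a snippet and print each named node with its byte range.
    from tree_sitter_languages import get_parser

    source = b"def greet(name):\n    return f'hi {name}'\n"
    tree = get_parser("python").parse(source)

    def walk(node, depth=0):
        if node.is_named:
            print("  " * depth, node.type, node.start_byte, node.end_byte)
        for child in node.children:
            walk(child, depth + 1)

    walk(tree.root_node)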


Getting "Error: Internal error while processing request." while trying on my personal public github repo. HN effect?


Co-founder here, I think it might be all the traffic haha. Working on a fix now.

Edit: chatting with processed repos should be working again now


Looks like some kind of bug on repos w/ many branches. Loading https://github.com/datastax/cassandra/, I search for `vsearch` and it presents me with CNDB-8708-vsearch and DSP-23946-vsearch, but not vsearch itself.


Interesting, haven't seen that before. Thanks for pointing out - looking at it now.



Congrats on launching. However I don't like the 'Act on your behalf' permission this needs.


Thanks! Yeah, it’s strangely worded for what it is. We’ve emailed GitHub about it. Our permissions are read-only but since we read “on your behalf” this is technically true.



Why don't you read on your own behalf. You don't need this permission at all. It's just to harvest email addresses right?


Private repos are one reason we'd need it, and this also gives us higher rate limits so that lots of people can use it concurrently.


Presumably if you have a private repo, they need permission to read it.


Asking questions of any repo on “repo” fails with “Error: Internal error while processing request.” This is pribably because I unlinked my Github connection after trying it out, but it shouldn’t be trying to use that in this case.


Hey, we had some database issues. Should be working again now!

Edit: Some of the repos that were processing at the time might have failed to process. Reach out to us on Discord if you're seeing errors.


The AST approach should be integrated into code generation. Instead of generating text, generate AST nodes. Something like “Copilot with Intellisense” could be a game changer.


I think you're 100% right. We have been hesitant to do this because we wonder whether it would be a step-function improvement over GitHub Copilot, enough that people would switch. Hard to say.


What does RAG stand for?


Retrieval augmented generation. LangChain has a video series on it if you want to take a peek: https://www.youtube.com/playlist?list=PLfaIDFEXuae2LXbO1_PKy...


Retrieval Augmented Generation

It's a terrible choice of acronym if you ask me.


tried to explain it here: https://nux.ai/concepts/rag


Retrieval-augmented generation.


Nice to read the steps you take to analyse the code.

I had scrolled past this article without clicking and had the same thoughts about how I'd approach this.


Can it answer customer support questions about an API's cryptic error messages? E.g. give hints on changes needed in the request payload.


Assuming you can link the source code for the API's logic - that should work. Mileage may vary depending on exactly how vague the messages are. For example, if a single error message could mean 10 unrelated things, it might not be able to exhaustively list them.


Will it work with a large Ruby on Rails codebase?


It should, but currently because of the AWS outage things are a little choppy. If you submit it we'll make sure it gets through.


I tried asking a question about Porter and I see the error:

> Oops

> We couldn't access this repo.

> You may need to log in to view this repository, or it might not exist.


Hey, looked into this error: Porter hadn't been processed by us yet. It does take a while for a repo to be processed initially, but this link should give you the progress report, and you can chat with it when it's done:

https://app.greptile.com/chat/github/porter-dev/porter


I followed the link because I was curious and it still said processing, so I navigated to the homepage and clicked "Try a popular repo" leading to <https://app.greptile.com/chat/3t4qpefuh9eqckcdt0b8i?repo=pos...> which then said "We couldn't access this repo." It seems the actual syntax is app.greptile.com/chat/github/postgres/postgres -> https://app.greptile.com/chat/lfbc034nkj7w03v2kb5zp?repo=git... so wherever that list of "popular repos" is coming from needs to be updated to avoid other people having bad first experiences

It also appears that the vote buttons do nothing if one is not logged in, so I'd recommend eliding them unless logged in since it's just frustrating to wonder if some JS error ate the vote or what


Hey, looking into this now. I believe the vote buttons actually do always send us feedback, but the UI could probably be better there.


Unless you are batching them up and hiding them in the umpteen bazillion PostHog POST requests, no, they do not.

I shudder to think how much you must be spending on compute asking for that /api/poll/batch once a second, each one saying it is a Miss from CloudFront. It's especially mysterious since there's nothing on the UI that the user would see about the .filesProcessed increasing that justifies DDoSing yourself like that, IMHO.


These are all great points, we should probably audit the polling.


Ah sorry looking into it now


very nice! FYI the 'free coffee' link (https://calendly.com/dakshgupta/free-coffee) identifies you as "Daksh, co-founder/CEO at Onboard."

Also I am getting the 'Error: Internal error while processing request'


Thanks for the catch! Changing it now.


Congrats on the launch! Do you need Github permissions to answer questions on open-source repos as well?


There are about a hundred open source repos you can talk to for free over at https://app.greptile.com/repo, and we are looking to add more repositories as well. The permissions we get on login are from the GitHub App flow; you will still have to grant Greptile access to specific organizations/repositories for it to work with your own private repos.


Best way to get in touch if I'm a maintainer who wants to provide public repo access?


You can email me -> daksh [at] greptile.com


No, and for the ones in app.greptile.com/repo you don’t need to log in at all.


Looks good, but there are many competitors that do exactly the same thing (even opensource ones)


[not part of greptile]

I can only think of two serious competitors, neither of which is open source. And I don't think anyone serious at either of those companies would claim that they have the definitive solution.



Can you please share some that you consider to be good?


100% there are many. Hopefully we can build one of the better ones.


Can you please share the list? I'm curious


I like your 100 repo selections :)


Thank you!


I linked to my github but can't find where to use the promo code :-/


You can upgrade to Pro and use the promo code by clicking on the 'Upgrade' button at the top!


it's an option once you start the checkout process to upgrade


10/10 on the name


> This repo failed to process

nice


Hey, our DB was temporarily down from the traffic, sorry about that. Could you try again if you get a chance?


Still hitting this issue for the repos I tried


This is super cool, my co-founder and I were brainstorming how to essentially expand the context window via first-order concepts for this exact purpose last night

Excited to try it out


Thank you! Let me know how it goes.


Can Greptile read Clojure codebases?


Yes!


Can you add the Linux kernel?


Can you guys add huggingface transformers as one of the public demo repos? I have some very specific use cases where I've seen ChatGPT with GPT4 totally fall on its face Dunning-Kruger style.

I'd like to see if your tech solves those issues.


constantly got Error: Internal error while processing request.


Frontpage of hackernews usually has that effect.


awesome! congrats on the launch



