Look at that video player, apologizing profusely because it doesn’t know why it
can’t load. Sadly, it’s the most relatable thing on your webpage. Your site is
like a desperate dating profile: "I swear I'm different!
I'm using this for the AI annotations at https://pagewatch.ai/
It's strange how different it makes the whole page feel compared to regular boxes, much more of a tangible, informal feeling. Felt perfect for the type of feedback LLMs give.
I tried signing up and scanning apkmirror.com, but it immediately came back with a 403. Perhaps your user agent is blocked; you should probably use a custom user agent.
It is Cloudflare that blocked us; we scrape using a regular Chrome instance (with the latest user agent), but Cloudflare is sometimes very aggressive in its blocking.
I'm busy adding a proxy option to handle these cases automatically, along with some way to whitelist our scraper.
Some integrations are connected at the org level (one account for the entire company), and others at the user level (many accounts per company). For the latter case, the pricing per connection is lower.
Agreed. I'm currently saving a data field for 'product category', but this is defined by the store owner and is currently not used for search. I'm trying to figure out the most reliable way to categorize products, and then make that available as a filter to narrow down the selection.
Additionally, searching for "snowboard" vs "snowboards" returns different results, which isn't ideal since the user intent is the same. This is something I'm hoping to resolve with AI-powered search.
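The singular/plural mismatch is usually handled by stemming query and index terms to the same form. A minimal sketch of the idea below, with a naive suffix-stripper standing in for a real stemmer (a production setup would use something like a Porter stemmer or the search engine's own analyzer):

```python
# Naive plural normalization so "snowboard" and "snowboards" map to the
# same term at both index time and query time. This is an illustrative
# stand-in for a proper stemmer, not a complete English stemmer.
def normalize(token: str) -> str:
    token = token.lower()
    if token.endswith("ies") and len(token) > 4:
        return token[:-3] + "y"          # "berries" -> "berry"
    if token.endswith("es") and token[:-2].endswith(("sh", "ch", "x", "s")):
        return token[:-2]                # "boxes" -> "box"
    if token.endswith("s") and not token.endswith("ss"):
        return token[:-1]                # "snowboards" -> "snowboard"
    return token

def index_terms(text: str) -> set[str]:
    # Apply the same normalization to every whitespace-separated token.
    return {normalize(t) for t in text.split()}
```

With this, `normalize("snowboards")` and `normalize("snowboard")` both yield `"snowboard"`, so the two queries hit the same index terms.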
In the footer, I changed my location to South Africa and a few other countries and tried the same search to see what products come up. Thanks for the feedback and the heads-up on this!
Seems surprisingly OK for coding-related queries ('celery rate limit'). I'm curious about their scraping setup; building that out must be quite a big task.
This is pretty cool; it is able to parse data out of a random pricing table somewhere on the page.
It does seem to just make up data if it is not found on the page (probably expected with LLMs). I wonder if you can reduce that with some prompting, or maybe verify the data is actually present?
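The "verify the data is actually present" idea could be as simple as checking each extracted value against the raw page text and dropping anything that never appears. A rough sketch (the field names are made up, and real pages would need fuzzier matching for whitespace, currency formatting, and so on):

```python
# Drop any extracted value that doesn't literally appear in the page text.
# This catches the most blatant hallucinations; it won't catch values the
# model copied from the wrong place on the page.
def verify_extracted(fields: dict, page_text: str) -> dict:
    # Collapse whitespace so line breaks in the page don't break matching.
    haystack = " ".join(page_text.lower().split())
    return {
        name: value if value and value.lower() in haystack else None
        for name, value in fields.items()
    }

# Hypothetical example: "seats" was never on the page, so it gets nulled.
extracted = {"plan": "Pro", "price": "$29/mo", "seats": "500"}
page = "Pro plan: $29/mo, unlimited projects"
checked = verify_extracted(extracted, page)
```

Here `checked` keeps `plan` and `price` but nulls out `seats`, since "500" appears nowhere in the page.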
Your schema docs page is broken: https://singleapi.co/docs/schema
The prompt leakage is a pretty common issue that I still have to address, but ideally it should just return empty fields for data it couldn't find on the page.
phind is awesome; a few times it has helped me where GPT-4 could not, due to its knowledge cutoff.
Some minor feedback on the UI: it doesn't always scroll to the bottom while it is generating code, and I have to manually scroll it into view.
Surprising how well I can see the image even after setting the separation to 64. The image is just a few thin strips, yet I can easily see the whole thing if I follow the scrolling. Really cool.