Is there any easy way to run the embedding logic locally? Maybe even local to the database? My understanding is that they're hitting OpenAI's API to get the embedding for each search query and then storing that in the database. I wouldn't want my search function to depend on OpenAI if I could help it.
If you're building on the Supabase stack (Postgres with pgvector), we just released a built-in embedding generation API yesterday. It works locally (on CPUs), and you can deploy it without any modifications.
There are no extra costs beyond what we'd normally charge for Edge Function invocations (you get up to 500K on the Free plan and 2M on the Pro plan).
Neat! One thing I'd really love tooling for: multi-user apps where each user has their own siloed data and embeddings. I find myself setting up databases from scratch for every client, which means a lot of repetitive work. I'd love to one day be able to easily add users to the same DB and let them get to embedding without needing any prior knowledge.
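Row-level security in Postgres already gets part of the way there: every user shares one table, and isolation is enforced by the database rather than by per-client setup. A minimal sketch, assuming Supabase's auth.uid() helper and a pgvector column (the table name and dimension are made up):

CREATE TABLE documents (
  id        bigserial PRIMARY KEY,
  owner_id  uuid NOT NULL,   -- the user this row belongs to
  content   text,
  embedding vector(384)      -- must match your embedding model's dimension
);

ALTER TABLE documents ENABLE ROW LEVEL SECURITY;

-- Every query now sees only the calling user's rows.
CREATE POLICY tenant_isolation ON documents
  FOR ALL
  USING (owner_id = auth.uid());  -- auth.uid() is Supabase-specific

With that in place, onboarding a new user is just inserting rows, not standing up a new database.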
You can generate CLIP embeddings locally on the DB server via:
SELECT abstract,
       introduction,
       figure1,
       clip_text(abstract) AS abstract_ai,
       clip_text(introduction) AS introduction_ai,
       clip_image(figure1) AS figure1_ai
INTO papers_augmented
FROM papers;
Then you can search over the embeddings via:
SELECT abstract, introduction
FROM papers_augmented
ORDER BY clip_text(query) <=> abstract_ai
LIMIT 10;
Compared with calling out to an external API, this approach significantly decreases search latency and results in cleaner code.
As an added bonus, EXPLAIN ANALYZE can now show what percentage of time is spent on embedding generation vs. search.
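For instance (the query string is just an illustration):

EXPLAIN ANALYZE
SELECT abstract, introduction
FROM papers_augmented
ORDER BY clip_text('a red car parked by a lake') <=> abstract_ai
LIMIT 10;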
The linked library enables embedding generation for a dozen open-source models and proprietary APIs (list here: <https://lantern.dev/docs/develop/generate>), and adding new ones is really easy.
I have tried CLIP on my personal photo album and it worked really well there: I could write detailed scene descriptions of past road trips, and the photos I had in mind would pop up. The model is probably better suited to everyday photos than to icons, though.
Yes, I use fastembed-rs[1] in a project I'm working on and it runs flawlessly. You can store the embeddings in any boring database (it's just an array of f32s at the end of the day). But for fast vector math (which you need for similarity search), a vector database is recommended, e.g. the pgvector[2] postgres extension.
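For reference, the pgvector side is small. A minimal sketch (table, data, and dimensions are made up; 3 dims just to keep it self-contained, real models emit 384 or more):

CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE items (
  id        bigserial PRIMARY KEY,
  body      text,
  embedding vector(3)  -- use your model's real dimension here
);

INSERT INTO items (body, embedding) VALUES
  ('hello world',   '[0.1, 0.2, 0.3]'),
  ('goodbye world', '[0.2, 0.1, 0.0]');

-- HNSW index (pgvector >= 0.5.0) so nearest-neighbour search
-- doesn't have to scan the whole table.
CREATE INDEX ON items USING hnsw (embedding vector_cosine_ops);

-- <=> is pgvector's cosine-distance operator.
SELECT id, body
FROM items
ORDER BY embedding <=> '[0.1, 0.2, 0.25]'
LIMIT 5;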
I literally just published my first crate: candle_embed[1]
It uses Candle under the hood (the crate is more of a user-friendly wrapper) and lets you use any model on HF, like the new SoTA model from Snowflake[2].
This is a good callout. OpenAI embeddings were simple to stand up, pretty good, cheap at this scale, and accessible to everyone. I think that makes them a good starting point for many people. That said, they're closed-source, and there are open-source embedding models you can run on your own infrastructure to reduce external dependencies.
If you're building an iOS app, I've had success storing vectors in Core Data and using a tiny Core ML model that runs on-device for the embedding, then doing cosine similarity.
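(The cosine-similarity step is just the normalized dot product, sim(a, b) = a·b / (‖a‖ ‖b‖), computed over the two embedding vectors; a plain loop over the floats is plenty fast at on-device scale.)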