Last year we added CLIP-based image search to https://immich.app/ and even though I have a pretty good understanding of how it works, it still blows my mind damn near every day. It's the closest thing to magic I've ever seen.
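For anyone curious how this kind of search works under the hood, here's a minimal sketch (not immich's actual implementation; it assumes the `openai/clip-vit-base-patch32` checkpoint via Hugging Face transformers): embed every photo once, embed the query text at search time, and rank by cosine similarity.

```python
# Minimal CLIP text-to-image search sketch (not immich's actual code).
# Assumes: pip install torch transformers pillow
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_images(paths):
    """Encode images into L2-normalized CLIP embeddings (index once)."""
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def search(query, paths, image_feats, k=5):
    """Rank indexed images by cosine similarity to a text query."""
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    with torch.no_grad():
        text_feat = model.get_text_features(**inputs)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    scores = (image_feats @ text_feat.T).squeeze(1)
    best = scores.topk(min(k, len(paths))).indices
    return [paths[i] for i in best]

# Index once, query as often as you like:
# feats = embed_images(paths)
# search("baby chewing on a whisk", paths, feats)
```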
Happy immich user here! I once took a cute photo of our baby chewing on a whisk, and actually finding the correct photo in a huge unsorted, untagged pile of photos by simply searching for "whisk" was a mind-blowing experience! It is an amazingly powerful tool!
How does it compare to Google Photos search? I search things like 'whisk' with success regularly... though to be fair, my queries aren't as random as 'whisk', more like "steering wheel".
I'd also consider adding search via QR codes. You could search by the content of the QR code (like the URL), or if it's a URL, search the content of the page the QR code points to.
People who have a lot of photos with QR codes would want it :).
You could search your images for QR codes that go to LinkedIn, IG, or FB pages, or find the QR-code WiFi passwords.
If you just have a screenshot of a QR code (say you zoomed in and screenshotted a ticket's QR code, with no other text), then finding the QR code by the event name could be useful.
Thailand requires QR codes that link to nutritional information registered online. This could help you search for products where your photo only shows the back label and not the front of the product.
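If anyone wants to prototype the decoding side of this, here's a rough sketch using OpenCV's built-in QR detector (the index structure here is hypothetical, not an immich feature; fetching and indexing the linked page content would be a further step on top):

```python
# Rough sketch: decode QR codes in a photo library and build a text index.
# Assumes: pip install opencv-python
from pathlib import Path
import cv2

detector = cv2.QRCodeDetector()

def index_qr_codes(photo_dir):
    """Map each photo path to its decoded QR payload, if any."""
    index = {}
    for path in Path(photo_dir).glob("**/*.jpg"):
        img = cv2.imread(str(path))
        if img is None:
            continue
        data, points, _ = detector.detectAndDecode(img)
        if data:  # empty string means no QR code was found
            index[str(path)] = data
    return index

def search_qr(index, term):
    """Find photos whose QR payload (e.g. a URL) contains the term."""
    term = term.lower()
    return [p for p, data in index.items() if term in data.lower()]

# index = index_qr_codes("Pictures")
# search_qr(index, "linkedin.com")
```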
For anyone in the same boat as myself, I later found out that this is actually very easy to achieve (thanks to ChatGPT). Theoretically, this is how it's done:
1. Encode faces: there is a library called face_recognition that can grab faces from pictures and encode them
2. Group the face encodings using `pairwise_distances(encodings, metric='euclidean')` plus a clustering step like DBSCAN; you only need the sklearn library for this (rough sketch below)
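Putting both steps together, roughly as follows (`eps=0.6` is a commonly used threshold for face_recognition's 128-d encodings, but treat it as a knob to tune):

```python
# Rough sketch of the two steps above: encode faces, then cluster them.
# Assumes: pip install face_recognition scikit-learn
import face_recognition
from sklearn.cluster import DBSCAN
from sklearn.metrics import pairwise_distances

def cluster_faces(paths):
    encodings, owners = [], []
    for path in paths:
        image = face_recognition.load_image_file(path)
        for enc in face_recognition.face_encodings(image):
            encodings.append(enc)  # one 128-d vector per detected face
            owners.append(path)
    if not encodings:
        return {}
    # Step 2: distance matrix + density-based clustering.
    dists = pairwise_distances(encodings, metric="euclidean")
    labels = DBSCAN(eps=0.6, min_samples=2, metric="precomputed").fit_predict(dists)
    clusters = {}
    for label, path in zip(labels, owners):
        if label != -1:  # -1 is DBSCAN's "noise" label for unmatched faces
            clusters.setdefault(label, []).append(path)
    return clusters  # {person_id: [photo paths]}
```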
Native app. Doesn't require a network connection (great for privacy).
> Queryable is a Core ML model that runs locally on your device. Leveraging OpenAI CLIP's model encoding technology to connect images and text, you can search your iPhone photo album using any natural language input. Most importantly, it is completely offline, so your album privacy will not be revealed to anyone. And, it is open-source: GitHub
After creating Queryable, I also developed an app called MemeSearch, which searches for memes on Reddit (https://apps.apple.com/us/app/memesearch-reddit-meme-finder/...). Although it's completely free, it hasn't been downloaded by many users. I thought nobody wanted it, so I'm glad to see there are still some people who share a similar taste.
Thanks for Queryable, I use it quite often. As for the Reddit meme finder, how do you deal with Reddit's sudden price increase for its API?
Also, I think you should use another icon for this app, because the current one makes it look like a goofy side project. It probably is, but people probably won't download iPhone apps if the icon doesn't look professional. (My two cents)
Thank you. It's been over a year since I last maintained it, and I've noticed that the model behaves abnormally on iOS 17 (when searching for 'content', every query results in the same image). I have already fixed this issue in version 1.0.4 and am currently waiting for the review to be approved.
Gives me an idea for a meme search service I can run locally to search through all the images on my computer for a specific meme (I tend to know I downloaded a funny one, but when I want to share it with someone I can never find it).
Huh, are the image vector embeddings implicitly doing OCR as well? Because it seems like the meme search is pulling from the text as well as images, though it's not entirely clear.
CLIP does not have explicit OCR support, but it does, somewhat coincidentally, have a slight understanding of text. This is explained by training captions containing (some of) the text that appears in the image.
I think the SigLIP models' dataset (WebLI) includes OCRed text too, so they have very good text understanding. I tested a bunch of things for my own meme search engine.
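For anyone who wants to test this themselves, here's roughly how you'd probe a SigLIP checkpoint's text understanding through transformers (the meme file name is made up):

```python
# Quick probe of how well SigLIP "reads" text rendered inside an image.
# Assumes: pip install torch transformers pillow
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

ckpt = "google/siglip-base-patch16-224"
model = AutoModel.from_pretrained(ckpt)
processor = AutoProcessor.from_pretrained(ckpt)

image = Image.open("some_meme.jpg")  # hypothetical file
texts = ["a meme that says 'one does not simply'", "a photo of a cat"]
inputs = processor(text=texts, images=image, padding="max_length",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# SigLIP was trained with a sigmoid loss, so scores are independent
# per-pair probabilities rather than a softmax over the candidates.
probs = torch.sigmoid(outputs.logits_per_image)
print(dict(zip(texts, probs[0].tolist())))
```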
These hacks/side projects are amazing! I feel we will see a lot of creativity as tools to build data intensive AI applications become easier.
We built and open-sourced Indexify https://github.com/tensorlakeai/indexify to make it easy to build resilient pipelines that combine data with many different models and transformations, for applications that rely on embeddings or any other metadata extracted by models from videos, photos, and documents!
I didn't know about SigLIP, which the author mentioned on the blog; I need to add it to our library :) I also found it incredible that he generated the crawler with Claude! That's the type of boilerplate I hope we won't have to write in the future.
This is awesome! We made similar functionality (plus more) available through an API. If anyone is interested to try it out and share feedback, please message me and I’ll hook you up.
I steered a friend towards Paperless (and away from an LLM solution) as a way of searching/accessing GBs of architectural PDFs recently - so far, it’s apparently working well for them.
I have been playing with it for a while, but I miss a conversational interface where I can interrogate the PDFs and summarize them, or, say, find all the main events per year in a corpus of text and build a timeline of those events (context: a legal case with tons of text data to parse).
Hi @rmdes,
Sagar here from Joyspace AI. I recently made a Show HN post[0] about a document search engine.
We can do this very easily for you: we can provide search output with context that you can then feed to an LLM to extract events. Let me know if you are interested.
You can get in touch with me at sagar at joyspace dot ai.
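The retrieve-then-extract pattern described here is easy to sketch generically (this is not Joyspace's API; the `search_chunks` helper is hypothetical — swap in whatever search backend you use):

```python
# Generic sketch: retrieve relevant passages, then ask an LLM to
# extract dated events for a timeline. Assumes: pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_timeline(topic, search_chunks):
    """Feed retrieved passages to an LLM and ask for a year-by-year timeline."""
    chunks = search_chunks(topic)  # hypothetical: top passages from your index
    context = "\n\n".join(chunks)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Extract every dated event from the context as "
                        "'YYYY: event' lines, sorted chronologically."},
            {"role": "user", "content": f"Context:\n{context}\n\nTopic: {topic}"},
        ],
    )
    return response.choices[0].message.content
```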
We're almost getting back to the dot-com era of the 2000s with some of these "public cloud" company demos. There's enough frenzy that if your app really starts grinding compute cycles, you can quickly DDoS yourself with server costs. Even at $0.001/request [1], if you get 10,000 HN readers who each make 100 requests on average, you suddenly get a $1000 server bill from somebody. Those used to be on /. all the time circa 2000.
If few convert, and most just tell their friends to try your cool demo, you can suddenly have 100,000 Reddit users making 200+ requests on average every day because your free demo's so cool. And suddenly you're mostly trying to figure out how to scrounge up enough money to cover the server costs of the free parts.
Frankly, it seems like the entire industry is probably going to go through a lot of the same optimizations pretty soon: "How do we stop delivering such enormous JPGs with every Amazon/eBay click?" and the like.
> I imagine that we will see this tech rolled into all the various photo apps shortly.
Yeah, Google Photos and Apple Photos can both search for pictures given a description of what you're looking for. In my experience both work very well (e.g. search for "cars" in your pics, and it'll find all your cars over the years if you, like me, take pictures of your cars a lot :) ).