You can generate "text embeddings" (i.e. vectors) from your text with an embedding model. Once your text is represented as vectors, "closest vector" means "most semantically similar", as defined by the embedding model you use. This lets you build semantic search engines, recommendation systems, and classifiers on top of your text and embeddings. It's kind of like a fancier, fuzzier keyword search.
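A minimal sketch of the "closest vector" idea, using a toy bag-of-words counter as a stand-in for a real embedding model (so it only captures word overlap, not meaning -- a real model like one from sentence-transformers would give you dense vectors where distance tracks semantics):

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: a sparse word-count
    # vector. A real model would return a dense semantic vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two count vectors; this is the usual
    # "closeness" measure for embeddings.
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

docs = [
    "the cat sat on the mat",
    "a cat on a mat",
    "stock prices fell today",
]
query = "cat on the mat"
qv = embed(query)

# "Search" = rank documents by how close their vector is to the query's.
ranked = sorted(docs, key=lambda d: cosine(qv, embed(d)), reverse=True)
```

Swap `embed` for a real embedding model and the same ranking loop becomes a semantic search: the unrelated "stock prices" document lands last even when it shares no keywords logic at all with the query.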
I wouldn't completely replace keyword search with vector search, but it's a great addition that essentially lets you perform calculations on text
Nice explanation. One use case where keywords haven't worked well for me, and where (at least at first glance) vectors are doing better, is longer passages -- finding sentences that are similar rather than just words.