
Semi-related: I guess some companies have already started augmenting their search with LLMs. If you've worked on a similar project, what is your (very) high-level architecture? Did you see a noticeable difference in relevance and query time?



I haven't used LLMs but am using a hybrid approach with pg_vector: https://github.com/agoodway/vecto (some useful links in the README).
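To make that concrete, here's a minimal sketch of what a hybrid (lexical + vector) query against pgvector can look like. The "documents" table, its columns, and the 0.5/0.5 score weighting are made up for illustration and aren't taken from vecto itself:

    # Hypothetical sketch: hybrid full-text + vector search in Postgres with pgvector.
    # Assumes a "documents" table with columns: id, body, fts (tsvector), embedding (vector).
    import psycopg2

    HYBRID_SQL = """
    SELECT id, body,
           0.5 * ts_rank(fts, plainto_tsquery('english', %(q)s))
         + 0.5 * (1 - (embedding <=> %(emb)s::vector)) AS score
    FROM documents
    WHERE fts @@ plainto_tsquery('english', %(q)s)
       OR (embedding <=> %(emb)s::vector) < 0.5
    ORDER BY score DESC
    LIMIT 10;
    """

    def hybrid_search(conn, query_text, query_embedding):
        # query_embedding is a list of floats from whatever model embedded the documents
        emb = "[" + ",".join(str(x) for x in query_embedding) + "]"
        with conn.cursor() as cur:
            cur.execute(HYBRID_SQL, {"q": query_text, "emb": emb})
            return cur.fetchall()

    conn = psycopg2.connect("dbname=mydb")
    results = hybrid_search(conn, "hybrid search in postgres", [0.01] * 1536)

The embedding for the query has to come from the same model used to embed the documents; the weighted sum could just as well be swapped for something like reciprocal rank fusion.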

One way that comes to mind is using an LLM to rewrite the query before shoving it into the search engine (known as "query rewriting").
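A rough sketch of that idea with the OpenAI Python client (the model name and prompt are placeholders; any chat-capable LLM works the same way):

    # Hypothetical sketch: rewrite a user's query with an LLM before sending it
    # to the existing search engine. Model and prompt are illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def rewrite_query(user_query: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            temperature=0,
            messages=[
                {"role": "system",
                 "content": "Rewrite the user's search query into a concise, "
                            "keyword-focused query for a full-text search engine. "
                            "Return only the rewritten query."},
                {"role": "user", "content": user_query},
            ],
        )
        return response.choices[0].message.content.strip()

    # The rewritten query is then handed to the existing search backend unchanged.
    rewritten = rewrite_query("how do i make postgres search understand synonyms?")

The main trade-off is the extra LLM round trip per search, so query time will take a hit unless rewrites for common queries are cached.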

More full-text ideas here: https://gist.github.com/cpursley/c8fb81fe8a7e5df038158bdfe0f...



