My problem with this is that it doesn't explain much.

You can manually build a vector for a word and then work step by step up to the word2vec approach, and from there to document embeddings. My post [1] does some of the first part, and this great word2vec post [2] dives into it in more detail.

[1] https://earthly.dev/blog/cosine_similarity_text_embeddings/

[2] https://jalammar.github.io/illustrated-word2vec/
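As a minimal sketch of that first manual step (this is my own illustration, not code from the linked posts): represent two short texts as word-count vectors over a shared vocabulary, then compare them with cosine similarity. Word2vec replaces these sparse count vectors with dense learned ones, but the comparison works the same way.

```python
import math

def count_vector(text, vocab):
    """Count how often each vocabulary word appears in the text."""
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

vocab = ["cat", "dog", "sat", "mat"]
v1 = count_vector("the cat sat on the mat", vocab)  # [1, 0, 1, 1]
v2 = count_vector("the dog sat on the mat", vocab)  # [0, 1, 1, 1]
print(cosine_similarity(v1, v2))  # high but not 1.0: similar, not identical
```

The sentences share "sat" and "mat" but differ on "cat"/"dog", so the similarity comes out around 0.67 rather than 1.0.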
