Hashes are fine, but to say that "vectors are over" is just plain nonsense. We continue to see vectors as a core part of production systems for entity representation and recommendation (example: https://slack.engineering/recommend-api) and within models themselves (example: multimodal and diffusion models). For folks into metrics, we're building a vector database specifically for storing, indexing, and searching across massive quantities of vectors (https://github.com/milvus-io/milvus), and we've seen close to exponential growth in terms of total downloads.
True. The title is just clickbait, and what we find inside are suggestions for dimensionality reduction from a person who appears to be on the verge of reinventing autoencoders, disguised as neural hashes. Is it a mere coincidence that the article fails to mention autoencoders?
Clickbait title aside : ^ ), I'd agree. Neural hashes seem to be a promising advancement imo, but I question their impact on the convergence time of AI models. In the pecking order of neural-network bottlenecks, I'd imagine it's not terribly expensive to access training data from some database; rather, hardware considerations for improving parallelism seem to be the biggest hurdle [1].
Yes this is funny to read when (a) embeddings are such a huge leap in reusable machine learning investment and (b) almost nobody is using them yet. On the other hand, neural hashes do look similar to the density tree analysis that is the first step in many of our applications of language embeddings. It makes sense to me that some of this might be incorporated into vector dbs in the near future. Do you have plans to?
For face search, I also needed to find vectors in a database.
I used random projection hashing to increase the search speed, because you can match directly on the hash (or at least narrow down the search) instead of calculating the Euclidean distance for each row.
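Roughly the idea, as a minimal numpy sketch (the dimensions, bit count, and exact-bucket lookup here are arbitrary illustrative choices, not anyone's production setup):

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
DIM, N_BITS = 128, 16                        # embedding size and hash length (illustrative)
planes = rng.standard_normal((N_BITS, DIM))  # one random hyperplane per hash bit

def rp_hash(v):
    """Random-projection (SimHash-style) code: one bit per side of each hyperplane."""
    bits = (planes @ v > 0).astype(np.uint64)
    return int(bits @ (2 ** np.arange(N_BITS, dtype=np.uint64)))

buckets = defaultdict(list)                  # hash code -> [(face_id, vector), ...]

def add(face_id, v):
    buckets[rp_hash(v)].append((face_id, v))

def query(v):
    # Matching directly on the hash narrows the search; Euclidean distance is
    # only computed against the few rows that share the bucket.
    candidates = buckets.get(rp_hash(v), [])
    return min(candidates, key=lambda item: np.linalg.norm(item[1] - v), default=None)
```

In practice you would probably use several hash tables (or probe neighboring codes) to trade memory for recall, since a single table misses near neighbors that land one bit away.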
The demand for vector embedding models (like those released by OpenAI, Cohere, HuggingFace, etc) and vector databases (like https://pinecone.io -- disclosure: I work there) has only grown since then. The market has decided that vectors are not, in fact, over.
Pinecone seems interesting. Is the storage backend open source? I've been working on a persistent hashmap database that's somewhat similar (albeit not done) and should have lower RAM requirements than Bitcask (i.e., larger-than-RAM keysets).
Although we may open-source parts in the future, currently no part of Pinecone is open-sourced. Instead, there are several proprietary index types available, packaged along with hardware/compute resources into what we call “pods.”
To elaborate on Noe's comment, the article is suggesting the use of LSH where the hashing function is learned by a neural network, such that similar vectors map to hashes that are close in Hamming distance (whilst enforcing some load factor). In effect, a good hash is generated by a neural network. It appears Elastiknn chooses the hash function a priori? Not sure, not my area of expertise.
This approach seems feasible, tbh. For example, a stock's historical bids/asks probably don't deviate greatly from month to month. That said, the generation of a good hash depends on the stock ticker, and a human doesn't have the time to find a good one for every stock at scale.
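For anyone unfamiliar, here is a toy sketch of the learned-hash idea described above. The architecture, code length, and loss below are my own illustrative assumptions, not the article's actual method:

```python
import torch
import torch.nn as nn

class NeuralHash(nn.Module):
    """Toy learned hash: maps a D-dim vector to an N_BITS binary code.

    During training, tanh acts as a soft, differentiable stand-in for the bits;
    at lookup time the sign is taken, so similar inputs should end up on codes
    with small Hamming distance.
    """
    def __init__(self, dim=128, n_bits=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, n_bits))

    def forward(self, x):
        return torch.tanh(self.net(x))            # soft bits in (-1, 1), used for training

    def hash(self, x):
        return (self.net(x) > 0).to(torch.uint8)  # hard 0/1 code, used for indexing

def pair_loss(code_a, code_b, similar):
    # Pull soft codes together for similar pairs, push them apart otherwise.
    d = (code_a - code_b).pow(2).mean(dim=1)
    return torch.where(similar, d, torch.relu(1.0 - d)).mean()
```

Training on labeled similar/dissimilar pairs (or augmentations of the same item) is what makes the resulting hash "good" for a particular data distribution, which is the part a human can't hand-tune per stock.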
It is true that HNSW outperforms LSH on recall and throughput, but for some use cases LSH outperforms HNSW. I just deployed to prod this week a new system for short-text streaming clustering using LSH. I used algorithms from this crate, which I also built:
https://github.com/serega/gaoya
An HNSW index is slow to construct, so it is best suited for search or recommendation engines where you build the index once and then serve it. For workloads where you continuously mutate the index, like streaming clustering/deduplication, LSH outperforms HNSW.
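This is not the gaoya API, just a minimal plain-Python sketch of the MinHash + banding idea, to show why inserts into an LSH index stay cheap for streaming workloads (the signature length and band count are illustrative):

```python
import hashlib
from collections import defaultdict

N_HASHES, BANDS = 32, 8          # signature length and band count (illustrative)
ROWS = N_HASHES // BANDS         # rows per band

def minhash(tokens):
    """MinHash signature: one minimum per seeded hash over the token set."""
    return tuple(
        min(int(hashlib.md5(f"{seed}:{t}".encode()).hexdigest(), 16) for t in tokens)
        for seed in range(N_HASHES)
    )

bands = [defaultdict(list) for _ in range(BANDS)]   # band -> bucket key -> doc ids

def insert_and_match(doc_id, text):
    """Streaming step: return ids of likely near-duplicates, then index the doc."""
    sig = minhash(text.lower().split())
    candidates = set()
    for b in range(BANDS):
        key = sig[b * ROWS:(b + 1) * ROWS]
        candidates.update(bands[b][key])
        bands[b][key].append(doc_id)
    return candidates
```

Each insert only hashes the document and appends to a handful of buckets, whereas an HNSW insert has to search the graph and rewire neighbor links, which is why LSH tends to win when the index is mutated continuously.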
Right, but massive-scale production use in the Google crawler to index the entire internet, back when that was the bleeding edge, was state of the art before the art was even really recognized as an art.
I don't even think they called it ANN. It was high-performance, scalable deduplication (which is, in fact, just fast/scalable lossy clustering).
Collaborative filtering was kind of a cute joke at the time. Meanwhile, they had LSH in production, actually deduplicating the internet.
A state vector can represent a point in the state space of a floating-point representation, a point in the state space of a hash function, or a point in any other discrete space.
Vectors didn't go anywhere. The article is discussing which function to use to interpret a vector.
Is there a special meaning of 'vector' here that I am missing? Is it so synonymous in the ML context with 'multidimensional floating point state space descriptor' that any other use is not a vector any more?
Keep in mind that this is the same field that uses multidimensional arrays which fail to obey the tensor transformation laws (because ML requires the kind of nonlinear structure introduced by functions such as ReLU, which require a preferred basis and cannot be transformed between bases), yet insists on calling them tensors.
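A quick numpy illustration of that point (a toy example of my own, not anything from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4)            # components of a "tensor" in one basis
A = rng.standard_normal((4, 4))       # a change of basis (invertible with probability 1)

relu = lambda v: np.maximum(v, 0.0)

# If the computation were basis-independent, applying ReLU before or after the
# change of basis would agree. It does not, because ReLU singles out the
# coordinate axes of whatever basis the array happens to be stored in.
print(np.allclose(relu(A @ x), A @ relu(x)))   # almost surely False
```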
> But another important goal is inventing new methods, new techniques, and yes, new tricks. In the history of science and technology, the engineering artifacts have almost always preceded the theoretical understanding: the lens and the telescope preceded optics theory, the steam engine preceded thermodynamics, the airplane preceded flight aerodynamics, radio and data communication preceded information theory, the computer preceded computer science.
> The analogy here would be the choice between a 1 second flight to somewhere random in the suburb of your choosing in any city in the world versus a 10 hour trip putting you at the exact house you wanted in the city of your choice.
Wouldn't the first part of the analogy actually be:
A 1-second flight that will probably land you at your exact destination, but could potentially land you anywhere on Earth?
So my interpretation of the neural hash approach is largely that it is essentially trading a much larger number of very small “neurons” for a smaller number of floats. Given that, I'd be curious what the total size difference is.
I could see the hash approach, at a functional level, resulting in different features essentially getting a different number of bits directly, which would be approximately equivalent to having a NN with variable-precision floats, all in a very hand-wavy way.
E.g., we could say a NN/NH needs N bits of information to work accurately, in which case you're trading the format of, and the operations on, those N bits.
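For a rough sense of scale, a back-of-envelope comparison (the embedding width and code length below are assumptions for illustration, not numbers from the article):

```python
# Per-item storage: a dense float embedding vs. a binary hash code.
EMBED_DIM, FLOAT_BITS = 768, 32       # e.g. a typical float32 sentence embedding
HASH_BITS = 256                       # a hypothetical neural-hash code length

per_item_float_bits = EMBED_DIM * FLOAT_BITS   # 24,576 bits per item
per_item_hash_bits = HASH_BITS                 #    256 bits per item

print(per_item_float_bits / per_item_hash_bits)  # 96.0x smaller, before index overhead
```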
Sorry, non-native English speaker here... Yes, you're right, but wasn't the topic about progress in building self-healing systems by programming in 'blocks': setting up routines that search for problems (which I'd call a vector, in the sense of a set goal) and are able to solve them? Or what have we learned from all the 'having crypto' if not programming in 'blocks'? So hashes seem like the smallest part of how to begin, no? Phew, I hope that wasn't too hard to understand. Regards
Very shallow article. I would like to see a list of the mentioned "recent breakthroughs" in using hashes in ML besides the retrieval applications, because this is genuinely interesting.
Maybe I'm misunderstanding the guy, but he is effectively calling for lower-dimensional mappings from vectors to hashes. That is fine and all, but aren't hashes a single dimension in the way he is describing their use?
I work in this field and I found this article... very difficult to follow. More technical description would be helpful so I can pattern match to my existing knowledge.
Vectors are just getting started.