> Though you and the article both are perpetuating the lie that hash table operations are O(1).

Hash table operations are O(1) on average, assuming a uniform hash function and a load factor bounded by a constant, e.g., by keeping the number of buckets at least as large as the number of elements. More info here: http://en.wikipedia.org/wiki/Hash_table#Performance_analysis
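
To make this concrete, here's a quick simulation (a sketch in Python, using the built-in hash() as a stand-in for a uniform hash function) showing that when the bucket count tracks the element count, the average chain length stays near a small constant as n grows:

    import random

    def average_chain_length(n_elements, n_buckets):
        """Insert random keys and return the mean length of the non-empty chains."""
        buckets = [0] * n_buckets
        for _ in range(n_elements):
            key = random.getrandbits(64)          # stand-in for arbitrary keys
            buckets[hash(key) % n_buckets] += 1   # distribute by hash
        occupied = [c for c in buckets if c]
        return sum(occupied) / len(occupied)

    for n in (1_000, 100_000, 1_000_000):
        print(n, round(average_chain_length(n, 2 * n), 2))

The printed average hovers around 1.3 regardless of n, which is why a lookup touches O(1) entries in expectation.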

Computing the hash function itself is not usually factored into this analysis, probably because it's an orthogonal concern (i.e., hash functions and hash table implementations can be mixed and matched), and because computing the hash for every operation is not strictly required: the hash of the key can be computed once and cached.
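
For example (just a sketch, not any particular library's API), a key wrapper can compute its hash lazily and reuse it on every subsequent table operation; CPython's str type does the same thing internally, computing a string's hash on first use and storing it alongside the object:

    class CachedKey:
        """Wraps a key so its (possibly expensive) hash is computed only once."""
        __slots__ = ("data", "_hash")

        def __init__(self, data):
            self.data = data
            self._hash = None                    # computed lazily, then reused

        def __hash__(self):
            if self._hash is None:
                self._hash = hash(self.data)     # pay the O(len) cost once
            return self._hash

        def __eq__(self, other):
            return isinstance(other, CachedKey) and self.data == other.data

    key = CachedKey("some very long string" * 1000)
    table = {key: 1}
    table[key]    # reuses the cached hash; the string is not re-hashed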

But even if you are factoring in computation of the hash function, this is a strange argument:

> Hint: distributing into O(n) buckets requires looking at O(log n) bits.

Computers can look at O(log n) bits in a single operation for any n that can reasonably be held in memory. This is exactly what the standard word-RAM model of complexity analysis assumes: a machine word holds Θ(log n) bits, and operations on a word take constant time.

For example, if your hash table has fewer than 2^32 (about four billion) entries, "log n" is less than 32, and any 32-bit computer can "look at" 32 bits in a single operation. So I can't see why you think that "log n" needs to be a factor in the complexity analysis.
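
To make that concrete (a sketch, assuming the bucket count is kept at a power of two, as many implementations do): extracting the low bits of a hash is a single bitwise AND, not log n separate steps:

    def bucket_index(h, n_buckets):
        # n_buckets assumed to be a power of two
        return h & (n_buckets - 1)    # one AND keeps the low log2(n_buckets) bits

    print(bucket_index(0xDEADBEEF, 1 << 20))   # 20 low bits extracted in one operation

OpenJDK's HashMap computes its bucket index with exactly this trick, as (n - 1) & hash.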
