
A classic. Every time I see that link I read the whole thing, starting from the beginning.

...oops






I opened the link and just started reading. I have a really dumb question that may expose common knowledge I don’t have, about this quote:

> The total amount of space needed to represent this collection of strings is O(k n^2).

I haven’t ever seen O-notation represent RAM usage, just algorithm time complexity. Is this common?


Yes, very common. You've seen "time complexity"; it's just as common to talk about "space complexity".

Fun bonus: they can be interchangeable, e.g. increasing space to reduce time.

And any operation that takes n bits as input can be trivially turned into an O(1) time and O(2^n) space algorithm through tabulation.

Assuming you ignore or amortize the time necessary to create the table in the first place, of course.
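A toy Python sketch of that tabulation idea (popcount as the placeholder operation, my choice): pay the O(2^n) table build once up front, then every query is a single array read:

    # Tabulation sketch: precompute f for every n-bit input, so each
    # later query is one O(1) array read at an O(2^n) space cost.
    n = 16
    f = lambda x: bin(x).count("1")          # placeholder: popcount
    table = [f(x) for x in range(1 << n)]    # O(2^n) space, built once

    def lookup(x: int) -> int:
        return table[x]                      # O(1) time per query

    assert lookup(0b1011) == 3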

This is the basis for rainbow tables: precomputed tables for mapping hashes to passwords, with a space-saving “hash chaining” trick to effect a constant factor reduction in table size. Such tables are the reason why passwords must be hashed with a unique salt when stored in a database.
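A minimal sketch of the salting point in Python (illustrative only; real systems should use a dedicated password-hashing function such as bcrypt or Argon2 rather than bare SHA-256):

    import hashlib, os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)                # unique salt per password
        digest = hashlib.sha256(salt + password.encode()).digest()
        return salt, digest                  # store both in the database

    # The same password now hashes differently for every user, so a single
    # precomputed (rainbow) table can no longer cover all stored hashes.
    s1, d1 = hash_password("hunter2")
    s2, d2 = hash_password("hunter2")
    assert d1 != d2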


Yes, but total time is never going to be less than total space, when expressed in big-O notation.

I’m not sure this definition of Big-O for space complexity is universal. When I’ve seen/used it, the size of the initial data wasn’t relevant; it was more about the additional memory required by the algorithm.

For example, an in-place algorithm like Bubble Sort would have O(1) space complexity, because it requires no extra memory (and zero memory is a constant). Merge sort, on the other hand, is O(n) because it always uses additional memory for its intermediate stages, and that additional memory scales with n.

Doing a quick Google, the first few sites I find seem to use a similar understanding: https://www.geeksforgeeks.org/time-and-space-complexity-anal...
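As a quick illustration of that distinction (my own sketch, not from the linked page): bubble sort rearranges the input in place, while a textbook merge sort allocates Theta(n) worth of intermediate lists:

    def bubble_sort(a):                  # O(1) auxiliary space: in place
        for i in range(len(a)):
            for j in range(len(a) - 1 - i):
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
        return a

    def merge_sort(a):                   # O(n) auxiliary space: new lists
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        return out + left[i:] + right[j:]

    assert bubble_sort([3, 1, 2]) == merge_sort([3, 1, 2]) == [1, 2, 3]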


The confusion is around space complexity vs auxiliary space complexity (extra space).

Space complexity is O(n), but auxiliary space complexity uses Theta notation instead.

But people aren't too picky about the notation and usually say something like "O(1) extra space" instead of using Theta.

https://en.m.wikipedia.org/wiki/Space_complexity


That’s not quite accurate. Big O notation and Theta notation are different ways of expressing the growth rates of functions - the choice of using Big O or Theta is independent of whether you’re trying to specify total space complexity or auxiliary space complexity.

Saying something is O(n) tells you it grows at most linearly, but this would also admit e.g. log n.

Saying something is Theta(n) tells you it grows exactly linearly: that is, it is not a slower growth rate like log n, nor a faster growth rate like n^2.
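For reference, the standard textbook definitions behind that distinction:

    f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 : 0 \le f(n) \le c\, g(n) \ \text{for all } n \ge n_0

    f(n) = \Theta(g(n)) \iff \exists\, c_1, c_2 > 0,\ n_0 : c_1\, g(n) \le f(n) \le c_2\, g(n) \ \text{for all } n \ge n_0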


Yeah, you are correct, I misinterpreted the article.

But yeah, I guess space complexity vs auxiliary space complexity is just a bit ambiguous.


Why couldn’t it?

Because it takes n time to access n units of memory.

That's heavily simplified due to caches etc., to the point where people sometimes measure in cache misses instead, as that is usually what actually matters.


Let's say you have a lookup table and your algorithm looks up a value and returns it. That's O(n) in space, O(1) in time.

n is some value depending on the size of the input, so if you have a lookup table that is O(n), then that memory needs to be initialized somehow. If you have a fixed-size lookup table then it is O(1), even if it is big.

At compile time, fill a lookup table; at run time, read one value from it. I call that O(1) in time and O(n) in memory. You call it O(1) in both, right?

Now move the compile-time code to run time.

I call it O(n) in time and memory. What do you call it?

If you say O(n) in time and memory: why does moving the code change its complexity?

If you say it's still O(1), then everything is O(1), thus proving P=NP among other things :)

Either way, your interpretation isn't very useful.
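A Python sketch of the two variants being argued over (squares as a placeholder function, my choice; "compile time" approximated by module import time):

    TABLE_SIZE = 1 << 16   # stands in for "n" below

    # Variant A: table filled ahead of time (at import, standing in for
    # compile time). If the build isn't charged to the query, each call
    # is O(1) time, with O(n) space held by the table.
    table = [x * x for x in range(TABLE_SIZE)]

    def query_precomputed(x: int) -> int:
        return table[x]

    # Variant B: the identical build moved into the call ("runtime").
    # Now a single query costs O(n) time and O(n) space, even though the
    # table-filling code hasn't changed at all.
    def query_built_on_demand(x: int) -> int:
        t = [v * v for v in range(TABLE_SIZE)]
        return t[x]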


What if you allocate a huge chunk of memory and only use a small part of it? For example, checking if a list of numbers contains duplicates using a boolean array.

If your algorithm requires O(n) memory, any O(1) amount of memory can never be enough, no matter how huge. That's the entire point of O notation.

And if your implementation of an algorithm allocates more space in the big-O sense than it can actually touch (e.g. O(n) space for O(log n) time or whatever), that's just a wasteful implementation. That doesn't make the algorithm itself require more space than it has time to actually use.
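A minimal sketch of that boolean-array duplicate check (assuming non-negative integers with a known upper bound, which is what makes the array sizing work). The array is sized by max_value, so the O(max_value) allocation happens whether or not the input touches most of it:

    def has_duplicates(nums, max_value):
        # O(max_value) space, even if nums touches only a few slots.
        seen = [False] * (max_value + 1)
        for x in nums:
            if seen[x]:
                return True
            seen[x] = True
        return False

    assert has_duplicates([3, 1, 4, 1, 5], max_value=5)
    assert not has_duplicates([2, 7, 1, 8], max_value=10)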


> Is this common?

Very. For instance, if you look at sorting algorithms on Wikipedia, they pretty much all list performance (best, worst, average) as well as worst-case space complexity, in O notation.


Is there a way to do that without signing in?


