Assuming you ignore or amortize the time necessary to create the table in the first place, of course.
This is the basis for rainbow tables: precomputed tables mapping hashes back to passwords, with a space-saving “hash chaining” trick that trades extra computation for a large constant-factor reduction in table size. Such tables are the reason passwords must be hashed with a unique salt when stored in a database.
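A minimal sketch of per-password salting using Python's standard library (the 16-byte salt and 100,000 iterations are illustrative choices here, not recommendations):

    import hashlib
    import hmac
    import os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        # A fresh random salt per password means two users with the same
        # password get different hashes, so one precomputed table can't
        # cover them all.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)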
I’m not sure this definition of Big-O for space complexity is universal. When I’ve seen/used it, the size of the initial data wasn’t relevant; it was more about the additional memory required by the algorithm.
For example, an in-place algorithm like Bubble Sort would have an O(1) space complexity, because it requires no extra memory beyond a few loop variables (and a constant amount of extra memory is O(1)). Merge sort, on the other hand, is O(n), because it always uses additional memory for its intermediate stages, and that additional memory scales with n.
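A rough sketch of that contrast in Python (simplified implementations, just to show where the extra memory comes from):

    def bubble_sort(a: list) -> None:
        # In place: only a few loop variables beyond the input itself,
        # so auxiliary space is O(1).
        n = len(a)
        for i in range(n):
            for j in range(n - 1 - i):
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]

    def merge_sort(a: list) -> list:
        # Not in place: the merge step builds a new list of length n,
        # so auxiliary space is O(n).
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        merged = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged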
That’s not quite accurate. Big O notation and Theta notation are different ways of expressing the growth rate of a function; the choice between Big O and Theta is independent of whether you’re specifying total space complexity or auxiliary space complexity.
Saying something is O(n) tells you it grows at most linearly, but this would also admit e.g. log n.
Saying something is Theta(n) tells you it grows exactly linearly: that is, it is not a slower growth rate like log n, nor a faster growth rate like n^2.
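For reference, the usual textbook definitions, in the same plain notation:

    f(n) = O(g(n)) means: there exist c > 0 and n0 such that
    f(n) <= c*g(n) for all n >= n0.

    f(n) = Theta(g(n)) means: there exist c1, c2 > 0 and n0 such that
    c1*g(n) <= f(n) <= c2*g(n) for all n >= n0.

So Theta is a two-sided bound: O from above, plus the matching lower bound.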
n is some value depending on the size of the input, so if you have a lookup table that is O(n), then that memory needs to be initialized somehow. If you have a fixed-size lookup table then it is O(1), even if it is big.
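A toy contrast between the two cases (the popcount table is a classic fixed-size example; index_by_value is a name made up here for an input-sized one):

    # Fixed 256-entry table: O(1) space no matter how many numbers you
    # ever look up, even though it has to be initialized up front.
    POPCOUNT_8BIT = [bin(i).count("1") for i in range(256)]

    def popcount32(x: int) -> int:
        return sum(POPCOUNT_8BIT[(x >> s) & 0xFF] for s in (0, 8, 16, 24))

    # One entry per input element: O(n) space, and all of it has to be
    # written before it's useful.
    def index_by_value(values: list[int]) -> dict[int, int]:
        return {v: i for i, v in enumerate(values)}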
What if you allocate a huge chunk of memory and only use a small part of it? For example, checking if a list of numbers contains duplicates using a boolean array.
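Presumably something like this sketch, which assumes the numbers are non-negative integers no larger than some known max_value (a parameter invented here for illustration):

    def has_duplicates(nums: list[int], max_value: int) -> bool:
        # Allocates max_value + 1 slots even if nums is short, so the
        # allocation is driven by the value range, not the input size.
        seen = [False] * (max_value + 1)
        for x in nums:
            if seen[x]:
                return True
            seen[x] = True
        return False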
If your algorithm requires O(n) memory, any O(1) amount of memory can never be enough, no matter how huge. That's the entire point of O notation.
And if your implementation of an algorithm allocates more space in the big-O sense than it can actually touch (e.g. O(n) space for O(log n) time), that’s just a wasteful implementation. It doesn’t make the algorithm itself require more space than it has time to actually use.
Very. For instance, if you look at sorting algorithms on Wikipedia, they pretty much all list performance (best, worst, average) but also worst-case space complexity, in O notation.
...oops