In practice, many programs ignore the fact that malloc can return NULL, and so do some OSes and their malloc implementations, if they support (or enable, or require) overcommit. These are perhaps operating under "the illusion of infinite memory" (in the GC sense): free, in this context, is simply a way of marking data as invalid and no longer to be referenced - a method of poisoning data for debugging purposes.
But of course, I've had malloc return NULL - very finite.
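For programs that do want to handle it, the classic defensive pattern is a checked wrapper around malloc. A minimal sketch (the wrapper name xmalloc is a common convention, not anything from this thread):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical checked-allocation wrapper: fail loudly instead of
     * letting a NULL pointer propagate into the program. */
    static void *xmalloc(size_t size)
    {
        void *p = malloc(size);
        if (p == NULL) {
            fprintf(stderr, "out of memory (requested %zu bytes)\n", size);
            exit(EXIT_FAILURE);
        }
        return p;
    }

    int main(void)
    {
        char *buf = xmalloc(64);
        strcpy(buf, "allocation checked");
        puts(buf);
        free(buf);
        return 0;
    }

Of course, on an overcommitting system that NULL check may never fire for ordinary allocation sizes, which is exactly the point below.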
The specification allows for this, yes. However, on some platforms (including Linux with glibc by default, I believe), malloc() essentially never fails: it allocates virtual memory optimistically, so the first you hear of an out-of-memory condition is when the system slows down due to paging, and the next thing you notice is the OOM killer nixing a process.
Of course, other platforms, especially embedded ones, behave differently.
Depending on the vm.overcommit_memory sysctl, Linux might give out address space well beyond the size of physical RAM plus swap, hoping many of those pages are never written to (e.g., most threads never get anywhere near the bottom of their default stacks).
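You can watch this happen with a short test program - a sketch assuming a 64-bit Linux box with overcommit enabled; 64 GiB is an arbitrary size chosen to exceed most machines' RAM plus swap:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* On an overcommitting Linux system, asking for 64 GiB of address
         * space will typically succeed even with far less RAM + swap. */
        size_t size = 64ULL * 1024 * 1024 * 1024;
        char *p = malloc(size);
        if (p == NULL) {
            puts("malloc failed up front (overcommit likely disabled)");
            return 1;
        }
        puts("malloc succeeded; no physical pages committed yet");

        /* Writing to the pages is what forces the kernel to back them;
         * on a machine without 64 GiB free, this is where paging starts
         * and, eventually, where the OOM killer steps in. Don't run this
         * on a box you care about. */
        memset(p, 0xA5, size);

        free(p);
        return 0;
    }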
Challenge: Write a Turing-complete machine using a finite number of registers and a state machine. One or all of the registers may contain a rational number of unlimited precision. (This has already been done, so if you are aware of the existing machines, you have to create a new one.)
No, it's possible. The key insight is that a "rational number of unlimited precision" can store an arbitrarily large amount of data, so it can stand in for a Turing machine's unbounded working tape: for instance, the tape contents on either side of the head can be encoded as the digits of two unbounded integers, and moving the head becomes a multiply or divide by the base.
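A minimal sketch of that encoding - one unbounded register used as a stack of symbols; two such stacks give you a full tape. C's fixed-width integers obviously aren't unlimited precision, so unsigned long long here is just a stand-in for a bignum/rational type such as GMP's:

    #include <stdio.h>

    /* Encode a stack of base-10 symbols in a single integer "register".
     * push: r = r*10 + s;  pop: s = r % 10, r /= 10.
     * Two such registers, one holding the tape left of the head and one
     * the tape to the right, emulate a Turing machine's unbounded tape;
     * moving the head pops from one register and pushes onto the other. */

    typedef unsigned long long reg; /* stand-in for an unbounded integer */

    static void push(reg *r, unsigned sym) { *r = *r * 10 + sym; }

    static unsigned pop(reg *r)
    {
        unsigned sym = (unsigned)(*r % 10);
        *r /= 10;
        return sym;
    }

    int main(void)
    {
        reg left = 0, right = 0;

        /* Write 3, 1, 4 to the right of the head... */
        push(&right, 4);
        push(&right, 1);
        push(&right, 3);

        /* ...then move the head right twice: symbols cross registers. */
        push(&left, pop(&right)); /* reads 3 */
        push(&left, pop(&right)); /* reads 1 */

        printf("under head: %u, left register: %llu\n", pop(&right), left);
        /* prints: under head: 4, left register: 31 */
        return 0;
    }

With genuinely unbounded registers, the state machine's transition table supplies the finite control, and the push/pop arithmetic supplies the tape - which is all a Turing machine needs.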