But C is not Turing-complete, because its numbers (and pointers) are required to have a fixed upper limit, and that limit must itself be representable and usable in C. Python, on the other hand, has no such requirement, and you could imagine an implementation that could grow indefinitely (well, until it ran out of the physical universe). C is prohibited from doing that: there is always a theoretical limit present. You can set that limit hella high, but it's still there.
The arbitrary limit in C is not fixed by the code. So if you run out of space on a 32-bit machine with sizeof(size_t) == 4, you can run the same code on a 64-bit machine with sizeof(size_t) == 8. With mkdir and find, you have to change the code to do this.
You can translate any Turing Machine into a single C program, which will behave identically so long as you have enough memory. You cannot do this if you need to change the program when the amount of memory changes.
I'd argue that the process of taking a C program and compiling and running it with ever larger pointer sizes is Turing-complete, but a single iteration of that process is not.
"Python" doesn't exist though. It's silly to claim that Python is more powerful just because it doesn't tell you what it's going to do (run on some limited hardware) and C lets you choose. Reality is fundamentally, necessarily different from theory. Everything real is a mere approximation of theory, and vice versa. Even Turing's machine is limited by all the paper in the universes, unless you ignore reality (which is fine!).
It's false precision to say that C in reality with bounded pointers is different from C with unbounded pointers, but Python in reality with secretly bounded pointers is the same as Python with unbounded pointers.
No, it's not false precision. C requires numbers (and memory sizes) to have an upper limit, which it is obliged to tell you. Python doesn't require such a limitation to exist. One will, of course, exist in practice, since it's impossible to build a Turing machine, only a sufficiently huge finite-state machine, but that is a practical consideration. In practice, all language implementations are FSMs. But C is an FSM even in theory, unlike Python.
You would have better luck arguing that there is a reading of the standard that allows for unboundedly huge file streams, and that fread()/fwrite()/fseek() could then be used to faithfully implement a Turing machine.
I don't think there's a real difference between Python and C.
With Python you can't make your program use more than 4 GB of address space if you run it with a 32-bit interpreter; you have to swap the interpreter. In C the same goes for the compiler. And yes, you can inspect the upper limit by looking at the width of size_t, but the same program will see different values under 32- and 64-bit compilers. You _can_ make a program behave differently based on size_t's width, but you're not required to. It doesn't change the fact that, fundamentally, Python is no more Turing-complete than C just because you can't do that inspection in Python (that's my assumption; I don't know Python well enough, actually).
Maybe it all boils down to how CPUs work, and maybe it's safe to say that the incompleteness comes from the CPU implementation? You could argue that Python interpreters are written in C/C++, but of course we can imagine them being written in assembly.
Edit: after I read some other comments I think I see the point - that indeed the problem is the implementation (on CPU).
Then C-the-language can be Turing complete, even if C-as-actually-implemented is not. Just implement a Python interpreter. (Or you can also just implement bignums in C and use those for your computation.)
Why not? Whether you're running Python code in your C interpreter or just running C code, the same memory restrictions apply based on your hardware. CPython doesn't impose a tighter bound on bignums than a non-C-based implementation would.
EDIT: See the GMP library, which states: "There is no practical limit to the precision except the ones implied by the available memory in the machine GMP runs on" [0]
The C specification limits programs to addressing a finite amount of memory, though it can be made arbitrarily large by an implementation. The Python specifications do not imply this though real interpreters do.
> though it can be made arbitrarily large by an implementation
Yes, this is my entire point
Why should I care what the language specification states in a computability-theory discussion? There only needs to exist a method to accomplish our goal; whether the method conforms to the specification doesn't seem relevant to me.
Would it be fair to say, then, that "Python" is Turing complete, while the CPython/PyPy implementations are not, because they will always implicitly run up against C's memory limitations and therefore do have a hard limit? And that Python itself as a language is Turing complete because, unlike C, it does not place such limits on the user?
A C program which doesn’t inspect its pointer representations can conceivably use unlimited memory. For pointer values whose representation can be statically determined to be uninspected by the program, the C implementation can use memory locations outside of the representable address space, and thus make unlimited memory accessible. Functionally there is no need for self-contained C programs to ever inspect their pointer representation.