
While strictly speaking true, I don't think it is the same argument at all. You are talking about a restriction of the runtime (much like mkdir's argument-length limit or the maximum filesystem depth), even though it leaks into the standard because the standards people care about physical hardware, not theoretical hardware.

The WIDTH and ITER limits being actual constants that are part of the program makes all the difference compared to the C pointer limitations, which are part of the execution environment.




But C is not Turing-complete, because its numbers (and pointers) are required to have a fixed (if arbitrary) limit, which itself must be representable and usable in C. Python, on the other hand, has no such requirement, and you could imagine an implementation that could grow indefinitely (well, until it ran out of the physical universe). C is prohibited from doing that; there is always a theoretical limit present. You can set that limit hella high, but it's still there.


It's still not comparable to mkdir and find.

The arbitrary limit in C is not fixed by the code. So if you run out of space on a 32-bit machine with sizeof(size_t) == 4, you can run the same code on a 64-bit machine with sizeof(size_t) == 8. With mkdir and find, you have to change the code to do this.

You can translate any Turing Machine into a single C program, which will behave identically so long as you have enough memory. You cannot do this if you need to change the program when the amount of memory changes.
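As a trivial sketch of the size_t point: the source below is identical on both machines, yet it reports a different ceiling depending on which implementation compiles it (say, a 32-bit versus a 64-bit toolchain). The program text never changes.

  /* Same source everywhere; the ceiling comes from the implementation.
     Compile with e.g. "gcc -m32 demo.c" and "gcc -m64 demo.c" to compare. */
  #include <stdio.h>
  #include <stdint.h>

  int main(void) {
      printf("sizeof(size_t) = %zu bytes\n", sizeof(size_t));
      printf("SIZE_MAX       = %ju\n", (uintmax_t)SIZE_MAX);
      return 0;
  }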


I'd argue that the process of taking a C program and compiling and running it with ever larger pointer sizes is Turing complete, but a single iteration of this process is not.


"Python" doesn't exist though. It's silly to claim that Python is more powerful just because it doesn't tell you what it's going to do (run on some limited hardware) and C lets you choose. Reality is fundamentally, necessarily different from theory. Everything real is a mere approximation of theory, and vice versa. Even Turing's machine is limited by all the paper in the universes, unless you ignore reality (which is fine!). It's false precision to say that C in reality with bounded pointers is different from C with unbounded pointers, but Python in reality with secretly bounded pointers is the same as Python with unbounded pointers.


No, it's not false precision. C requires the numbers (and the memory size) to have an upper limit, which it is obliged to tell you. Python doesn't require such limitations to exist. They will, of course, exist in practice, since it's impossible to build a Turing machine, only a sufficiently huge finite-state machine, but that is a practical consideration. In practice, all language implementations are FSMs. But C is an FSM even in theory, unlike Python.

You would have better luck trying to argue that there is a reading of the standard that allows for unboundedly huge file streams, and that fread()/fwrite()/fseek() could then be used to faithfully implement a Turing machine.
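For what it's worth, a minimal sketch of what that file-stream reading could look like: the tape is a binary stream and the head only ever moves by relative fseek() calls, so the program never stores an absolute position in any fixed-width integer. Whether the standard actually lets such a stream grow without bound is exactly the contested part; this only shows the mechanics of a half-infinite tape, with tmpfile() standing in for the tape.

  /* Sketch: a binary stream as a half-infinite Turing-machine tape.
     The head position is only ever changed with relative seeks. */
  #include <stdio.h>

  static int read_cell(FILE *tape) {
      int c = fgetc(tape);
      if (c == EOF) return 0;          /* cells past the end read as blank */
      fseek(tape, -1L, SEEK_CUR);      /* undo the advance caused by fgetc */
      return c;
  }

  static void write_cell(FILE *tape, int sym) {
      fputc(sym, tape);
      fseek(tape, -1L, SEEK_CUR);      /* stay on the current cell */
  }

  static void move_right(FILE *tape) { fseek(tape,  1L, SEEK_CUR); }
  static void move_left (FILE *tape) { fseek(tape, -1L, SEEK_CUR); }

  int main(void) {
      FILE *tape = tmpfile();          /* empty tape, head on cell 0 */
      if (!tape) return 1;
      write_cell(tape, '1');
      move_right(tape);
      write_cell(tape, '1');
      move_left(tape);
      printf("head reads: %c\n", read_cell(tape));   /* prints 1 */
      fclose(tape);
      return 0;
  }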


I don't think there's a real difference between Python and C. With Python you can't make your program use more than 4 GB of address space if you run it with a 32-bit interpreter; you have to swap the interpreter. In C the same goes for the compiler. And yes, you can inspect the upper limit by looking at the width of size_t, but it will look different under 32-bit and 64-bit compilers even though the program is the same. And you _can_ make a program behave differently based on size_t's width, but you're not required to. It doesn't change the fact that, fundamentally, Python is no more Turing-complete than C just because you can't do that inspection in Python (that's my assumption, I don't know Python well enough actually).

Maybe it all boils down to how CPUs work, and maybe it's safe to say that the incompleteness comes from the CPU implementation? You can argue that Python interpreters are written in C/C++, but of course we can imagine them being written in assembly instead.

Edit: after reading some other comments I think I see the point - that the problem is indeed the implementation (on the CPU).


> But C is a FSM even in theory, unlike Python

Write a Python interpreter in C and it's clear why your logic fails. You've reaped the benefits you claim for Python while remaining in C.


Python-the-language can be Turing-complete even if Python-as-actually-implemented is not.


Then C-the-language can be Turing complete, even if C-as-actually-implemented is not. Just implement a Python interpreter. (Or just implement bignums in C and use those for your computation.)


You can't implement a Python interpreter with access to infinite memory in C as specified. That is the point.


Why not? Whether you're running Python code in your C-based interpreter or just running C code, the same memory restrictions apply based on your hardware. CPython doesn't impose a lower ceiling on bignums than a non-C-based implementation would.

EDIT: See the GMP library, which states that "There is no practical limit to the precision except the ones implied by the available memory in the machine GMP runs on" [0].

https://gmplib.org/#WHAT
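For a toy illustration of the bignum route (just a sketch, not anyone's production code): with GMP, plain C happily manipulates integers far wider than any machine word, limited only by available memory. Build with something like cc toy.c -lgmp.

  /* Toy GMP usage: an integer far wider than any machine word. */
  #include <stdio.h>
  #include <gmp.h>

  int main(void) {
      mpz_t n;
      mpz_init_set_ui(n, 2);              /* n = 2 */
      mpz_pow_ui(n, n, 100000);           /* n = 2^100000 */
      printf("2^100000 has %zu decimal digits\n", mpz_sizeinbase(n, 10));
      mpz_clear(n);
      return 0;
  }

(The memory GMP allocates is, of course, still addressed through C pointers, which is exactly the ceiling the rest of the thread is arguing about.)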


The C specification limits programs to addressing a finite amount of memory, though that amount can be made arbitrarily large by an implementation. The Python specification does not imply this, though real interpreters do impose such a limit.


> though it can be made arbitrarily large by an implementation

Yes, this is my entire point

Why should I care what the language specification states in a computability-theory discussion? There only needs to exist a method to accomplish our goal; whether the method conforms to the specification or not doesn't seem relevant to me.


Would it be fair to say, then, that "Python" is Turing complete, while the CPython/PyPy implementations are not, because they will always implicitly run up against C's memory limitations and therefore do have a hard limit? Python itself as a language would be Turing complete because it does not place explicit limitations on the user the way C does.


A C program which doesn’t inspect its pointer representations can conceivably use unlimited memory. For pointer values whose representation can be statically determined to be uninspected by the program, the C implementation can use memory locations outside of the representable address space, and thus make unlimited memory accessible. Functionally there is no need for self-contained C programs to ever inspect their pointer representation.


The difference is very small though: you could say that WIDTH and ITER must be defined in the execution environment (the shell) before execution and that the rest of the code is the program, and then we are in the same situation as in C.


If you define WIDTH and ITER prior to execution you are just giving arguments to your program.

Maybe using the term "environment" was not the best choice; what I mean is that WIDTH and ITER are program variables that affect program behavior and output (they appear in regexes, etc.), whereas (most) C programs don't actually reference or depend on the pointer width (other than crashing if it's too small); it is an internal detail of the C compiler and the underlying hardware that only happens to be visible to the programmer due to leaky abstractions. I don't think the two are comparable.


I mean... isn't this just `sizeof` in C?

Honestly, I'm struggling to think of any real-world C codebase I've worked with that didn't care about pointer size.
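Right - e.g. something like the following (a made-up snippet, but representative): ordinary C code can both report and branch on the pointer width at compile time.

  /* Hypothetical example of code that cares about pointer width.
     UINTPTR_MAX is technically optional in the standard but ubiquitous. */
  #include <stdint.h>
  #include <stdio.h>

  int main(void) {
  #if UINTPTR_MAX <= 0xFFFFFFFFu
      puts("32-bit pointers: use the compact layout");
  #else
      puts("wider pointers: use the 64-bit-friendly layout");
  #endif
      printf("sizeof(void *) = %zu\n", sizeof(void *));
      return 0;
  }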



