Hacker News

These results are technically very complicated, but what they do is actually easy to explain: they construct a short "program" that basically enumerates all possible statements, checks their validity, and terminates if it detects a contradiction.

Whether a program that checks the axioms of Peano arithmetic, or of ZF set theory, ever halts is undecidable; we know that from Gödel's incompleteness theorems.

So then it becomes a sort of competition: who can write the shortest Turing machine that encodes that program. Kinda like a demo-scene for mathematicians.
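To make the "contradiction hunter" idea concrete, here's a toy sketch (my own, not from the article). The real constructions encode a full proof checker for ZF or Peano arithmetic inside the Turing machine; this version uses a tiny propositional system whose only inference rule is modus ponens, and halts as soon as it derives both a statement and its negation:

```python
# Toy "contradiction hunter": enumerate everything derivable from the
# axioms, and report True (i.e. halt) iff a contradiction shows up.
# The real machines do this for a serious proof system, not this toy one.

def finds_contradiction(axioms, max_rounds=1000):
    """Close the axioms under modus ponens; True iff both 's' and '~s' appear."""
    derived = set(axioms)
    for _ in range(max_rounds):
        new = set()
        for stmt in derived:
            if "->" in stmt:
                ante, cons = stmt.split("->", 1)
                if ante in derived and cons not in derived:
                    new.add(cons)  # modus ponens: from A and A->B, derive B
        if not new:
            return False  # closure reached without a contradiction
        derived |= new
        if any(s.startswith("~") and s[1:] in derived for s in derived):
            return True  # both s and ~s derived: inconsistent
    return False

# An inconsistent toy theory is detected (the "machine" halts)...
print(finds_contradiction(["p", "p->q", "q->~p"]))  # True
# ...while a consistent one closes up and nothing is ever found.
print(finds_contradiction(["p", "p->q"]))  # False
```

The competition is then about how few Turing machine states it takes to express a loop like this for a real axiom system.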




But let's note that "undecidable" has a specific meaning here, that batches of logic axioms can't prove their own consistency. There's still a possibility of using more powerful logic to prove that one of those will never cause a contradiction.

In particular, ZF can't prove itself, but it can prove that Peano is consistent.

These are very interesting machines, but mostly for reasons that go beyond the normal meaning of the word "undecidable". We have lots of unsolved math problems that can be encoded into vastly shorter Turing machines.


> But let's note that "undecidable" has a specific meaning here, that batches of logic axioms can't prove their own consistency

It's not actually that specific. There are many statements that we know are unprovable in ZF; ZF's own consistency is just one of them. The original work to find the 8000-state TM independent of ZF encodes an entirely different statement about graphs.

We are talking about finding TMs which encode a statement which is unprovable in, say, ZF. Specifically, the TM loops forever iff the statement (which is unprovable in ZF) is true. Not necessarily looking for inconsistencies Gödel-style.
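A classic illustration of turning a statement into a halting question is Goldbach's conjecture (every even number >= 4 is a sum of two primes): a machine that searches for a counterexample loops forever iff the conjecture is true. A bounded sketch of that search, just for illustration:

```python
# The "real" machine has no bound and simply never halts if the
# conjecture is true; here we cap the search so the code terminates.

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

def goldbach_counterexample(limit):
    """First even n in [4, limit] that is NOT a sum of two primes, else None."""
    for n in range(4, limit + 1, 2):
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
            return n
    return None

print(goldbach_counterexample(1000))  # None: no counterexample below 1000
```

So knowing BB(n) for the right n would settle the conjecture, which is exactly why these values are so wildly uncomputable.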


That's an interesting elaboration. It's also true that we can go beyond undecidability with an oracle, but that introduces a new "undecidable" barrier. Is there a relation between the tower of more powerful logics and the tower of more powerful oracles?


AFAIK yes. See the Large Cardinal Axioms (and combinatorial statements!)

""" The proposition that such cardinals exist cannot be proved in the most common axiomatization of set theory, namely ZFC, and such propositions can be viewed as ways of measuring how "much", beyond ZFC, one needs to assume to be able to prove certain desired results. In other words, they can be seen, in Dana Scott's phrase, as quantifying the fact "that if you want more you have to assume more" """

https://en.wikipedia.org/wiki/Large_cardinal

And https://en.wikipedia.org/wiki/Gentzen%27s_consistency_proof for a similar theme with Peano arithmetic and the epsilon_nought ordinal.


> In particular, ZF can't prove itself, but it can prove that Peano is consistent.

Consistent within Peano, or consistent within ZF? That is to say, could another large system of axioms exist which fully contains the Peano axioms, but in which the Peano axioms are inconsistent (in a non-trivial manner)?

I really don't know much about formal logical consistency, but am mildly curious as I see these sorts of claims mentioned.


Consistent means that the system doesn't contradict itself, so consistency is a property of Peano's own axioms. The system used to check that Peano is consistent doesn't have to be similar at all or share any axioms.


The technical answer to your question is yes, another axiomatic system can exist that is able to prove that Peano is inconsistent.

However, assuming that ZF is consistent, then that other axiomatic system would have to be inconsistent.


I recently learned that the physical world is also undecidable. For example, the spectral gap problem in quantum mechanics is one such problem. Scientists have proven that no matter how perfectly and completely we can mathematically describe a material on the microscopic level, we're never going to be able to predict its macroscopic behavior.

This has profound implications, such as for the possibility of a theory of everything. This does not necessarily mean that a theory of everything is impossible, but it does indicate that such a theory might not be able to provide a complete description or prediction of all physical phenomena, which is contrary to the belief of some physicists.


There is of course a catch, since physics can be simulated (with an exponential slowdown) by a Turing machine.

The catch is that "macroscopic behavior" in this context includes things that are not really macroscopically observable, or require conditions that cannot easily be specified. For the spectral gap, I think the way this manifests is that you can't ever be certain that you're in the ground state, but I'm not sure this is correct.


> since physics can be simulated

I must object. Our current models of physics can be simulated. But we cannot model what we do not understand, such as quantum gravity. And even then, we don't have that many decimal places for certain bits of our models.


By definition the universe can “simulate” itself. Although it’s true we have no proof that this doesn’t go beyond Turing machine in power (yet it’s a reasonable assumption).

Of course the goal of physics is to compress the universe into simpler laws. But there’s no inherent reason that it should be possible.


> By definition the universe can “simulate” itself.

A 1:1 scale map is still just a map. Simulation isn't an identity operation.

> yet it’s a reasonable assumption

What are the reasons for this claim?


>Scientists have proven that no matter how perfectly and completely we can mathematically describe a material on the microscopic level, we're never going to be able to predict its macroscopic behavior

Surely this has more nuance, as in some ways we can falsify the statement as written.


Yes there absolutely is nuance. The mathematics used to determine whether a quantum system has a gap or not is indeed undecidable, but that undecidability only kicks in as the size of your system approaches infinity.

For any finitely sized system, basically any actual system in the universe, the solution is computable/decidable.
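For a finite system the gap really is just a computation: diagonalize the Hamiltonian and subtract the two lowest eigenvalues. A small sketch using a 2-site transverse-field Ising chain (my choice of toy model, not the one from the undecidability paper):

```python
import numpy as np

def spectral_gap(H):
    """Gap between ground state and first excited state of a Hermitian H."""
    energies = np.linalg.eigvalsh(H)  # eigenvalues, sorted ascending
    return energies[1] - energies[0]

# 2-site transverse-field Ising Hamiltonian: H = -Z1 Z2 - g (X1 + X2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)
g = 0.5
H = -np.kron(Z, Z) - g * (np.kron(X, I) + np.kron(I, X))
print(spectral_gap(H))  # a strictly positive gap for this finite chain
```

The undecidability result is about whether the gap stays open as the chain length goes to infinity; no finite sequence of computations like this one settles that.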


People might like this lecture by Hugh Woodin. The message/gist I got from the lecture is that infinity is needed to support the finite.

https://youtu.be/1STrev8KcoM


This is the correct video : https://www.youtube.com/watch?v=_FOnZv-R94E (The way I recognize this video is when Hugh Woodin says "Don't worry, I am not going crazy")


This is a great summary. BB is thought provoking because it shows that a simple computation completely destroys any possible conception of mathematics. Probably not very useful but still very interesting.


> completely destroys any possible conception of mathematics

Hyperbolic, to put it gently. It changes our conception of mathematics to a different conception.


But programming is math[1], so is the conclusion that math will eat itself?

[1]: as many argue in the software patent cases


Math is mostly about proofs, and many behaviors of computation are simply not provable, but are still useful. Of course it's possible to use models, formal systems, simulation, formulas, and other tools of mathematics, but arguably there's a clear symmetry breaking. (It's not really surprising, as most numbers are unnameable and uncomputable. Many things in math are like these Turing machines; they're just not that useful.)



