
Ok, I'm not a mathematician (or a physicist), but this seems... dubious... Like... what?:

> Moreover, a better terminology for the so-called real numbers is "random numbers", as their series of bits are truly random

0.5 is a real number. Its series of bits is not random at all. sqrt(2) is also a real number, and its digits look sort-of random, but they're not really: they're exactly the sequence of digits that forms a number that, when squared, equals 2. You can easily develop very simple formulas or programs to generate them, so in a "Kolmogorov complexity" sense, it's not random at all. Point is: the concept of "having random digits" is in NO way intrinsic to "being a real number". I guess what he meant to say was "non-computable numbers"? So he's saying "in a finite volume, there can't be non-computable numbers"... or something?

Again: not a mathematician or a physicist, but this comes off as pretty cranky to me. I would love to be corrected.




The author alludes to the fact that the overwhelming majority of reals have an infinite number of decimal places that constitute an algorithmically random sequence[1].

Sure, plenty of real numbers have finite (0.1, 0.01, ...) or non-random (0.111..., 0.0111..., ...) sequences of decimal places, but there is only a countable infinity of these, whereas there is an uncountable infinity of the other, "random," kind.

Usually we deal almost exclusively in the only-countably-infinite subset of the reals which are not random, and it's counterintuitive that these comprise a tiny, tiny archipelago of order in a huge engulfing ocean of randomness, but such is the case.

Gregory Chaitin's "How real are real numbers?"[2] has some relatively straightforward further reading on this if you're interested.

1: https://en.wikipedia.org/wiki/Algorithmically_random_sequenc...

2: https://arxiv.org/abs/math/0411418


The author's writing has a lot of background behind it that is true, and yet the author's argument that they should be called "random numbers" is poor, at best. It is not typically helpful to think of real numbers that way - at the very least you're "missing" a tiny but hugely important subset that aren't random in any sense. But more importantly, it doesn't match up with any of the usual definitions of real numbers, since those involve no such restrictions on randomness of digit expansions (and generally avoid digit expansions entirely). We might as well argue that the real numbers ought to be called the non-zero numbers, since almost every one of them is non-zero.

I don't think the author seems particularly aware of the existence of such arguments in the past (I assume he comes from a physics background and is new to this - though so am I). Glancing through the works cited I don't see references to past treatments of this idea like Chaitin's. If the author is not a crank, he is doing a poor job of convincing us of it.


Chaitin is a serious scientist/mathematician who knows what he is talking about, and there is plenty of discussion to other works and references in his own research. And his published work on algorithmic information theory is quite extensive.

He is using the notion of randomness as "uncompressibility", which is a very mainstream view (perhaps better known as Kolmogorov complexity; Kolmogorov is far more widely known than Chaitin).

Just because you are not convinced doesn't mean he's a crank, it just means your background in this area is limited.

I would also note that you're reading a popular text, one that is purposefully designed to reach a mass audience, so the lack of formalism is likewise intentional, and not a reflection of crankism. You're welcome to look up his research papers if you'd like to see the difference.


I think you misread my comment which was not at all commenting on Chaitin! I was commenting on the article by Gisin, the one this thread is on.


He's a very reputable theoretical quantum physicist with several respectable results to his name. I know because my PhD was in a similar field. You could perhaps accuse him of ignorance of results in other specialities, but not crankism.


Not necessarily. Linus Pauling had several respectable results to his name, too. Even the smartest people can be so wrong when working in fields even a little bit outside their specialism that they are almost indistinguishable from cranks.


This IS his field of research.


I only accused the author of doing a bad job of not looking like a crank. I realize that non-cranks do that sometimes.


I don't know if it's still the case, but arxiv papers without a subject classifier used to be more likely to be "cranky" (contrast the /math/ in the Chaitin paper url.)

So I don't know about the paper here - I only read the abstract - but I wouldn't get too hung up on terminology. "Random" is just an easier way of saying "uncomputable" I guess. But the broad topic itself is quite respectable and of interest to non-cranks, for example, see John Baez' "Struggles with the Continuum"[1]:

> Is spacetime really a continuum, with points being described — at least locally — by lists of real numbers? Or is this description, though immensely successful so far, just an approximation that breaks down at short distances? Rather than trying to answer this hard question, let us look back at the struggles with the continuum that mathematicians and physicists have had so far....

1: http://math.ucr.edu/home/baez/continuum.pdf

EDIT: The author seems like a perfectly respectable, non-crank physicist (https://en.wikipedia.org/wiki/Nicolas_Gisin) as per another comment here.


That difference is due to a systematic change in how arXiv generates URLs for submissions, starting a few years ago. No arXiv submissions today will have a URL with a category. Articles (each with a unique arXiv id YYMM.xxxxx) can have multiple topic tags; the article under discussion has two tags, one of which is "quant-ph". Cranky articles which cannot be refuted on technical grounds are often found on the "general" topic tags.

As an aside, since the above comment is essentially trying to infer the quality of the paper from metadata, it might be worth knowing that the author is a well-regarded quantum physicist with many highly cited papers.


Nitpick: non-random numbers are rare (this word is usually formalized as "the set of all non-random numbers has measure zero"), but still uncountable. The set of computable sequences is countable, but it is only a tiny subset of non-random sequences. For example, 0.0x₁0x₂… is non-random (= partially predictable) for any sequence x₁x₂…, and this set is obviously uncountable.


I think sets like 0.0x₁0x₂… ought to be considered "mixed random", not "non-random", since they are homomorphic to a superset of random numbers. (My terminology is hand-wavy.)


Is this a difference between random and algorithmically random? I'd have expected "not computable" to be synonymous with "algorithmically random."


Could you show us one of these "random" reals?

There is a simple algorithm for generating every possible decimal number:

  x.0
  x.1
  x.2
  ...
  x.01
  x.02
  ...
  x.10
  x.11
  x.12
  ...
  x.20
  x.21
  ...
  x.30
  x.31
  ...
  x.99
  x.001
  x.002
  ...
In binary, it's even simpler; you just generate every finite string of bits.


Your proposed enumeration of decimals (counter-intuitively) would miss an uncountably infinite number of values. This enumeration is not possible in any fashion, which you can prove fairly easily with Cantor's Diagonal Argument[1].

The punch-line is that there are more real numbers between 0 and 1 than there are whole numbers greater than 0. The proof basically shows that even if you imagine that you had a countably-infinitely-long list of all the real numbers between 0 and 1, you can construct a number that must be missing from the list -- which is a contradiction, and thus no such list can exist (implying that there are more numbers between 0 and 1 than numbers greater than 0).

For instance, your enumeration is missing every number with a non-terminating expansion: every irrational, and even recurring decimals like 1/3. The irrationals alone are uncountably infinite.
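
If it helps to see the mechanics, here's a minimal finite sketch of the diagonal step in Python (my own toy illustration, not the formal proof; a careful proof also picks replacement digits avoiding 0 and 9 so the 0.999... = 1.000... ambiguity can't bite):

    # Pretend this is the start of a claimed "complete list" of reals in [0, 1),
    # each represented as a function from digit position -> digit.
    listed = [
        lambda n: 3,                       # 0.3333...
        lambda n: (n + 1) % 10,            # 0.1234567890...
        lambda n: 9 if n % 2 == 0 else 0,  # 0.9090...
    ]

    def diagonal_digit(n):
        # Change digit n of number n, so the result differs from listed[n] at place n.
        return (listed[n](n) + 1) % 10

    print("0." + "".join(str(diagonal_digit(n)) for n in range(len(listed))))
    # -> 0.430, which differs from entry k of the list at decimal place k.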

[1]: https://en.wikipedia.org/wiki/Cantor's_diagonal_argument


>The proof basically shows that even if you imagine that you had a countably-infinitely-long list of all the real numbers between 0 and 1, you can construct a number that must be missing from the list -- which is a contradiction, and thus no such list can exist (implying that there are more numbers between 0 and 1 than numbers greater than 0).

So, there is no list of all the real numbers between 0 and 1, but you are claiming there are uncountably infinite of them?


That's the definition of uncountably infinite: can not be put in 1-to-1 correspondence with the integers (can not be listed).


Yes, because if there were such a list then you could assign a 1-to-1 mapping (by list index) of each number between 0 and 1 to a whole number, which would prove they have the same size. If it is impossible to have a complete list of all real numbers between 0 and 1, then there cannot be such a mapping, and thus you'll "run out" of integers before you've counted all of the reals (more correctly, you're showing that the size must be larger, because if it were smaller or of equal size it could fit in such a list).

Numberphile did a pretty good introductory video to this if you want to learn more: https://www.youtube.com/watch?v=elvOZm0d4H0.


That is pretty much the definition of uncountability :)


It's counter-intuitive, but your algorithm actually only generates a subset of the rational numbers. That's because all of your numbers have a finite number of decimal digits. Anything with an infinite expansion, like 1/3 or pi, will never appear in your list. (Though numbers close to them will.)

Most reals don't have a generating algorithm, since there is an uncountable number of reals, but only a countable number of algorithms. (Assuming an algorithm has to be a finite length program.)
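
To spell out the countability half: a program is just a finite string over some finite alphabet, and finite strings can be listed exhaustively, shortest first. A throwaway Python sketch (the two-symbol alphabet is a stand-in for real source code):

    from itertools import count, islice, product

    def all_finite_strings(alphabet="01"):
        # Every finite string, ordered by length then lexicographically: a countable list.
        for n in count(1):
            for chars in product(alphabet, repeat=n):
                yield "".join(chars)

    print(list(islice(all_finite_strings(), 10)))  # ['0', '1', '00', '01', ...]

Every program occurs at some finite position in that list, so there are only countably many programs, hence only countably many computable reals.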


I'm not sure 1/3 is exactly a number. It seems to be an expression when written out. pi, likewise, appears to be a word, not a number.


Numbers aren't lists of digits. They're abstract concepts that are referred to by lists of digits for the purpose of convenience. You can even have numbers without having invented digits. It's just annoying to write out one million, two hundred thirty-four thousand, five hundred sixty-seven and eighty-nine hundredths. So, we say 1,234,567.89 (or 1.234.567,89 for countries that use the comma for decimals).

0.5 in base 10 is the same as 0.1 in base 2, which is the same as 0.11111... in base 3. And the choice of base is completely arbitrary, with some societies using others, like base 12 or base 60.
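
(If the base-3 claim looks suspicious, it's a two-line check with exact rational arithmetic in Python; the cutoff of 50 terms is arbitrary:)

    from fractions import Fraction

    # partial sums of 0.111... in base 3: 1/3 + 1/9 + 1/27 + ...
    partial = sum(Fraction(1, 3**k) for k in range(1, 50))
    print(Fraction(1, 2) - partial)  # 1/(2*3**49): the gap vanishes in the limit, so 0.111... (base 3) = 1/2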


That gives every terminating decimal or binary number. For example, you will never list 1/3, or pi or sqrt(2).

To see why you cannot generate every possible real number as a list, look up Cantor's diagonal argument.


Are you sure 1/3 is a real number?


Real numbers are defined as a particular extension of the rational numbers, and 1/3 is rational.


"Real numbers" here refers to a very specific construction of numbers. There is pretty good argument, which the actual post is about, that states this construction is actually rather unrealistic.

The issue being that there are way too many "real numbers".


It's as "real" as 1/2, which is on your list.


I guess you also need to interpret each number in the list in each possible base.


I'll be generous and assume you mean decimal as in the data-type: an interesting set of "floating point" numbers that behaves really badly (not a field, for example).


I'm a mathematician by training (though this is thoroughly outside the scope of a mathematician's job) and completely agree. That said, there's a real topic in the philosophy of mathematics called "finitism" that argues we should only consider finite mathematical objects and would exclude almost every real number. [1]

[1] https://en.wikipedia.org/wiki/Finitism


"Almost every" is a very understated way to put that.


“Almost every” is a term of art in mathematics. It means basically “100% wherever you look, but not really 100%”

https://en.m.wikipedia.org/wiki/Almost_everywhere


Isn't it the opposite, more like 100% everywhere you don't look, and nowhere you do? I think the really real numbers are the countable subset of them which are in some way expressible, and the uncountable subset which are in no way expressible except in vague generalities like "they're not shaved by the barber" are in a sense less real.


I like your definition of "look".

I was going with where a dart* would land (100% of the time it will "really" hit a point that can't be expressed using computable numbers), but you're using it to mean anything we can compute or express. That is, if we're looking somewhere, it has to be somewhere we can say that we're looking, and therefore expressible. If we take your meaning, then yeah, that's what I mean as well :)

* (Yes, the dart is an abstract one. No comments about the Planck length, please)


The example with the dart plays into the «random numbers» notion, i.e. you have to use some kind of random process to get hold (non-constructively) of such a number.


Expressed as a percentage, it would be 100%, though ;)


Well, almost...


Nope, exactly. The measure of the interval [0, 1] excluding any countable subset (for example all rational numbers) is still exactly 1, that's usually what mathematicians mean by "almost every", here "almost every real number in [0, 1] is irrational" :)
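
For anyone who hasn't seen it, the covering argument behind that is essentially one line (a sketch):

    % cover the n-th point of the countable set by an interval of length eps/2^n
    \mu(\{x_1, x_2, x_3, \ldots\}) \;\le\; \sum_{n=1}^{\infty} \frac{\varepsilon}{2^n} \;=\; \varepsilon \qquad \text{for every } \varepsilon > 0

Since this holds for every ε > 0, the countable set has measure 0, and what's left of [0, 1] keeps measure exactly 1.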


I was joking, but to a point.

The problem is that "100%" =/= "All". But most of the time, it is considered to be equivalent. The "almost all" phrase is used instead of "100%" precisely because dividing by infinity doesn't yield a useful result in some contexts.

In more... "conventional" cases, if I tell you that 100% of the items are blue, you can conclude that none of them are red. With the intermingling of countable and uncountable sets, this statement fails. You have a situation where 100% of the items in the superset are blue, and there are also some red ones.

So it's "almost 100%" in the same way it's "almost all". I'm using the word "almost" to mean "infinitesimally close to"


Really, exactly 100%.


Yeah. But “100%”, and “all” or “every”, are considered synonyms most of the time. Not in this case though.


nice catch


How could you exclude "almost every real number", if such a thing should not even be considered?


You simply choose not to invent any of them.


You just leave the axiom of infinity out.


I would phrase the crux as follows:

Any physics theory is basically a map from one set of measurements (past) to another set of measurements (future). Any physical measurement can only have finite precision, due to fundamental constraints from physics and information theory. The set of finite precision numbers has vanishing measure in the set of all real numbers (since finite precision is a subset of rationals). So are real numbers really necessary?

As a physicist, while this question is definitely interesting, it’s not clear what will come out of this line of questioning (aka research).


I think you're suggesting that "real" numbers are an abstraction that is convenient to model with, but aren't actually fully representable in the universe, because of Planck and Heisenberg and such? I would agree with that view.


It's not even necessarily about quantum stuff. The point is that any practical measurement system (even a classical one) can only deal in finite precision numbers. And since that set is of vanishing measure on the real line, there isn't an obvious notion of continuity/smoothness (my knowledge here is limited). That can lead to some surprises.

For the more advanced readers: this result reminds me of how Haag's theorem shows (rigorously) that the interaction picture for relativistic interacting QFTs can't exist (assuming operators sit at points indexed by real numbers), while in a Wilsonian sense QFTs are perfectly fine, because there's a UV cutoff, and all measurements have a finite resolution.


If we're going with the theory that things with a wavelength shorter than the Planck length don't exist then the sampling theorem should allow us to describe the universe with a discrete set of points.

General relativity complicates things (as usual) but the sampling theorem can be generalised to non-uniform samples so it should probably still work somehow.

In both cases this discrete theory is equivalent to a theory based on real numbers though, so in that sense there's not really any way to say which is better.


Well, computable numbers are countable, so almost all real numbers are non-computable (also they're dense in R).

So while your objection is valid, in practice any truly randomly selected real number is non-computable.


I guess I'm saying: such a basic mistake in terminology in the abstract doesn't bode well for the soundness of the paper in general. Being able to properly tell apart the terms "real number" and "non-computable number" is a fairly basic qualification.


A quick check on the author leads to https://en.wikipedia.org/wiki/Nicolas_Gisin. I think it's going to be rather safe to conclude that this author is well-aware of the difference between the computable and non-computable parts of R.


It's not a "qualification", it's just someone using a slightly different set of definitions for convenience. Anyone "qualified" to read the paper understands the meaning.


To nitpick: to say that "in practice" you won't select a rational is understating the case. Being uncountable, the set of irrationals is infinitely larger. The chance of selecting a rational "randomly" is zero as far (heh) as zero has any practical meaning.


A computable number can be a normal number.

http://www.glyc.dc.uba.ar/santiago/papers/absnor.pdf


How does one go about truly randomly selecting a real number?


This sort of depends on whether you're talking about "randomly selecting a real number" in the real word or in the mathematically abstract world. In the real world, you can't do it, because almost all the real numbers are non-terminating decimals. Or to put it another way, any selection process is a computation, and almost all the real numbers are uncomputable and so can't be found by any computation.

To randomly generate a real number in the abstract, I suppose you could do a random walk of infinite steps on the real number line, with each successive step randomly going left or right and shrinking the distance walked. Off the top of my head I'm not sure if that's rigorous, but it sounds like it would work.


Simply write its binary expansion by flipping coins (heads=1, tails=0), forever.
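
Worth noting that if you actually try to carry that out, any procedure you can run to completion only ever fixes a finite prefix, i.e. a dyadic interval rather than a single real. A Python sketch of the finite part:

    import random

    def coin_flip_prefix(n):
        # First n binary digits of a "random real" in [0, 1); the infinite tail stays unwritten.
        return "0." + "".join(random.choice("01") for _ in range(n))

    print(coin_flip_prefix(32))  # pins the number down only to an interval of width 2**-32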


Very carefully.


The question is whether the real numbers (as a set) are real, not whether each specific real number is a thing.

Transcendentals — the numbers that make the real numbers distinct from the algebraic numbers — are not necessary for anything we do, and all the weirdest corner cases in math seem to come from treating them as a thing. sqrt(2) and 0.5 aren’t transcendental. 0.5 is a rational. sqrt(2) is an algebraic. You can express them in finite terms (as roots of polynomials with integer coefficients).

Most reals don’t need to exist for any purpose. All we need is some kind of vector space of constants (e.g. Planck’s constant, electro-permeability, the speed of light, the mass of an electron) over the unbounded algebraics. It’s entirely possible that some of these constants would in fact be transcendental, but such a space is much much smaller than the reals. These are the numbers we can construct.


pi and e are very important numbers!


They also belong to the much smaller set of computable numbers. The set of all reals is almost entirely full of absolutely useless numbers: ones that can't be compressed or expressed by any algorithm or notation system with a finite amount of space.


I've never seen one of these supposedly useless numbers. Every one I've seen can be expressed. ;-)


Right, they might well be basis vectors.


> 0.5 is a real number. Its series of bits is not random at all

Would you say that an (arbitrarily long) sequence of coin tosses that all come up heads "is not random at all"? I'd say it's vanishingly unlikely to occur (if the coin is actually 50/50), but just as unlikely as any other particular sequence of the same length. It's like if a lottery gave out [1, 2, 3, 4, 5, 6]: it's a remarkable pattern, but no more or less likely than any other particular result. The same goes for the never-ending decimals of 0.5 and sqrt(2), and it's misleading to draw general conclusions from such specifically-chosen examples.


As of the end of April 2018, we've had 400 consecutive months where the world's temperature was hotter than average for that month, yet a whole lot of politicians say that's just due to random variation.

I bet they could take a long run of heads in stride without thinking that maybe they have a biased coin.


> It's like if a lottery gave out [1, 2, 3, 4, 5, 6]: it's a remarkable pattern, but no more or less likely than any other particular result.

Do you know if there's a name for the fallacy you've alluded to? (Assuming that an ordered sequence is less likely than any other.)


Interestingly, the fallacy is actually good reasoning. Even though every permutation is equally likely, the permutations fall into equivalence classes of "notoriety", and the class of ordered sequences is far smaller (== less likely) than the class of disordered sequences (which are all equivalently noninteresting).

(This is entropy, as it applies to both information and the physical world.)
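
A quick back-of-the-envelope in Python for a 6-of-49 lottery (the "notable" class I chose, runs of consecutive numbers, is my own stand-in for "ordered"):

    from math import comb

    total = comb(49, 6)       # 13,983,816 equally likely draws
    consecutive = 49 - 6 + 1  # draws forming a run like {1,2,3,4,5,6}: just 44 of them
    print(consecutive / total)  # ~3.1e-06: each single draw is as likely as any other,
                                # but the "striking" equivalence class is minuscule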


> 0.5 is a real number

Yes, and it is one of the few we have no problem with (7 is an other example of this class): The rationals (or the algebraic). Unfortunately the vast majority of the reals are not that catchy. E.g. we still do not know how many reals there are. The reals are a beast if watched from a fundamental perspective.


> E.g. we still do not know how many reals there are.

Umm... yeah we do, the cardinality of the continuum is the same as the cardinality of the power set of natural numbers. We've known this for 140 years.

Reading between the lines, I think the author is really talking about the "non-computable numbers" (i.e. those real numbers who can't be calculated to an arbitrary precision by any Turing machine), but if that's what the author is referring to, he should just say "non-computable numbers", not "real numbers", which is a much broader class.


You overstate "knowledge".

We project back currently accepted axioms, and find that Cantor's proof works. And therefore it was known, because the proof was known.

However Cantor's proof is not so cut and dry, nor was it so unarguable at the time.

It was based on set theory, and it was not clear to people at the time that set theory actually worked. Indeed, in 1901 Bertrand Russell came up with "the set of all sets that do not contain themselves" and came to a contradiction.

One of the proposed resolutions was to find a better set of axioms, which led us to ZF and later to ZFC. This is the path that mathematics took.

Another was to question what words like "exists" and "truth" mean. In particular, does it make sense to talk about the existence of something we cannot construct? To talk about the truth of a statement that we have neither proof nor disproof of? This path leads to constructivism, and in constructivism Cantor's "proof" isn't a proof at all!

As it turns out, there are philosophical reasons to prefer constructivism, but mathematics is easier to do within formalism. After mathematicians gained enough experience with and trust for ZF, they went with convenience. But there are plenty of mathematicians historically, and even a few remaining today, who think that the entire tower of cardinalities from classical set theory is formal nonsense meaning nothing. And there is no logical flaw in their views.


Cantor's proofs (he gave multiple) are un-controversial. There's absolutely zero question about them from any reputable mathematician.

One COULD take issue with the wording: what Cantor demonstrated is that there is no injection from the reals to the naturals (i.e., no way to assign a natural number to every real with no repeats). Anyone who disputes this is a quack. The layman controversy comes from our choice to describe this situation in English as, "There are more reals than naturals". That's merely a shorthand for the more precise statement. People get bent out of shape because they mistake the shorthand itself as some deep philosophical claim, rather than looking at what it actually means.


You overstate your case.

The proof that there is a bijection between the reals and the power set of the natural numbers depends on Cantor's bijection theorem (better known as the Cantor–Schröder–Bernstein theorem). As you can verify from https://en.wikipedia.org/wiki/Constructivism_(mathematics) or many other sources, that proof and theorem have been rejected by many reputable mathematicians over time. Most notably including Brouwer.


Constructivism is subtler than that (the wikipedia intro is misleading).

When an intuitionist says that we can't use the principle of excluded middle, they mean it more like in a functional programming sense: if we have two recipes for a cake, one of which requires a proof of X, and one of which requires a disproof of X, we cannot combine those recipes with a proof of "X or not X" and bake a cake.

Intuitionists noticed that (in a sense that can be formalized), if you do mathematics while "pretending" that the law of excluded middle is doubtful, then all your proofs become constructive. There is a misconception among laymen, who see mathematicians pretending this for a pragmatic purpose and mistakenly conclude that they are pretending out of philosophical principle. That's never or almost never the case.

I can't speak for Brouwer's "religious" beliefs but what I can say is: if he attempted to teach students "It isn't always true that (P or not P)", without appropriate disclaimers that by saying that he's actually saying something very subtle and precise--then his math department would be obligated to stop him from misleading those students.


I have no idea what your background is, but my understanding is exactly opposite of yours. You are shoehorning people who think something very different from what you think into the framework of how you think people should think about it.

The first "pure existence proof" by contradiction was due to David Hilbert in 1888. The now-named Hilbert Basis Theorem resolved a famous problem introduced by Paul Gordan. Paul Gordan's famous response was, "Das ist nicht Mathematik. Das ist Theologie." (That is not Mathematics. That is Theology.)

In the debates that followed in subsequent decades, the question was whether or not we could sensibly talk about the existence of something we have no way to find or verify. Brouwer wasn't just making a subtle point in the 1920s that is palatable to modern mathematicians. He was rejecting the symbol game that was Hilbert's Formalism as meaningless nonsense.

Brouwer's school lost. So it is true that any Constructivist today does have to present a weakened form in public that fits your description. But historically they meant exactly what they said. And what they said is that proving that there is an unfindable contradiction in an infinite set is not an acceptable proof of existence. And what they believed was that allowing such unsound reasoning methods would lead to contradictions. (This belief has since been disproven, but in the 1920s nobody can be faulted for having been genuinely concerned about it.)

To gain a better understanding of the times, I would recommend two books. The first is Hilbert! by Constance Reid. It is a biography of David Hilbert, and does a surprisingly good job of describing the fundamental epistemological issues that lead him to Formalism. The other is The Mathematical Experience by Davis and Hersch.


I'm not a math historian, so I'll take your word for it about the math history. I should have qualified that my statements apply to contemporary mathematics.

Paradox: Cantor's theorem is uncontroversial in mathematics. But the controversialness of Cantor's theorem is uncontroversial in history of mathematics. :)

I, for one, am glad Brouwer's school was defeated: I wouldn't want to choose a mathematical denomination like people choose their church denomination.

EDIT: Thinking about it deeper, it does make me wonder if we haven't all already chosen a mathematics denomination, and just not realized it. It's fun to imagine an alternate reality where Brouwer won and Hilbertists are forced to couch their theorems with elaborate contortions about "when I say X exists I really mean that a Hilbert-style proof that X exists exists"...


We have indeed all already chosen a mathematics denomination. And people like me who don't think that it makes actual sense to talk about the "existence" of things that cannot be in any useful way described have lost.

That said it is worth understanding very clearly, no matter what your preferences, that the deciding factors in any debate between the two sides did NOT center on logic. Logically both positions are internally consistent. In the end it comes down to asking whether or not you wish mathematics to be convenient, or about something real. Convenience won.

Which is the same reason that ZFC beat out ZF. (Though choice is more commonly used in an alternate form such as Zorn's lemma.)


>Though choice is more commonly used in an alternate form such as Zorn's lemma

Speaking of choice & constructivism, did you know that there exists a constructive version of the axiom of choice? See here:

http://web.archive.org/web/20170222112602/http://www.jonmste...

[Historical side note of somewhat fleeting importance on something which, besides me, nobody so far seems to have really noticed:

It seems that van Heijenoort 'overtranslated' Hilbert's infamous boxing gloves paragraph. Here's the original German, from Die Grundlagen der Mathematik, Abhandlungen aus dem mathematischen Seminar der Hamburgischen Universität 6 (1928), 65-85:

"Dieses Tertium non datur dem Mathematiker zu nehmen, wäre etwa, wie wenn man dem Astronomen das Fernrohr oder dem Boxer den Gebrauch der Fäuste untersagen wollte."

Here, van Heijenoort's translation:

"Taking the principle of excluded middle from the mathematician would be the same, say, as proscribing the telescope to the astronomer or to the boxer the use of his fists."

Here, a more literal translation:

"To take this Tertium non datur from the mathematician, would be about how when would want to proscribe the Astronomer the telescope or the boxer the usage of the fists."

Now, the point of this doesn't consist of the sentence now sounding rather different (and somewhat badly phrased) - in fact, that seems rather typical of intentionally more literal translations, as usually, translation also involves fitting the translated version of the text into the common use of the target language. No, the point consists of the fact that van Heijenoort should have left the latin expression "Tertium non datur" in place, see the following reddit comments thread about TND 🆚 LEM for why:

https://www.reddit.com/r/dependent_types/comments/33tc28/jon... ]


I noticed you consistently misspell Reuben Hersh's surname. Sorry I didn't correct you when I first saw you doing that on HN close to a decade ago, assuming you might soon self-correct if only by glancing over the book you hold so dear.

As for the book by Constance Reid, Gian-Carlo Rota's review entitled Misreading the History of Mathematics is all that should be said of it.


I second the recommendation of Reid's biography (although the title does not have an exclamation point): great book about a truly great man. Eventually Gordan came around, to some extent, and said that maybe "even theology has its uses."


> in constructivism Cantor's "proof" isn't a proof at all!

Are you sure about that, and could you point me to a reference if so? It was my understanding that Cantor's proof works perfectly well in a constructive setting.

For example, we can represent real numbers in a constructive way as functions from a natural number to a digit, where we interpret f(N) as giving us the Nth decimal place. We can represent an infinite list in the same way: taking a natural number and giving the value at that list position (a real number is hence an infinite list of digits).

Hence we can construct a function F which takes in a list of real numbers (a function mapping natural numbers to functions-mapping-natural-numbers-to-digits) and gives out a real number (a function mapping natural numbers to digits). Given a number N, this function looks up the Nth digit of the Nth number in the list, and returns a different digit (e.g. one more, modulo 10). Formally, in some Agda-like notation, it would be something like:

    Digit = Member of {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}

    inc : Digit -> Digit
    inc(0) = 1
    ...
    inc(9) = 0

    List(t) = Natural -> t

    Real = List(Digit)
    Real = Natural -> Digit

    F : List(Real) -> Real
    F : (Natural -> Real) -> Real
    F : (Natural -> (Natural -> Digit)) -> (Natural -> Digit)

    F(list)(n) = inc(list(n)(n))
Cantor's proof is then a function which takes a list L and returns a proof that F(L) doesn't occur in L. We can represent non-occurrence using a function taking a natural number N and returning a proof that the Nth element of L differs from F(L) (this is a list of proofs!). We can represent proofs that two numbers differ by using a natural number, which gives (one of) the decimal places at which those numbers have different digits. We know from the definition of F that F(L) will differ from the Nth value in the list, and more specifically that it will differ at the Nth decimal place. It's easy to prove that distinct digits are different, since the set of digits is finite (although the exact encoding depends on the logical system being used). Hence we just need to show that `inc(N) != N`, and use that as proof that `F(L)(N) != L(N)(N)`, and hence `F(L)` doesn't occur in `L`:

    -- Proof that x != y, however you want to represent that (e.g. 'Equal x y -> Empty')
    Differ(t) : (x : t) -> (y : t) -> Set of proofs that x != y

    incDiffers : (d : Digit) -> Differ(Digit)(d, inc(d))
    incDiffers(0) = 0 != 1
    ...
    incDiffers(9) = 9 != 0

    -- Proof that two reals differ, which is a natural and a proof that they differ
    -- at that digit
    RealsDiffer(x, y) = {n : Natural | Differ(Digit)(x(n), y(n))}

    -- Proof that F(L) differs from everything in L. This is because the Nth element of L
    -- differs from F(L) at (at least) the Nth decimal place.
    cantor : (L : List(Real)) -> (n : Natural) -> RealsDiffer(L(n), F(L))
    cantor(L)(n) = (n, incDiffers(L(n)(n)))


The diagonalization argument is constructive, but with an asterisk. However the classic understanding of cardinality also requires Cantor's bijection theorem. If there is a one-to-one function from A into B and from B into A, then there is a bijection between them. This proof is non-constructive. And without it, the entire tower of cardinalities falls apart.

The asterisk with the diagonalization argument can be seen best with an example. Suppose that we define "reals" as programs which have been proven to construct Cauchy sequences that converge at a given rate from some set of axioms. Suppose that we construct a program that will enumerate all possible proofs from that set of axioms. This is doable. We search those proofs for proofs of statements of the form "this computer program computes a Cauchy sequence". That gives us a purported listing of all possible real numbers according to this definition. (Any given real will show up many times.)

Now apply Cantor's proof. It can construct a program which we believe produces a Cauchy sequence which converges at the given rate, and which is not in the list of all possible real numbers. The construction is concrete and we can write down the resulting program with only modest difficulty. BUT is the program that we found a real number under the definition that we chose?

It cannot be if the set of axioms is consistent. Because if there was a proof that it was, then it would be on the list and we'd be looking for a number that we don't want. Indeed, a proof that it actually produces a Cauchy sequence would follow immediately from the assertion that the axiom system we were using is consistent. But Gödel's theorem says that the axiom system, if it is consistent, can't ever prove that. The program that we have produced is some sort of "meta-real" - but it is not a "real" by the definition chosen since there is no proof from the chosen axiom system that it actually produces a Cauchy sequence.

And now Cantor's argument is not showing up as "there are more reals than integers". It is showing up as some sort of, "Our definition of reals is complex and recurses in on itself in a way that tangles up the division between true, false, and unknown."

See https://mathoverflow.net/questions/30643/are-real-numbers-co... for more variations on the complexity of constructivism and Cantor's diagonalization argument.


I don't think this construction works, for the same reason that rationals are lower cardinality than reals. I.e., I think your program enumerates the rationals, not the reals.


They would be rationals if the "lists" were finite (i.e. if their input were finite, like "natural numbers less than 100"). Infinite lists of digits (i.e. where the input can be any natural number) are reals.

One subtlety is that we're talking about arbitrary functions, e.g. they aren't necessarily computable, or even representable (e.g. as programs, or otherwise).


Presumably drvd is referring to the independence of the Continuum Hypothesis: we don't know |R| in the sense that we don't know for which ordinal alpha is |R| the alpha'th infinite cardinal. The Continuum Hypothesis is the claim that |R| is the first cardinal after |N|. It's well known the Continuum Hypothesis is independent of ZFC. That means for the question of "how many reals there are" (at least if that question is read in the formal sense), the answer is "it depends: for different models of ZFC, there are different answers".


What is the cardinality of the powerset of the naturals?


It's usually referred to as "c". I'm not sure what you're asking? It's like saying "3 is the number that follows 2", and then asking "yeah, but what really IS the number 3, man?". The answer is: it's the cardinality of the sets which can be in a bijection with the power set of the natural numbers.

If you're referring to the continuum hypothesis, that is a distinct question from "what is the cardinality of the real numbers". Also: a solved one, the continuum hypothesis is independent of ZFC, so saying "we don't even know what the cardinality is" is still wrong.


This is an extremely unusual conception of epistemology, even for mathematics. This knowledge that is contingent on axioms isn't at all convincing. At the beginning of the 20th century, this was a raging debate. It wasn't quite resolved, it just didn't quite matter to practicing mathematicians so it kind of faded into the background.

There is a massive gap between 3 and the cardinality of the continuum. 3 is directly examinable. If I count some collection of objects, my fingers say, I will immediately grasp 3.

On the other hand, the set of real numbers is a highly pathological, abstract concept. Everything we know about the reals suffers from two severe deficiencies: One, it depends on infinitary axioms which presuppose facts about the phenomenon we would like to investigate. Two, even what we can infer from such axioms is always indirect evidence. That's not surprising: All reasoning about infinity is indirect.

In certain mathematical settings it is even true that the Cauchy sequence definition of real numbers and the Dedekind cut definition do not agree. At the very least, the reals isn't even a thing: there are many reals. That you say that "we know the cardinality of the reals" because we know CH is independent of ZFC is... preposterous to say the least. "If you assume you know the cardinality of the reals, then you know the cardinality of the reals; therefore we know the cardinality of the reals" is basically what such axiomatic acrobatics boils down to. Axioms are not knowledge.


Here, another interesting construction of the real numbers:

https://arxiv.org/abs/math/0405454 - The Eudoxus Real Numbers

https://ncatlab.org/nlab/show/Eudoxus+real+number

https://en.wikipedia.org/wiki/Construction_of_the_real_numbe...

(Surprisingly enough, ncat & Wikipedia complement each other here in their explanations of it.)


Getting back to the OP: in a very real sense, the set of Real numbers does not exist. Therefore, it is impossible to know its size via experience. The only possible way is to derive it from chosen axioms.

Note that even your conception of 3 is reliant on axioms. How do you "know" that 3 ducks is the same 3 as 3 fingers? Only via axioms.


That sounds a little absurd. A five year old child knows that 3 fingers is the same 3 as 3 ducks. Certainly 5000 years ago people understood 3 without ever knowing what an axiom is. The Peano axioms didn't create arithmetic, arithmetic created the Peano axioms. S(S(S(0))) follows | | |, not the other way around.


The cardinality of the continuum is a cardinal number. It’s one of the alephs. It is not known which aleph it is. So it’s not known what the cardinality of the powerset of the naturals is. It’s just known that it is the same cardinal number of the reals. Basically, we have two jars of marbles that contain the same number of marbles but it’s not known how many marbles that is.


The continuum hypothesis being independent just means that it's an additional rule you can add or remove from the game you are trying to play. It doesn't mean we are lacking in knowledge and that if we were to work harder we would solve this problem. We do know which aleph c is: it's aleph_1 with CH and some other aleph without CH. Just take your pick which version you like better.

It's not like we don't know which one is the true model of military combat: chess or checkers. They're just two different games with two different rule sets, and you get to pick which one you like to play more.


The set theory that most working mathematicians deal with is ZFC. In ZFC it is not known what cardinal the continuum is. Hence the statement that I was responding to is incorrect. The person I responded to said that they do know how many reals there are.

The cardinality of the reals is called c. It is known to the be the same as the cardinality of the power set of the naturals. It is not known, in ZFC, which aleph this is. We just know that it is the same as the size of another set.

If you want to add an axiom and say that c is aleph1 then you are free to do so. But if you don't have this axiom then you don't know which aleph it is. So in what sense can you say that you know how many reals there are? You only know it if you add an axiom that says, "It is aleph1."

If I have a jar of pennies and I know it has the same number of pennies as the number of quarters in another jar that I have, does this mean I know how many pennies are in the jar?


Exactly right.


I think the issue is with the implicit claim that we don't "know" a cardinal until we know which aleph it corresponds to.


> What is the cardinality of the powerset of the naturals?

That question is better than almost every other question!


The complement of the set of normal numbers has measure zero. A normal number is what the author of the paper is talking about when referring to random numbers. I think most mathematicians, if they had to bet, would bet that sqrt(2) is normal. It is not known though. If it is normal then its digits are random.

I haven’t read the paper but I think the author is arguing that most real numbers don’t make sense physically. If sqrt(2) were shown to be normal and since it’s the hypotenuse of a right triangle of legs with length 1, I wonder what the author's response to this would be. Perhaps he’d say that such a triangle doesn’t exist physically.


> Perhaps he’d say that such a triangle doesn’t exist physically.

I am certain that this is the case given that he's referring to the maximum information density of space as one of the reasons why exact numbers make no physical sense. See https://en.wikipedia.org/wiki/Bekenstein_bound if you don't know about that limit.

More directly, the inability to represent exact lengths falls out of the Heisenberg Uncertainty principle. As an abstract mathematical concept, numbers could be anything. As a physically relevant concept, there are limits to how much precision can matter.


What does it mean to produce a physically realized triangle with length exactly 1? Some specific number of atoms in a specific place? How does that square with the uncertainty principle?


Pretty much everything in mathematics was named on discovery, which was usually before they understood what the concept they were discovering actually was. And sometimes even the name that was given is meaningless in English because it was given in a foreign language. Real Numbers aren't somehow special here.

There are a lot of elements in mathematics that arguably could be named better. Examples include (and obviously this is opinion, not a mathematician, in all humility, etc):

+ Negative numbers, real numbers, imaginary numbers, quaternion, etc (which should really all be named in a more systematic way) + sin, cos, tan (I doubt any link remains between the function and the concept in modern teaching) + Anything named after a person instead of named for what it is (eg, Euler's constant e; should have a more informative name) + plus, multiply, exponentiation, Knuth's up-arrow (a progression of the same concept, should be named systematically) + The assuredly long list of examples I don't know about

The gains don't justify the costs, but one suspects if you could somehow give an alien modern math theory without any of the name or symbol conventions, they would invent a much more systematic system than what we have grown.


You are talking about rationals and algebraic numbers, which are just a tiny subset of the real numbers. Your point about computable vs non-computable numbers is valid, but I think the author meant that you will never encounter a real number that is non-computable, since those are impossible to express in the first place. Real numbers can contain an infinite amount of information.


Actually, there are known non-computable numbers that you can encounter. See https://en.wikipedia.org/wiki/Chaitin%27s_constant.


Yes, there's actually another distinction we can make: between incomputable numbers and indescribable numbers. The latter is very slippery, since it can lead to things like the Berry paradox https://en.wikipedia.org/wiki/Berry_paradox

For example, we would need some way to avoid expressions like "the smallest indescribable number", since that itself is a description ;)


A good book to read to get acquainted with these notions is "Meta Math" by Gregory Chaitin. The impatient can jump to chapter 5.

https://arxiv.org/abs/math/0404335


So, I think the author meant that a real number is actually a sequence of approximations converging towards an ideal value. So 0.5 is the limit of infinitely many such sequences, like 5, 3, 2, 1, 3/4, 1/4, 3/5, 2/5, 1/2, ...


Although technically you're right, one can reasonably assume that when talking about random numbers, people mean non-rational random numbers. Like sqrt(2), e, pi, ...


But none of those are really random either. The only sensible definition of randomness here is Kolmogorov complexity, and all of those numbers you just mentioned have incredibly low Kolmogorov complexity. I can write a tiny computer program that prints them out to billions of digits.

What the author is probably referring to is "non-computable numbers", which are numbers that cannot be computed to arbitrary precision by ANY Turing machine. But it's incredibly confusing and wrong to use the term "real numbers" to mean "non-computable numbers".
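
For what it's worth, here's such a tiny program for sqrt(2), a Python sketch using only exact integer arithmetic (math.isqrt exists from Python 3.8 on):

    from math import isqrt

    def sqrt2_digits(n):
        # floor(sqrt(2) * 10**n) == isqrt(2 * 10**(2*n)), all in exact integers
        return str(isqrt(2 * 10**(2 * n)))[1:]  # drop the leading "1"

    print("1." + sqrt2_digits(60))

A few dozen characters plus a precision parameter generate as many digits as you have patience for, which is the opposite of Kolmogorov-random.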


Okay, but 2 points:

1) Kolmogorov complexity is a measure of the computational resources needed to specify the object

Depending on what you mean by "specify the object" this may or may not be infinite resources. My vote goes to infinite. Because you either need infinite resources even to just express this number, or you need infinite resources to actually run it. I get that you can create a finite program that gives an approximation of the number, but that's not the same, and only approximations can be finite. The number itself, and its useful specifications, are all infinite. For the real number you need infinite resources to express it.

So by what measure exactly does sqrt(2) have finite complexity ?

In fact, you could use the old "every number occurs in Pi" trick and say that every other object is contained within it (and within every other real number), and since Kolmogorov complexity is a metric I have just proven that the Kolmogorov complexity of sqrt(2) is larger than any other Kolmogorov complexity. The only number that satisfies that condition is infinite.

2) Even if this applies to the 4 numbers I specified, or the few thousand "well known" real numbers, it doesn't apply to any real proportion of real numbers. I mean, I can compute one or two 3rd degree boundary differential equations straight in my head, but that is very different from saying I can do that in the general case. Hell, quantum computers can't do it in the general case. The same is true for specifying real numbers.

(and, more or less)

3) in reality, only approximations of sqrt(2) are computable. The real number itself isn't.


A normal number can be computable. It is not known if sqrt(2) is normal.


At no point was anybody talking about normal numbers. "Normal numbers" doesn't mean "numbers with random digits", it has a much narrower meaning than that. There are non-computable numbers which are normal, there are non-computable numbers which aren't. There are computable numbers that are normal that don't look random in the least: https://en.wikipedia.org/wiki/Champernowne_constant
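
To underline how non-random Champernowne's constant is, the whole digit stream is a few lines of Python:

    from itertools import count, islice

    def champernowne():
        # The digits are just the positive integers written out one after another.
        for n in count(1):
            yield from str(n)

    print("0." + "".join(islice(champernowne(), 40)))
    # -> 0.12345678910111213141516... (normal in base 10, yet almost zero information)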


The author's paper is about normal numbers (numbers that contain an infinite amount of information). If a normal number isn't considered to have random digits then what notion would you use?


You've misunderstood the definition of normal numbers. A "normal number" (loosely defined) is a number where the digit expansion of the number has all possible substrings of digits uniformly distributed in the limit as it goes to infinity.

Several things to note:

1. Whether or not a number is normal has nothing to do with the "randomness" of its digits or the "amount of information" in the digits. Champernowne's constant, 0.123456789101112... isn't "random" at all and contains very little information, yet it is known to be normal in base 10.

2. A number can have totally super-random digits (as random as you want! it can even be non-computable! infinite information!) and not be normal at all. For instance, imagine you have a normal number that's as random as you want, and it starts like:

0.892345123402345671235....

And then goes on forever, no discernible pattern. Say you construct a new number from this number, with the only difference being that you remove the digit 7. Literally, every place that a 7 appears in your original number, you just remove it. This number would no longer be a normal number, because sequences containing the digit 7 would appear nowhere.

The number would still have "infinite information" in the sense of the author of this paper. It would still be "just as random", it would still have "no pattern". But it would not be a normal number anymore.

Whether or not a number is "normal" has nothing to do with the issues raised in this paper. "Normality" is a different criterion entirely. When the author talks about numbers with "infinite information", he's not talking about normal numbers, he's talking about (non-)computable numbers, which is an entirely different concept: https://en.wikipedia.org/wiki/Computable_number


> Champernowne's constant, 0.123456789101112... isn't "random" at all and contains very little information, yet it is known to be normal in base 10.

Why is Champernowne's constant not random?

> The number would still have "infinite information" in the sense of the author of this paper. It would still be "just as random", it would still have "no pattern". But it would not be a normal number anymore.

That's not true. It would not be just as random. If the set you're sampling from includes a 7 (i.e. the set of digits which can be represented in any single place in the sequence), and you never see a 7 for an extremely long time, this is exceptionally good heuristic evidence that the number is not random. And if we know 7 never shows up, we also know that the number is not random, because we know it's not uniformly sampling from the set of base 10 digits.


I read through most of the paper we are all nominally talking about. He doesn't use normal numbers as you say. He is talking about computable numbers as you say. My apologies.

But a normal number has "random" digits in the sense that all finite sequences of digits (in a given base) occur uniformly in the expansion. What other notion of random can one meaningfully give for an expansion of a number's digits? Without getting into too much philosophy.

From [1]: We call a real number b-normal if, qualitatively speaking, its base-b digits are "truly random."

My expertise is in commutative algebra so I'm outside of my comfort zone.

[1]: https://www.davidhbailey.com/dhbpapers/bcnormal.pdf


Something can be random and not have a uniform distribution. If I throw two dice, the sum will be random, but it will not be uniformly distributed. Uniform is just one distribution among many.

The only definition of "randomness" in a sequence that holds any kind of water, philosophically or mathematically, is Kolmogorov complexity [0] (see specifically the section on "Kolmogorov randomness"). I don't know where you got the idea that "normalness" is the ultimate version of randomness, but it's not.

Forget the formal definition for a second, and just think of the intuitive notion of randomness: does the sequence 12345678.... look random to you? In random sequences there should be no patterns: do you see a pattern in this sequence? If you had a computer program with a random number generator that produced that sequence of digits, would you be happy with it? No, you wouldn't.

In the sense of Kolmogorov randomness, the sequence 1234567.... is obviously not random at all, since it's trivial to find a Turing machine to generate it. It matches up perfectly with our intuitive notion of what randomness is, and it quite correctly points out that the amount of information in the string is very low, even though it's infinitely long. That is my definition of randomness, and it's (more or less) the author of this paper's definition.

[0]: https://en.wikipedia.org/wiki/Kolmogorov_complexity


Mathematicians like to skip steps, generalize, and the like. 'Randomness' for numbers is not really a well-defined mathematical property. What I like about the summary is that you can understand the gist of what he is trying to do and why.


> 'randomness' for numbers is not really a well defined mathematical property.

Randomness is a well-defined property. We say that numbers are random, or "normal", if the digits of their infinite expansion are uniformly distributed. The reason we call them "normal" and not "random" is because they are the norm - almost all real numbers are normal.
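
You can at least poke at that empirically. A Python sketch tallying the first 10,000 decimal digits of sqrt(2), computed exactly (consistent with normality, though normality of sqrt(2) is unproven, as noted elsewhere in the thread):

    from collections import Counter
    from math import isqrt

    digits = str(isqrt(2 * 10**20000))[1:10001]  # 10,000 digits of sqrt(2) after the point
    print(Counter(digits))  # each digit appears roughly 1,000 times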


If we could read the coordinates of any physical object to infinite precision, we could only influence the higher digits by moving the object very carefully; beyond the digits which we have some influence over, the infinite tail of the real number's digits must be completely random.


I don’t know, 0.5 could still be random. Relevant XKCD: https://www.xkcd.com/221/ </sarcasm>



