davidgrenier's comments

There's only one "other side" in this, it's the American people.


v4l2-ctl on Linux allows me to change such settings globally; maybe that would work if a version can be found for his OS.


I think the author is generous in granting that Cargo-Cult programming works.


I haven't looked at Hardy's, but the presentation in Spivak is also Dedekind cuts. Perhaps Hardy uses a different approach and OP misnamed it? The appendix to Chapter 1 of Rudin also uses Dedekind cuts.


It looks like Hardy used Dedekind cuts starting with the second edition (1914), but not in the first edition (1908).

What's the advantage of Dedekind cuts over say equivalence classes of Cauchy sequences of rational numbers? Particularly if you start out by introducing the integers and rational numbers as equivalence classes as well.


Cauchy sequences can be made constructive (providing a nice foundation for numerical analysis); Dedekind cuts cannot.
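To make "constructive" concrete, here is a minimal Python sketch, under the simplifying assumption that a constructive real is a program taking a rational tolerance to a rational approximation (the names and framing are mine):

  from fractions import Fraction

  def sqrt2(eps: Fraction) -> Fraction:
      # Return a rational q with |sqrt(2) - q| <= eps, by bisection.
      # The invariant lo^2 < 2 <= hi^2 keeps sqrt(2) inside [lo, hi].
      lo, hi = Fraction(1), Fraction(2)
      while hi - lo > eps:
          mid = (lo + hi) / 2
          if mid * mid < 2:
              lo = mid
          else:
              hi = mid
      return lo

  print(sqrt2(Fraction(1, 1000)))  # a rational within 1/1000 of sqrt(2)

A bare Dedekind cut offers no analogous interface: there is no tolerance to feed to a set.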


The equivalence class of Cauchy sequences is vastly larger and misleading compared to those of integers and rational numbers. You can take any finite sequence and prepend it to a Cauchy sequence and it will represent the same real number. For example, a sequence of 0, 0, 0, ..., 0, where the number of zeros is the count of all the atoms in the universe, followed by the decimal approximations of pi: 3, 3.1, 3.14, 3.141, ... The key component is the error clause of eventually getting close, but when that happens can vary greatly from sequence to sequence. The cute idea of being able to look at a sequence and see roughly where it is converging just is not captured in the reality of the equivalence classes.
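A toy version of the prepending trick in Python (with 10^80 standing in for the atom count):

  from fractions import Fraction

  def leibniz(n: int) -> Fraction:
      # n-th partial sum of 4*(1 - 1/3 + 1/5 - ...), which converges to pi.
      return 4 * sum(Fraction((-1) ** k, 2 * k + 1) for k in range(n + 1))

  JUNK = 10 ** 80  # stand-in for the count of all the atoms in the universe

  def padded(n: int) -> Fraction:
      # Same equivalence class as leibniz (same limit), but every prefix a
      # human could ever inspect is indistinguishable from the zero sequence.
      return Fraction(0) if n < JUNK else leibniz(n - JUNK)

Both represent pi, yet no examinable prefix of padded tells you anything about where it is headed.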

More or less, one can think of a Cauchy sequence as generating intervals that contain the real number, but it can take arbitrarily long before the sequence gets to "small" intervals. So comparing two Cauchy sequences could be quite difficult. Contrast that with the rational numbers, where a/b ~ c/d if and only if ad = bc. This is a relatively simple thing to check if a, b, c, and d are comfortably within the realm of human computation.

Dedekind cuts avoid this as there is just one object and it is assumed to be completed. This is unrealistic in general, though the n-th roots are wonderful examples that make it all seem okay and explicit. But if one considers e, it becomes clear that one has to do an approximation to get bounds on what is in the lower cut. The (lower) Dedekind cut can be thought of as the set of lower endpoints of intervals that contain the real number.

My preference is to define real numbers as sets of inclusive rational intervals that contain the real number. That is a bit circular, of course, so one has to come up with properties that say when a set of intervals qualifies as a real number. The key property is based on the idea behind the intermediate value theorem: given an interval containing the real number, any number in the interval divides it into two pieces, one of which is in the set and the other is not (if the number chosen "is" the real number, then both pieces are in the set).
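For one concrete real, membership and the splitting property are even decidable; a sketch of my own, just for sqrt(2):

  from fractions import Fraction

  def contains_sqrt2(a: Fraction, b: Fraction) -> bool:
      # Is sqrt(2) in [a, b]? Decidable here because a <= sqrt(2) iff
      # a < 0 or a*a <= 2, and sqrt(2) <= b iff b >= 0 and b*b >= 2.
      return (a < 0 or a * a <= 2) and (b >= 0 and b * b >= 2)

  # The intermediate-value-style property: any m inside a containing
  # interval leaves at least one of the two halves in the set.
  a, b, m = Fraction(1), Fraction(2), Fraction(3, 2)
  assert contains_sqrt2(a, b)
  assert contains_sqrt2(a, m) or contains_sqrt2(m, b)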

There is a version of this idea which is theoretically complete and uses Dedekind cuts to establish its correctness[1] and there is a version of this idea which uses what I call oracles that gets into the practical messiness of not being able to fully present a real number in practice[2].

1: https://github.com/jostylr/Reals-as-Oracles/blob/main/articl...
2: https://github.com/jostylr/Reals-as-Oracles/blob/main/articl...


> The equivalence class of Cauchy sequences is vastly larger and misleading compared to those of integers and rational numbers. You can take any finite sequence and prepend it to a Cauchy sequence and it will represent the same real number. ...

This can be addressed practically enough by introducing the notion of a 'modulus of convergence'.
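For instance, here is a rough Python sketch of the idea (the names are my own invention): bundle each sequence with its modulus, and comparison up to a tolerance becomes an evaluation at one known index.

  from fractions import Fraction
  from typing import Callable, NamedTuple

  class ModulusReal(NamedTuple):
      # A Cauchy sequence plus its modulus of convergence:
      # |seq(n) - limit| <= eps whenever n >= modulus(eps).
      seq: Callable[[int], Fraction]
      modulus: Callable[[Fraction], int]

  def within(x: ModulusReal, y: ModulusReal, eps: Fraction) -> bool:
      # Sound semi-comparison: True guarantees |x - y| <= eps, while
      # False guarantees x and y are genuinely different reals. Exact
      # equality stays undecidable, but "wait arbitrarily long" has been
      # replaced by "evaluate at a known index".
      n = max(x.modulus(eps / 4), y.modulus(eps / 4))
      return abs(x.seq(n) - y.seq(n)) <= eps / 2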


> The equivalence class of Cauchy sequences is vastly larger and misleading compared to those of integers and rational numbers. You can take any finite sequence and prepend it to a Cauchy sequence and it will represent the same real number.

What's the misleading part of this supposed to be?


I’m not who you replied to, but:

The equivalence classes of integers: pairs of naturals with (a, b) ~ (c, d) := (a + d) = (b + c).

The equivalence classes of rationals: pairs of integers with (a, b) ~ (c, d) := ad = bc.

It’s “easy” to tell whether two integers/rationals are equivalent, because the equivalence rule only requires you to determine whether one pair is a translation/multiple resp. of the other (proof is left to the reader).
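In code the "easy" is literal: each check is a single arithmetic comparison (Python ints standing in for the naturals/integers).

  def int_equiv(a, b, c, d):
      # (a, b) ~ (c, d) as integers iff a + d == b + c
      return a + d == b + c

  def rat_equiv(a, b, c, d):
      # (a, b) ~ (c, d) as rationals iff a*d == b*c
      return a * d == b * c

  assert int_equiv(7, 3, 9, 5)      # both pairs represent the integer 4
  assert rat_equiv(200, 300, 2, 3)  # both pairs represent 2/3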

Cauchy sequences, on the other hand, require you to consider the limit of an infinite sequence; as the GP points out, two sequences with the same limit may differ by an arbitrarily large prefix, which makes them “hard” to compare.

We can formalise this notion by pointing out that equality of integers and rationals is decidable, whereas equality of Cauchy reals is not. On the other hand, equality of Dedekind reals isn’t decidable either, so it’s not that Cauchy reals are necessarily easier than Dedekind reals, but more that they might lull one into a false sense of security because one might naively believe that it’s easy to tell if two sequences have the same limit.


It is easy if you know the limits; if you don't, it's still true that two sequences {r_n}, {s_n} have the same limit if and only if the limit of the difference sequence {r_n - s_n} is zero, which conveniently enough is an integer and can't mess up our attempt to define the reals without invoking the reals.

That won't help you much if you don't know what you're working with, but the same is true of rationals.

I'm missing something as to this:

> equality of Dedekind reals isn’t decidable either

Two Dedekind reals (A, B) and (A', B') are equal if and only if they have identical representations. [Which is to say, A = A' and B = B'.] This is about as simple as equality gets, and is the normal rule of equality for ordered pairs. Can you elaborate on how you're thinking about decidability?


> Two Dedekind reals (A, B) and (A', B') are equal if and only if they have identical representations. […] Can you elaborate on how you're thinking about decidability?

Direct:

Make one of the sets uncomputable, at which point the equality of the sets cannot be decided. This happens when the real defined by the Dedekind cut is itself uncomputable. BB(764) is an integer (!) that I know is uncomputable off the top of my head. The same idea (defining an object in terms of some halting property) is used in the next proof.

Via undecidability of Cauchy reals:

Equality of Cauchy reals is also undecidable. The proof is by negation: suppose we had a procedure that decides whether a real is equal to zero. Consider the sequence (a_n) with a_n = 1 if Turing machine A halts on a given input within n steps, and 0 otherwise; this is clearly Cauchy, but if we can decide whether its limit is equal to 0, then we can decide HALT.
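A sketch of that sequence, where halts_within, A, and x are hypothetical: halts_within(A, x, n) runs machine A on input x for at most n steps and reports whether it halted, which is the computable part.

  def a(n: int) -> int:
      # 0 until A halts on x, then 1 forever; all zeros if it never halts.
      # Either way the sequence is trivially Cauchy. Deciding whether its
      # limit equals 0 would decide HALT, so no such decision procedure
      # for equality with 0 can exist.
      return 1 if halts_within(A, x, n) else 0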

Cauchy reals and Dedekind reals are isomorphic, so equality of Dedekinds must also be undecidable.

Hopefully those two sketches show what I mean by decidable; caveat that I’m not infallible and haven’t been in academia for a while, so some/all of this may be wrong!


> BB(764) is an integer (!) that I know is uncomputable

I meant BB(748) apparently.

To elaborate on this point a bit, I specifically mean that its value cannot be settled in ZFC: there is a 748-state Turing machine whose halting behavior is independent of ZFC, so ZFC cannot prove the value of BB(748). There may be stronger foundations that do settle it, but then we can just find another n for which BB(n) is independent of that framework, since BB is an uncomputable function.


Your method for deciding whether two rationals are or aren't equal relies on having representations of those rationals. If you don't have those, it doesn't matter that there's an efficient test of equality when you do.

But you're arguing that equality of Dedekind reals is undecidable based on a problem that occurs when you define a particular "Dedekind real" only by reference to some property that it has. If you had a representation of the values as Dedekind reals, it would be trivial to determine whether they were or weren't equal. You're holding them to a different standard than you're using for the integers and rationals. Why?

Let's decide a question about the integers. Is BB(800) equal to BB(801)?

It sure seems like it isn't. How sure are you?


The important point for me is that the equivalence relation for Cauchy sequences is part of the definition of real numbers as Cauchy sequences. This ought to imply that one has to be able to decide equivalence of two sequences for the definition to make sense. For Dedekind cuts, the crucial aspect is being able to define the set, and that is something that can be called into question. But once that is done, comparing two Dedekind cuts is just a computational question, not a definitional one.


The intuition of a sequence is that the terms get closer to the convergence point. Looking at the first trillion elements of a sequence feels like it ought to give one some kind of information about the number. But without the convergence information, those first trillion elements of the sequence can be wholly useless and simply randomly chosen rational numbers. This is an "of course", but when talking about defining a real number with these sequences, as opposed to approximating them, this gives me a great deal of unease.

In particular, it is quite possible to prove a theorem that a sequence is Cauchy while having no way to explicitly figure out N for a given epsilon. The sequence is effectively useless. This is presumably possible, and common, when using the Axiom of Choice. One can even imagine an algorithm for such a sequence that produces numbers and eventually converges, but where the convergence is not knowable. Again, if this is just approximating something, then we can simply say it is a useless approximation scheme. But defining real numbers as equivalence classes of Cauchy sequences suggests taking such a sequence seriously, in some sense, as the answer.

In contrast, consider the integer and rational number versions: it is quite immediate how to reduce them to their canonical forms, assuming unlimited finite arithmetic ability. For example, 200/300 ~ 2/3, and one recognizes that 200/300 and 2/3 are different forms of what we take to be the same object for most purposes. There is no canonical Cauchy sequence to reduce to, and concluding that two sequences are equivalent could take a potentially infinite number of computations/comparisons. While that is somewhat inherent to the complexity of real numbers, it feels particularly acute when it is something that must be done in defining the object.

Dedekind cuts have the opposite problem. There is only one of them for an irrational number, but it is not entirely clear what we would be computing out as an approximation, particularly if the lower cut viewpoint is adopted.

Intervals, on the other hand, inherently contain the approximation information. By dividing them and picking out the next subinterval, one also has a method of computing a sequence of ever better approximations. I suppose one could prove the existence of the family of containment intervals without explicitly being able to produce them, but at least the emptiness of the statement would be quite clear (nothing is produced), in contrast to sequences, which could produce essentially meaningless numbers for an arbitrarily large number of terms.
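The same bisection that produces approximations can be repackaged so the intervals themselves are the output; a sketch, again just for sqrt(2):

  from fractions import Fraction

  def nested_intervals_sqrt2():
      # Yield ever-smaller rational intervals containing sqrt(2):
      # bisect and keep whichever half still contains it.
      lo, hi = Fraction(1), Fraction(2)
      while True:
          yield lo, hi
          mid = (lo + hi) / 2
          if mid * mid < 2:
              lo = mid
          else:
              hi = mid

  gen = nested_intervals_sqrt2()
  for _ in range(4):
      print(next(gen))  # (1, 2), (1, 3/2), (5/4, 3/2), (11/8, 3/2)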


What you are referring to is also called the Principle of Nested Intervals: https://en.wikipedia.org/wiki/Nested_intervals#The_construct...


That is, we define the real numbers as the least upper bounds of special sets (the lower cuts). There is a bijection between these sets and the real numbers as we commonly think of them, and that bijection sends each set to its least upper bound.


Your very last example kinda supports the thesis up there, considering how it's been going.


Notoriously, the 737 Max bug that killed a bunch of people was a software patch over a serious mechanical design flaw, and Boeing decided it was easier to make the software engineers fix it than redesign the entire plane.


Incredibly well, statistically?


Correct me if I'm wrong but (log n)^O(n) sounds like atrocious complexity?


It is atrocious. It's worse than exponential.

But it's much better than the prior state of the art, which was n^n · 2^O(n). [1]

The Quanta article, unfortunately, doesn't report the prior complexity for comparison, even though that's probably the single most important thing to say in support of the claim in the article's sub-headline.
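To spell out both bounds (my arithmetic, not from the article):

  (log n)^O(n)  =  2^(O(n log log n))   -- grows faster than 2^(c n) for
                                           every constant c, hence worse
                                           than exponential
  n^n · 2^O(n)  =  2^(O(n log n))       -- a strictly larger exponent, so
                                           the new bound is a genuine gain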

[1] https://en.m.wikipedia.org/wiki/Integer_programming#Exact_al...


LP or ILP? There is a significant difference, since for non-discrete problems Linear Programming is shockingly efficient and in no way can be considered a brute-force technique.

edit: What would be a technique you consider non-brute force in discrete problems?


I was speculating that two oscillators with periods p and q could be composed (as long as there was no way for them to interact) to create an oscillator of period p*q/gcd(p,q), but was wondering why large primes wouldn't be a problem.

I guess this is my answer.
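For the record, p*q/gcd(p,q) is just lcm(p, q); a quick sanity check in Python:

  from math import gcd

  def combined_period(p: int, q: int) -> int:
      # Non-interacting oscillators of periods p and q repeat together
      # every lcm(p, q) generations.
      return p * q // gcd(p, q)

  print(combined_period(6, 10))  # 30
  print(combined_period(7, 11))  # 77: coprime periods simply multiply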


I think his argument was restricted to a human-produced mathematical result being ported to a Lean program, where one would be just as likely to commit a mistake. However, I disagree as well: I recall the difficulty of expressing what I wanted in Coq being itself a barrier to expressing it incorrectly.


The thing about proof checkers is that they won't accept invalid proofs (assuming their internal axioms are consistent...), so if you make a mistake there it will simply reject your proof. The only place where it won't catch mistakes is in the proposition you want to prove, because the only way for it to know what you want to prove is for you to tell it; but that is much easier to verify by hand.
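A toy Lean 4 illustration of that last point (my own example, not from the thread): the checker validates the proof against the statement, so a mistake can only survive in the statement itself.

  -- Lean accepts both of these; the kernel checks proof-vs-statement.
  theorem intended : ∀ n : Nat, n < n + 1 := fun n => Nat.lt_succ_self n

  -- A typo in the statement (≤ where < was meant) also checks out fine;
  -- no proof checker can know this is not the claim you meant to make.
  theorem typoed : ∀ n : Nat, n ≤ n + 1 := fun n => Nat.le_succ n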


Having worked with a proof assistant is what separates you from Granville (the interviewee in the article). If he had formalized at least a couple of proofs, he would not have written things like "people who convert the proof into inputs for Lean". One does not "convert the proof into inputs for Lean", for the simple reason that a proof written in a formal proof language usually contains much more information than its informal prototype. At best one can try to express similar ideas, but it is far from converting a proof from one form to another. If he had formalized a couple hundred, he would have gained a different opinion on the quality of informal proofs as well. A nice list of typical mistakes can be found in this answer to a MathOverflow question: https://mathoverflow.net/a/291351/163434


It would be interesting to see some computer-aided proof expert write a possible counter-response to the article.


Yeah, it's likely he hasn't worked much with Lean and may have some misconceptions about it.

