Revolution in Mathematics: What Happened 100 Years Ago and Why It Matters [pdf] (ams.org)
112 points by tokenadult on March 10, 2013 | 69 comments



This revolution in mathematics literally led to the invention of computers. I have been reading all about it in Turing's Cathedral[1] whenever I'm at the bookstore.

David Hilbert wanted to redo mathematics from square one, basing everything on set theory, axioms, and rigorous proofs. In 1900, Hilbert gave his famous address where he stated the Hilbert Problems. One of these was to prove that mathematics is "complete" in the sense that every possible statement is either provable or disprovable from the axioms.

In an effort to solve this problem, Alan Turing wrote his famous paper "On Computable Numbers" where he conceived for the first time his "Turing Machines". Although his approach didn't work to solve the Hilbert problem, it greatly inspired Von Nueman who at the time was pretty much working on pure mathematics.

Godel came along and proved that mathematics is not "complete", because there are some statements about natural numbers which cannot be proved or disproved from the axioms, but rather are undecidable.

Von Nueman was so upset by Godel's result that he decided he no longer wanted to work in pure mathematics, because if it's not complete then what's the point anyway. Instead he decided to work on applied math and physics. Since he loved the work of Turing, he wanted to build a physical Turing Machine to help out with computations, and he figured there would probably be a lot of other uses for such a machine that no one had thought of yet, and that would only become apparent once the machine was built.

While at the Institute for Advanced Study, Von Nueman fought a hard political fight to secure funding and permission to build one of the first computers, the IAS machine (whose design the MANIAC and other machines later copied), to the horror of the faculty at the time, including Einstein.

Since I am posting this at 4:30am in a thread that is 12 hours old, I will be happy if someone reads it :P

[1] http://books.google.com/books/about/Turing_s_Cathedral.html?...


This is interesting but not completely accurate. What is most mind-blowing is that Turing machines were Turing's approach to formalizing proofs as a means toward solving the Entscheidungsproblem. In fact he was successful in "solving" the problem, in that he proved the impossibility of solving it, later than Goedel but in a much clearer way - Goedel encoded theorems using numbers in a quite crazy way. I also don't think Turing and Von Neumann can be given that much exclusive credit for inventing the computer. While they had a huge impact, I think the idea of a computer was to some extent "in the air", and there were many people working on it concurrently, sometimes unaware of each other; see:

http://en.wikipedia.org/wiki/History_of_computing_hardware


It seems like the invention of the computer is one of the most controversial "who got there first" things in all of science.

Turing's Cathedral has a ton of really interesting detail about the history of computers, and LOTS of details about the political fight Von Nueman waged to even be allowed to work on the first computers at IAS. I didn't realize how big a role he actually played, and I would encourage people to read that book because it's pretty good.


> In 1900, Hilbert gave his famous address where he stated the Hilbert Problems. One of these was to prove that mathematics is "complete" in the sense that every possible statement is either provable or disprovable from the axioms.

> In an effort to solve this problem, Alan Turing wrote his famous paper "On Computable Numbers" where he conceived for the first time his "Turing Machines".

Whoa -- you completely skipped Gödel's Incompleteness Theorems. Gödel's Theorems answered Hilbert's inquiry, not Turing. It's true that the Turing halting problem and Gödel's Theorems are deeply connected, but the historical sequence is Hilbert, Gödel, then Turing. You can't leave Gödel out.

http://en.wikipedia.org/wiki/G%C3%B6dels_incompleteness_theo...


I wouldn't say I left Godel out, but I suppose I got the historical order wrong, because Godel was 1931 and Turing was 1936, I didn't realize that, oops!


Only because you have repeated your spelling of "Von Nueman" three times, I want to point out that it is in fact spelled "von Neumann".


The Revolution:

At the turn of the 1900s, the field of mathematics was changing. The old school relied on physical intuition as a means of proof. The new school found that an adherence to formal logic led to more reliable findings.

Most Relevant Sections:

In brief, traditionalists lost the battle in the professional community but won in education. The failure of “new math” in the 1960s and 70s is taken as further confirmation that modern mathematics is unsuitable for children. This was hardly a fair test of the methodology because it was very poorly conceived, and many traditionalists were determined that it would succeed only over their dead bodies. However, the experience reinforced preexisting antagonism, and opposition is now a deeply embedded article of faith.

Many scientists and engineers depend on mathematics, but its reliability makes it transparent rather than appreciated, and they often dismiss core mathematics as meaningless formalism and obsessive-compulsive about details. This is a cultural attitude that reflects feelings of power in their domains and world views that include little else, but it is encouraged by the opposition in elementary education and philosophy.

In fact, hostility to mathematics is endemic in our culture. Imagine a conversation:

  A: What do you do?
  B: I am a ———.
  A: Oh, I hate that.
Ideally this response would be limited to such occupations as “serial killer”, “child pornographer”, and maybe “politician”, but “mathematician” seems to work. It is common enough that many of us are reluctant to identify ourselves as mathematicians. Paul Halmos is said to have told outsiders that he was in “roofing and siding”!

from page 33-34.

Core methods such as completely precise definitions (via axioms) and careful logical arguments are well known, but many educators, philosophers, physicists, engineers, and many applied mathematicians reject them as not really necessary.

from page 34

Why It Matters:

The sciences are put at risk by the reckless use of mathematics.

Education methods are weakened by the philosophical divide.


> Education methods are weakened by the philosophical divide.

Philosophically, aren't divides (i.e. the presence and consideration of multiple perspectives or approaches) exactly what real (i.e. non-domain-linked) education needs?


divides, multiple, real

I have nothing to add.


Part of it is that exact mathematics runs afoul of several atheist articles of faith. Depending on your interpretation it either disproves the "Jacobian" scientific model, or, more flexibly interpreted, it means science is at best a subset of what governs the universe (assuming - yes, assuming - it isn't entirely wrong). Godel's theorem runs afoul of the principle that science can explain everything. It answers the question of whether mathematics directs and/or predicts the events in our universe: it doesn't, it can't. It's too "weak" a theory.

We're at a very exciting point in time where we know basic mathematics to be wrong. It predicts a great many things, but there are problems that make it thoroughly unsatisfactory. In a way it's like physics at the beginning of the 20th century, with the black body radiation problem. Of course, we have been in this less-than-satisfactory state for ~70-80 years now, and no Einstein in sight ...

Godel's theorem means that there's an infinite set of empirical truths (i.e. simple experiments you can try out with marbles and bags) that are completely unexplained by mathematics - and thus by every science built on top of it.

Worse: this is not a fixable problem. Sure, we can fix it for specific problems. Wherever we see an obvious leak (say the birthday problem, or large cardinal number problems) it can be plugged with a new well-chosen (or - more often - ill-chosen, like choice) axiom, but there are infinitely many leaks and the proof means that there's no plug that will stop any significant number of them.

So right now, in the set of all empirically observable events, there's a set that's explainable by science and there's a set unexplained by science. The unexplained set is at least as large as the explainable set (and keep in mind that's because both sets have been proven to have a cardinality of at least the largest known cardinal number; given the actual definitions of those two sets I'd say the unexplained set is going to turn out to be bigger).


Interesting points, other than mathematics weakening straw-man atheism.


Don't you think that the idea that science explains everything is at the very core of atheism? Do you think my interpretation of Godel as directly contradicting this thesis is wrong?


Courant said much more, much better, in far fewer words:

There seems to be a great danger in the prevailing overemphasis on the deductive-postulational character of mathematics. True, the element of constructive invention, of directing and motivating intuition, is apt to elude a simple philosophical formulation; but it remains the core of any mathematical achievement, even in the most abstract fields. If the crystallized deductive form is the goal, intuition and construction are at least the driving forces. A serious threat to the very life of science is implied in the assertion that mathematics is nothing but a system of conclusions drawn from definitions and postulates that must be consistent but otherwise may be created by the free will of the mathematician. If this description were accurate, mathematics could not attract any intelligent person. It would be a game with definitions, rules, and syllogisms, without motive or goal. The notion that the intellect can create meaningful postulational systems at its whim is a deceptive half-truth. Only under the discipline of responsibility to the organic whole, only guided by intrinsic necessity, can the free mind achieve results of scientific value…To establish once again an organic union between pure and applied science and a sound balance between abstract generality and colourful individuality may well be the paramount task of mathematics in the immediate future.


Courant is kinda saying the opposite here, by de-emphasising the need for purely abstract formalisms, or by saying that they have to go hand-in-hand with applications, or are worthless without applications. As a numerical analyst, he of course thinks "applications" of mathematics are essential to its existence.

The thesis in the linked article is that we need to emphasise the importance of purely abstract formalisms, not only in the context of possible applications.


What I meant is that he described the same problem in a more complete way, not that he just said the same thing more nicely. There are infinitely many formal systems, and after all we choose only some particular ones; what guides this choice? Isn't the existence of models in the real world one of the most important factors in choosing which formalism is valuable to develop? Courant, as I understand him, is saying that the tension between the two approaches is vital to the development of mathematics.

There is also a nice lecture [1] from Vladimir Arnold, who was very much on the side of intuition, where he raises an interesting point that is related: in today's mathematics all the praise goes to the people who prove theorems, whereas perhaps the person who first _stated_ a theorem that is interesting and relevant deserves at least as much credit.

[1] http://www.msri.org/web/msri/online-videos/-/video/showVideo...


Fascinating paper. As a math graduate I had no idea that the switch to axiomatic descriptions and formal logical proofs was so recent and controversial, and no longer working in math I had no idea that "core" methodologies (i.e. pure mathematics) are under threat (although this makes perfect sense given grant funding priorities).


The author describes the old, pre-revolutionary way of doing mathematics as "relying on intuition and physical experience." In certain fields, such as algebra and number theory, the old way of doing it had another important trait: to prove the existence of something, such as the greatest common divisor or the unique factorization of a polynomial, one had to give an algorithm to construct it. The new, axiomatic way of thinking, by contrast, is content with showing that the assumption of non-existence leads to a contradiction. The rise of symbolic computation over the last few decades (Mathematica, Maple, etc.) can therefore be seen as a comeback of the old style of mathematics. I call it Kronecker's revenge. With respect to math education, well, considering the importance of computing, I am inclined to think that we should pay more attention to the 19th century way of thinking rather than less.
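
To make the contrast concrete, here is a sketch (my illustration, not the paper's) in Idris, since that is the language used in the proof exchange further down the thread. Euclid's subtraction-based algorithm is both the computation of the gcd and, in the old style, its existence proof:

    -- Euclid's algorithm by repeated subtraction: the construction *is* the
    -- 19th-century "existence proof" of the gcd. (Illustrative sketch; the
    -- names are mine and the function is not marked total.)
    gcdEuclid : Nat -> Nat -> Nat
    gcdEuclid a Z = a
    gcdEuclid Z b = b
    gcdEuclid (S a) (S b) =
      if a > b
        then gcdEuclid (minus a b) (S b)   -- gcd(m, n) = gcd(m - n, n) when m > n
        else gcdEuclid (S a) (minus b a)   -- otherwise subtract the other way

On gcdEuclid 12 18 the symbol-pushing bottoms out at 6; a purely nonconstructive argument would only tell you that some greatest common divisor exists.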


Most mathematicians are sympathetic to utilitarian constructive mathematics, i.e. finding constructive proofs for the sake of actually computing the results in applications, as opposed to some philosophical reason. However, nonconstructive mathematics (or at least mathematics that cares little for avoiding nonconstructive arguments) will probably continue to dominate mathematical research, since many major research areas now depend heavily on its results as a basis for a conceptual framework.


You can get a pretty good feel for the practical level of sympathy by popping down to your local math library and looking at the checkout histories of the works of Brouwer [1] and Bishop [2].

[1] http://plato.stanford.edu/entries/brouwer

[2] http://en.wikipedia.org/wiki/Errett_Bishop

'Sparse' would be an understatement & poor old Errett got such wonderfully backhanded reviews as:

> "Even those who are not willing to accept Bishop's basic philosophy must be impressed with the great analytical power displayed in his work."

and

> Bishop's historical commentary is "more vigorous than accurate".


Well, both Brouwer and Bishop fall into the category of constructivism for philosophical reasons. As I mentioned, this doesn't garner much sympathy from ordinary mathematicians. Brouwer and Bishop wrote against the practice of classical mathematics with considerable vitriol, as opposed to proposing constructive mathematics as an additional lens for viewing classical results. In the era of widespread use of computers to perform calculations, the advantages of this latter point of view are obvious.


That's a fair assessment. Still, it's a shame their work more or less petered out due to politics, personality, and philosophical disdain; had they played better with others and taken a slightly different path, they could well have been better regarded than they are now.


I don't understand your comment. The entire article is about why "relying on intuition and physical experience" just wasn't good enough -- how the difficulties inherent in 19th-century style mathematics forced mathematicians to evolve toward the modern style.

Algebra and number theory are both heavily reliant on modern mathematical thinking. Abstract algebra is practically the defining example of it.

How does one create Mathematica and Maple without modern mathematical thinking? Indeed, formal logic is rather closely involved in computing generally.

How does a precollege mathematics system that is based on a 19th century style of mathematics pay more attention to 19th century style mathematics?

I find your confident dismissal of modern mathematics to be vexing and oddly misaimed.


I beg to differ. As software itself has grown, it's developed an enormous need for axiomatic methods because of the benefits it can confer for software reliability. Simply, we want to prove that a program is wrong at the earliest stage possible - "intuition and physical experience" is also known as "the program failed in production."

Although we've invented a number of algorithmic methods to find bugs in software, the long-term trend is to find "better axioms" to define the program against, which reduce both the code size and complexity - that's the goal otherwise known as "programming language design." We don't want people to code in 19th century fashion, because it sucks.


Mathematics education is a bit odd now for sure. I think if there was an effort to teach kids the logical and even creative side of math early on, they'd have an easier time building intuition for certain computational aspects later on. As it stands, most people are unaware that math is no longer predicated in reality, which seems nonsensical. You generally teach grammar before (or along with) composition.


I'm sorry, but could you please explain what you mean by "predicated in reality"? I feel stupid, but I just don't understand what that means, and I'd rather feel stupid than remain ignorant.


It means that a connection to the real world is not needed for mathematics to exist. Another way of saying "predicated in" would be to say "based in" or "required by."


"To a first approximation the method of science is 'find an explanation and test it thoroughly', while modern core mathematics is 'find an explanation without rule violations'."

This rings a bell for me concerning formal verification methods (i.e. proofs), which are perhaps more popular in functional programming circles.


I think physics is going in that direction too. The paper explains how math went from experimental to formal proofs. Now, for better or worse, science wants to extrapolate to universes from quantum particles.

Schrodinger's cat was originally proposed by Schrodinger to ridicule the many-worlds extrapolation of quantum mechanics. Now it is often used to support it. Scientists create multiple dimensions, universes, etc. based on very little real-world data.


Schrodinger's cat is just like the old school intuition that is subject to errors. It is also why physics may have a better public image than math, due to the higher availability of intuitive ideas in describing physical phenomena. We wouldn't need a simultaneously alive and dead cat if people were more educated about the idea of a random variable. Heck, five seconds on wikipedia leads to "a random variable conceptually does not have a single, fixed value (even if unknown); rather, it can take on a set of possible different values, each with an associated probability."


The paradox Schrodinger pinpoints in his famous thought experiment cannot be explained by a simple appeal to random variables. Such variables would be the "hidden variables" that Einstein and others attempted to find to explain Quantum entanglement.

Von Neumann thought he proved that hidden variables were impossible, but in 1964 John Bell introduced Bell's inequality and showed us that hidden variables can only be used to explain quantum entanglement if one gives up on naive locality.

Things get hairier from there when you try to combine non-local theories (such as Bohm's) with relativity.

It's complicated.


I'd like to point out that there is a difference between the law of the excluded middle and proof by contradiction.

The law of the excluded middle says that for every x, either x or not x is true. Proof by contradiction says that if given not x you can derive a contradiction, then x is true.
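
In the arrow notation used further down the thread (with _|_ standing for absurdity), the two read roughly:

    Law of excluded middle:    x | ~x
    Proof by contradiction:    (~x -> _|_) -> x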


He basically says that Hilbert relied on the law of the excluded middle by using proofs by contradiction, which is valid, because if you do not accept the law of excluded middle then proof by contradiction ceases to be a valid proof method.


This is the sentence in question:

The first controversy occurred early in Hilbert’s career and concerned his vigorous use of the "law of the excluded middle" (proof by contradiction).

I don't see how you can interpret that as not implying that the law of the excluded middle and proof by contradiction are one and the same thing.

> if you do not accept the law of excluded middle then proof by contradiction ceases to be a valid proof method.

I don't believe that this is true, because that would imply that you can derive the law of the excluded middle from proof by contradiction.



The law of noncontradiction and proof by contradiction are different things.


In a logical system you have axioms, which are sentences that are true by definition, and proof methods, which are rules of sentence transformation that are assumed to preserve truth. The law of noncontradiction is an axiom that states:

X and (not X) == False for any sentence X

If you start with a sentence y, and then via the proof methods arrive at a sentence (not y), this is the same as saying

(y -> (not y)) == True

Which is equivalent to:

((not y) or (not y)) == True

Which is equivalent to:

(not y) == True

Now, if you accept the law of non-contradiction this implies that:

y == False

If you do not accept it, y can be both True and False at the same time and the proof method is not valid anymore. Hence the proof method of reductio ad absurdum, or "proof by contradiction" as you call it, is valid if and only if this law holds.


The law of excluded middle is not sufficient for proof by contradiction. What you actually need is ex falso quodlibet (principle of explosion).

For instance, paraconsistent logic has the law of excluded middle but not ex falso quodlibet.

https://en.wikipedia.org/wiki/Principle_of_explosion

https://en.wikipedia.org/wiki/Paraconsistent_logic

--

addendum

Specifically, notice that in classical logic the axioms law of excluded middle and law of non-contradiction can be substituted for one another (they are equivalent). However, that is not the case for all semantics.
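
To make the earlier point concrete, a sketch in Idris in the style of the code further down the thread (Void playing the role of _|_): excluded middle only gives you the case split, and it is ex falso quodlibet that lets you discharge the refutation branch.

    -- Sketch: proof by contradiction from excluded middle *plus* explosion.
    -- `void : Void -> a` is ex falso quodlibet from the Idris prelude.
    byContradiction : Either x (x -> Void) -> ((x -> Void) -> Void) -> x
    byContradiction (Left prf)         _ = prf                  -- x already holds
    byContradiction (Right refutation) f = void (f refutation)  -- explode the absurdity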


I just wanted to explain the author's mental leap; I originally said: if you do not accept the law of excluded middle then proof by contradiction ceases to be a valid proof method. The law of excluded middle might not be sufficient, but it is necessary for the reductio ad absurdum and proof by contradiction methods. You are right, though; I jumped too quickly to the "if and only if", and the details get quite complicated.


My point is that you can add reductio ad absurdum as an axiom. So long as you can't then derive the law of the excluded middle, your assertion is incorrect.


Reductio ad absurdum is:

(x->(~x))->(~x)

Law of excluded middle is:

x | ~x

Proof follows:

(x->(~x))->(~x)

By the definition of implication, this is equivalent to:

(~x) | ~(x->(~x))

By the definition of implication, this is equivalent to:

(~x) | ~((~x) | (~x))

By De Morgan's law, this is equivalent to:

(~x) | (x & x)

By idempotence (x & x is just x), this is equivalent to:

~x | x

I am not a logician and I might not have gotten all the details right, but I certainly think this is possible. I also have quite a good proof by authority ;) in the form of an article by Alonzo Church:

http://www.ams.org/journals/bull/1928-34-01/S0002-9904-1928-...


That looks correct to me.

Actually, at first I thought it was incorrect because you used De Morgan's law, which I mistakenly thought was invalid in intuitionistic logic. However, I looked it up and apparently only ~(p & q) |- ~p v ~q is invalid in IL.


Thanks, that was the part I felt uncertain about, funny how this seems completely elementary on one hand and on the other it is so easy to make a mistake.


Can you justify using material implication?

It's possible to create reductio ad absurdum in Coq:

    Definition reductioAdAbsurdum (X:Prop) (f : (X -> (X -> False))) (x:X) : False := f x x.
But not the law of the excluded middle.

Edit: It's much clearer in Idris:

    reductioAdAbsurdum : (x -> (x -> _|_)) -> (x -> _|_)
    reductioAdAbsurdum f x = f x x


I am assuming that we are talking about taking the law of excluded middle out from classical logic, where all the other things independent from that law still hold, and I think implication being material is independent.


I mean material implication as in:

In propositional logic, material implication is a valid rule of replacement which is an instance of the connective of the same name. It is the rule that states that "P implies Q" is logically equivalent to "not-P or Q".

http://en.wikipedia.org/wiki/Implication


Yes I understand, but why do you think it needs justification? As far as I understand, this is simply the definition of implication in classical logic; for one, it follows from the truth tables of both expressions.


OK, but we're not talking about classical logic, otherwise the question about whether the law of the excluded middle is derivable is uninteresting.


Well, that's exactly what people were considering and what the article is referring to: a variant of classical logic where everything except the law of excluded middle holds true. I don't think this is uninteresting; which postulates are independent from one another is one of the core problems in logic.

Nice flame, eot.


Actually, I guess he's right.

Arguably, the common sense definition of 'A -> B' is simply modus ponens. That is 'Given A and A->B, it follows that B'. It's then no longer obvious that A -> B is equivalent to ~A v B.

Indeed, implication in IL is different from material implication even though IL has modus ponens.


I didn't mean to flame, I thought we were having an interesting debate. I was under the impression we were talking about logic systems in general where reductio ad absurdum exists but the law of the excluded middle doesn't. Anyway, thanks, I learnt much.


This would be a very peculiar logic, I think. Maybe you just come from a very different background and hence all the hair-splitting was necessary; we just wandered so far away from the original point that I lost patience, maybe unnecessarily. In the end it really was an interesting debate for me too.


Actually, this is wrong. Reductio ad absurdum should be:

    ((~x) -> x) -> x
or:

    ((x -> _|_) -> x) -> x
Which is not constructible in Coq/Idris.



I think stiff is correct here. You can derive the law of the excluded middle from proof by contradiction, obviously in classical logic but even in constructive logic.* See this blog post by Andrej Bauer for an explanation: http://math.andrej.com/2010/03/29/proof-of-negation-and-proo...

* although there are logics in which this does not hold, but that's probably not what you are talking about here


I've tried to do so in Idris, but I haven't been able to:

    deriveExcludedMiddle : ((x:Type) -> ((x -> _|_) -> x) -> x) -> Either y (y -> _|_)
So, perhaps I am relying on one of those logics where it doesn't hold.


Here is the thing you're looking for. LEM is law of excluded middle, DN is double negation elimination law. Lets define some type abbreviations:

    LEM = (x:Type) -> Either x (x -> _|_)
    DN = (x:Type) -> ((x -> _|_) -> _|_) -> x
We first constructively prove not not LEM:

    notnotLEMproof : (x:Type) -> ((LEM x) -> _|_) -> _|_
    notnotLEMproof x f = f (Right (\a -> f (Left a)))
Now we can use double negation to obtain a proof for LEM:

    DNimpliesLEM : DN -> LEM
    DNimpliesLEM dn x = dn (LEM x) (notnotLEMproof x)
I hope this is correct, since I don't know Idris syntax. If Idris supports implicit parameters, you would not need the x's:

    notnotLEM f = f (Right (\a -> f (Left a)))

    DNimpliesLEM dn = dn LEM notnotLEMproof


Thank you ever so much. I only had to make a few changes to make it work. Providing a computer verified proof is quite a definitive way of winning an argument.

    LEM : (x:Type) -> Type
    LEM x = Either x (x -> _|_)

    DN : (x:Type) -> Type
    DN x = ((x -> _|_) -> _|_) -> x

    notnotLEMproof : {x:Type} -> ((LEM x) -> _|_) -> _|_
    notnotLEMproof {x} f = f (Right (\a => f (Left a)))

    DNimpliesLEM : ((x:Type) -> DN x) -> LEM y
    DNimpliesLEM {y} dn = dn (LEM y) notnotLEMproof


Yes, thank you, I made some stupid mistakes there.

I find the last one slightly clearer if you change the argument order:

    DNimpliesLEM : ((x:Type) -> DN x) -> ((y:Type) -> LEM y)
    DNimpliesLEM dn y = dn (LEM y) (notnotLEMproof y)
Although with the original formulation you could prove the slightly stronger version:

    DNimpliesLEM : (x:Type) -> DN (LEM x) -> LEM x
    DNimpliesLEM x dn = dn (notnotLEMproof x)
so that you don't need DN z to be true for all z, but just on the z you need (i.e. z = LEM x). It was an interesting exercise, and I learned a lot. First I tried proving (((x:Type) -> LEM x) -> _|_) -> _|_ to then apply double negation directly to this, but this is actually not possible (as far as I can see). The double negation needs to be inside the quantifier.

Maybe you can even write it simply like this with implicit parameters:

    DNimpliesLEM : DN (LEM x) -> LEM x
    DNimpliesLEM dn = dn notnotLEMproof


Shouldn't that be (((x -> _|_) -> _|_) -> x), and the y's at the end should presumably be x's.

And shouldn't the quantification over x:Type be separately over each part?


(((x -> _|_) -> _|_) -> x) would be the type of double negation elimination. I'm not entirely sure what the type should be. Perhaps (((x -> _|_) -> (y, y -> _|_)) -> x) is more accurate. In any case, all three types are equivalent.

I wanted:

    (forall x. ((~x) -> x) -> x) -> (forall y. y \/ (~y))
Rather than:

    (forall x. (((~x) -> x) -> x) -> x \/ (~x))


Proof by contradiction goes like this: You assume ~x, and from this you derive a contradiction _|_. Then you conclude x. So from ~x -> _|_ we derive x. In other words: (~x -> _|_) -> x. That's the same as ((x -> _|_) -> _|_) -> x. But in any case, as you said, this is equivalent to (~x -> x) -> x because ~x -> x is equivalent to ~x -> _|_.


I think strictly speaking, _|_ isn't a contradiction, it's an absurdity. (x, x -> _|_) is a contradiction.
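
In the Idris-style notation above (Void for _|_), the relationship between the two is just function application:

    -- A contradiction is a proof paired with its refutation; applying one to
    -- the other yields the absurdity.
    contradictionIsAbsurd : (x, x -> Void) -> Void
    contradictionIsAbsurd (prf, refute) = refute prf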


Ah I see, that's where the terminological differences came from.



Abraham Robinson's Non-standard analysis (http://en.wikipedia.org/wiki/Non-standard_analysis) used a modern axiomatic approach to legitimize Isaac Newton's intuitive use of infinitesimals, by creating a superset of the real numbers containing elements smaller in magnitude than any positive real (infinitesimals) as well as elements larger than any real (infinite).
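
Roughly (my paraphrase, not Robinson's formulation): in the hyperreals the derivative can be written the way the old intuition wanted,

    f'(x) = st( (f(x + eps) - f(x)) / eps )   for any nonzero infinitesimal eps,

where st takes the "standard part", i.e. rounds a finite hyperreal to the unique ordinary real infinitely close to it.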

He then went on to prove new things with it.

And he started out as an engineer.


Vague and unenlightening. I kept skimming through looking for the part where it explains what this "revolution" is supposed to be, but it never actually does. Edit: objection withdrawn.


The revolution was clearly explained: it's the formalism viewpoint, most famously expounded by Hilbert, "mathematics is a game played with meaningless symbols following rules on paper." What this means is that we can discuss mathematics discarding all personal biases and intuitions. We can give personal meaning to these meaningless symbols and abstract rules, but we don't need these biases when communicating to each other nor when we actually have to perform computations or deductions.

The example given at the end is fractions: you can think of fractions in terms of pies and pizzas, but it's much more effective to ignore all of that when it comes to actually performing computations, and simply focus on the "meaningless" symbols and the manipulations you do with them.
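
Concretely (my example, not the paper's): the addition rule for fractions is pure symbol-pushing that needs no picture at all,

    a/b + c/d = (a*d + b*c) / (b*d),   e.g.  2/3 + 1/4 = (2*4 + 3*1) / (3*4) = 11/12,

and that is exactly the kind of reliable, meaning-free manipulation the formalist viewpoint is after.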

It's a pity that you tl;dr'ed this.


Perhaps you shouldn't have skimmed. You will find what you need on the left column of page 32 (page 2 of the PDF).


There's a paper that stresses how this transformation took shape well before, in Germany and especially in Göttingen, named "Klein, Hilbert, and the Göttingen Mathematical Tradition" by David Rowe.

It makes a good point about how the changes initiated by Weierstrass and Cantor, by throwing away geometry as a requirement for thinking in mathematics, ultimately led the way to thinking in axioms and algorithms, which is basically how we do mathematics today. (The part about Cantor and Weierstrass is not stressed in the paper, though.)



