When mathematicians are obsolete, computer scientists will be obsolete. When computer scientists are obsolete, we have by definition entered the Singularity.
So my take is that he's not necessarily wrong, but if he's right then lack of employment for mathematicians won't be our biggest concern.
I actually think Watson would be far more useful to lawyers than to mathematicians. With a little tweaking, he could probably even identify all the non sequiturs in your comment.
Imagine a future where, in addition to checking your spelling and grammar, your computer could check your logic too!
To me, the singularity is hypothetical nonsense that shows a clear misunderstanding of engineering problems. Computer systems have not become much more sophisticated in the past 40 years; we have just made them bigger. We keep running into the same mathematical and physical limitations, and those will not suddenly disappear when machines can improve themselves. In fact, even if such a machine were possible, it probably would not get much further than a few generations before the limitations become orders of magnitude more complicated than the machine is sophisticated. (Welcome to engineering.)
I'm not saying that the singularity isn't some kind of "nerd rapture", but it may be a bit disingenuous to say computer systems haven't gotten much more sophisticated. Sure, we're still using Von Neumann architecture and transistors, but software has gotten massively more interesting in several different directions.
Computers will continue to aid mathematical discovery, proving (useful) theorems like the Four Color theorem, or even formulating their own (useful) theorems before proving them.
For example, in Einstein's Special Theory there's not a lot of deep mathematics (relative to the General Theory), so it's not too difficult to imagine Watson taking the problem and axiomatizing (just to see what would happen) that the speed of light is always constant (which is the key to everything else that followed).
But what about Descartes's insight that geometry could be described by algebra using a coordinate system, giving birth to analytic geometry? Or Gödel proving that no formal system can prove all true statements of arithmetic? Or even asking the question?
I think we're a long way off from doing something like that in a machine.
Indeed, mathematics requires a great deal of creativity. You could argue it is a somewhat subjective discipline, since interesting questions receive vastly more attention than random ones. And how do you define what is interesting?
There are infinitely many directions you can take -- which is fine for humans, because they know you eventually have to end up with something your peers find interesting as well. How would a machine do this? I feel it could be akin to training a computer to enjoy music.
While computers will continue to become more and more involved in theorem proving, it's not clear to me that they will be able to ask interesting questions within the next few years.
On the other hand, journal papers keep pouring in every day at rates that are impossible to keep up with. In the future the situation will be hopeless without some AI to guide us through that mess. But then again, papers are written in a combination of natural language and mathematics.
> Descartes's insight that geometry could be described by algebra using a coordinate system...
This was a new model which made a large class of insights more tractable to human minds. A theorem-finding computer probably wouldn't come up with the same kind of model (unless we specifically programmed it to); it would come up with models which made further insights more tractable to theorem-finding computers. Already, today, a huge part of the problem with creative computers is interpreting their results for humans.
Automated proof tools are already causing a headache for mathematicians. The four color theorem was proved by breaking the problem down into 1,936 cases and then proving each of them individually. This story has a happy ending: the result is now fairly well accepted, and there have been numerous steps toward simplifying the proof.
However, other automated proofs have not been so lucky: Hales's proof of the Kepler Conjecture was 300 pages of mathematics and 40,000 lines of supporting computer code, and the referees of the paper spent four years trying to understand it (they even set up a seminar series covering all of the prerequisite material) before letting the authors know that they were unable to verify the correctness of the proof because they "ran out of energy to devote to the problem." (The Annals of Mathematics subsequently changed its rules to disallow submission of computer programs.) The Flyspeck project aims to, well, ask a program to verify the correctness of the proof. Ah, the times we live in.
>Is this a possible goal? Note the one key insight that I feel the need to repeat: Thomas J. must not be perfect. It must not only put out correct formal proofs. It must be allowed to put out papers like we do.
So, he not only wants correct formal proofs of difficult theorems, but also insights and thoughtful conjectures? I think math is a red herring here; what he really wants is strong AI -- you can put 'writers' or 'economists' in his arguments and not much changes.
I struggled parsing that too. I think he intended the negation of "It must only put out correct formal proofs", but adding a "not" in there doesn't quite have that effect.
Consider the famous 100-year-old paradox: this sentence is false.
Watson won't be able to assign a truth value to this grammatically correct sentence, but would it have the insight that it had just stumbled upon a philosophical problem that's solved by narrowing the notion of truth it uses (Bertrand Russell)?
In other words, a machine may not be able to examine itself in the way that humans can when certain problems arise.
Long ago, some people said computers would never beat humans at chess, and they were wrong. A human brain contains no magical components. It runs on physics. Whatever it does can in principle be done by some other machine that runs on physics. I won't be surprised if someone finds a general way to replace metamathematical insight with clever search-and-pruning over a large but simply structured space, as happened with chess.
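For the curious, here is a toy sketch of the chess-style "search-and-pruning" I mean: negamax with alpha-beta pruning, applied to single-heap Nim so the example is self-contained. Real engines differ mainly in scale, move generation, and evaluation heuristics; everything here is purely for illustration.

    # Negamax with alpha-beta pruning on single-heap Nim:
    # players alternately take 1-3 stones, and whoever takes the last stone wins.
    def negamax(heap, alpha=float("-inf"), beta=float("inf")):
        """Value for the player to move: +1 = winning position, -1 = losing position."""
        if heap == 0:
            return -1                      # no stones left: the player to move has already lost
        best = -1
        for take in (1, 2, 3):
            if take > heap:
                break
            score = -negamax(heap - take, -beta, -alpha)
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:              # prune: the opponent will never allow this line
                break
        return best

    # Heaps that are multiples of 4 are losses for the player to move.
    print([negamax(h) for h in range(1, 9)])   # [1, 1, 1, -1, 1, 1, 1, -1]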
Also, the limits of computation have been given an upper bound. If it's not computable by a Turing machine with a finite memory tape (or anything else that is mappable to a Turing machine with a finite memory tape), it's not computable by a physical computer.
Steve Yegge started exploring the metaphysics of computer programs before he stopped blogging; it makes for interesting reading.
There's a reason why my argument never mentioned Turing computability :-) If a human brain uses quantum effects to get cognitive bonuses, you can build a quantum computer that does the same. If the brain uses super-duper unified field theory, you can build a machine to take advantage of that, and so on. The only assumption I make about the human brain is that it doesn't run on magic juju that's available exclusively to humans by the orders of Cthulhu, and I think that assumption is quite warranted.
(As far as I know, neurons can be approximated well enough by classical computers and you don't even need to go quantum, but that's another story.)
Am not going to argue whether a machine can "really" be alive, "really" be self-aware. Is a virus self-aware? Nyet! How about oyster? I doubt it. A cat? Almost certainly. A human? Don't know about you tovarishch, but I am. Somewhere along evolutionary chain from macromolecule to human brain self-awareness crept in. Psychologists assert it happens automatically whenever a brain acquires certain very high number of associational paths. Can't see it matters whether paths are protein or platinum.
> If it's not computable by a Turing machine ... it's not computable by a physical computer.
By a physical Turing-equivalent computer. True.
The breakthrough will come when we are able to overcome the damnation of step-by-step-ness (which is the primitive expression of causality).
The undercurrent where he's trying to expand the accepted definition of "by trivial algebra" is interesting. Essentially Zeilberger argues that if you can plug something into Mathematica and it can solve it, then it's true "by trivial algebra" by today's standards, and we should stop bothering to prove such results by hand.
Actually there has been a lot of research in this area. It's called automatic theorem proving. There have been some successes but not close to replacing mathematicians.
If you compare it to chess, then the computer programs are still at the beginner level. First-year math students are able to prove things that the computer cannot, yet.
It's an intriguing area. Computer programs can perfectly well check whether a proof is correct, so in principle all you have to do is enumerate all "proofs" and pick out the valid ones. In practice the research is focused on doing this search more efficiently and intelligently.
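As a toy illustration of that enumerate-and-check idea, here is a breadth-first search over a string-rewriting system I made up purely for the example (a real prover searches over the inference rules of an actual logic, with far better guidance):

    # Brute-force enumeration of everything derivable in a made-up formal system:
    # axiom "I"; rules: append "U", double the whole string, rewrite "III" -> "U".
    from collections import deque

    AXIOM = "I"

    def successors(s):
        """Everything derivable from s by a single rule application."""
        out = {s + "U", s + s}
        for i in range(len(s) - 2):
            if s[i:i + 3] == "III":
                out.add(s[:i] + "U" + s[i + 3:])
        return out

    def theorems(max_steps):
        """Breadth-first enumeration of all strings derivable in at most max_steps steps."""
        seen = {AXIOM}
        frontier = deque([AXIOM])
        for _ in range(max_steps):
            next_frontier = deque()
            for s in frontier:
                for t in successors(s):
                    if t not in seen:
                        seen.add(t)
                        next_frontier.append(t)
            frontier = next_frontier
        return seen

    print(sorted(theorems(3), key=len))   # every "theorem" provable within three steps

The hard part, as you say, is making that search intelligent rather than exhaustive.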
Although I can't speak directly to whether mathematicians are in jeopardy, the future certainly looks good for patients needing access to highly accurate medical diagnostics.
Watson may represent the ultimate "second opinion."
I see a lot of stories today about how IBM is planning to adapt Watson technology for use in health care.
I absolutely believe that Watson could revolutionize health care, but not in the way most people are suggesting. Despite what you see on House MD, diagnosing mystery illnesses is not something that doctors struggle with.
However, the amount of money that Watson could recover in insurance and Medicare fraud would absolutely dwarf what he won on Jeopardy. There's the real payoff...
Diagnosing strange mystery illnesses isn't common, but not doing as well as one could at diagnosing and treating routine illnesses is more common. Automating the diagnosis and treatment process would just be the next step in the evidence-based medicine movement, which is already moving in the direction of having doctors follow best-practice flow charts derived from statistical data.
There have already been cases of theorems being proven by computers. I don't know how much of it was truly original work, but I remember (faintly) at least one case where a computer discovered a previously unknown way to prove an existing theorem.
Many years ago I was at a logic conference, and one speaker proposed making "a computer can solve it automatically" the definition of trivial (as in "this is trivial to solve").
As with the intended commercializations of Watson, it seems like augmenting humans will be more fruitful for mathematicians.
Specifically, a computer spitting out proofs of things that no one asked for doesn't seem especially productive. Humans conjecturing something interesting and getting computer-generated proofs or counterexamples seems more interesting and more plausible, though.
Anyone that's tried to use a state-of-the-art automated theorem prover (like ACL2) knows we have a long, long way to go.
They're still pretty dumb.
That said, I'm convinced that Gödel's incompleteness theorems hold only in the limit -- the majority of propositions of human interest are probably amenable to mechanical proving.
While I don't doubt that in the future there may be some machine that can crunch logic at a ridiculous pace, Watson still seems far off from the AI the author here is alluding to. Machine learning of Watson's nature (classification problems, feature selection, etc.) is able to take data generated from the world and find interesting patterns to make a prediction. This is nowhere near the creation of new ideas, like Galois' insights into field theory or the work Perelman contributed toward the concept of a Ricci flow. I haven't yet heard how to create a machine whose thoughts "can wander" in a way that isn't hopelessly inefficient. That would be a big leap toward this type of AI.
The authors reduce the proving of a wide class of combinatorial identities, which were formerly regarded as extremely nontrivial, to a mechanical calculation. This has been implemented in Maple. Input your favorite identity, push the button, and watch the computer spit out a proof.
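In the same push-the-button spirit, here is a small taste using SymPy's implementation of Gosper's algorithm (a cousin of the Zeilberger machinery the authors describe; this is not their Maple package, just a sketch of the flavor):

    # Mechanical summation in the Wilf-Zeilberger spirit, via Gosper's algorithm in SymPy.
    # The closed form of sum_{k=0}^{n} k*k! is found entirely by the algorithm.
    from sympy import symbols, factorial, simplify
    from sympy.concrete.gosper import gosper_sum

    k, n = symbols("k n", integer=True, nonnegative=True)

    closed_form = gosper_sum(k * factorial(k), (k, 0, n))
    print(closed_form)                                       # factorial(n + 1) - 1, up to rearrangement
    print(simplify(closed_form - (factorial(n + 1) - 1)))    # 0, confirming the identity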
> It [the Watson-esque computer for mathematics] must not only put out correct formal proofs. It must be allowed to put out papers like we do. And these must be right often enough to be interesting, but they need not be perfect.
This answers the question, I believe. Are all human mathematicians in jeopardy? No.
Might we need fewer human mathematicians in the future while still maintaining a strong mathematical community? Likely.
Why, because Watson gave a bunch of laughably wrong answers? That's not how the mathematician machine would work. It would combine brainstorming (probably via massively parallel statistical search), at which people are currently much better than machines, with rigorous proof checking, at which computers are much better than people.
Still, in 15 years? I'll wager against this prediction.
His proposal, Thomas J., is a brilliant idea! Sure, mathematical notation can get much more complex than that used for natural language, but it has the precision that language lacks in syntax ("I shot an elephant in my pajamas. How he got into my pajamas I'll never know."), semantics ("Iraqi head seeks arms"), pragmatics, etc., which I think may compensate for this difficulty.
If a software system becomes better than the best humans at building new innovative software, then there is going to be another "space race" -- but instead of space, it will be about knowledge/intelligence. Some country will perfect the system and become the new world superpower overnight.
It means hold on to your hat, Dorothy, 'cause Kansas is going bye-bye.
Hilbert tried to come up with a method of generating theorems by just cranking a wheel that churns out different logical forms. It turns out that we still need verification to make sure that these things actually make _sense_. (Even if we have incompleteness in any logical system that contains arithmetic, we still have to figure out whether a certain logical form "makes sense" to use.)
No, I think you're not granting enough charity to what I'm saying.
1. The incompleteness theorems basically say, "if you have a logical system that includes Peano arithmetic, even if the theory is consistent, it will be an incomplete system." Also, "if you're trying to show that said system is consistent within the system, then you have an inconsistent system." These statements bring the Hilbert program to a dead halt.
2. I'm not talking about a perfect theorem proving system. However, what I _am_ saying is that you still need humans to verify the results - we do have automatic theorem provers already, but we still need to make sure that the steps are logical. Not only that, but the theorem needs to connote something useful. Also, we tend to use axioms that we can't prove are consistent within the system (for instance, the axiom of choice; yet mathematicians use it like candy to show useful results in analysis, and weird, unintuitive results in topology [decomposing a sphere and reassembling the pieces into two spheres, each the size of the original].) I understand that the article is saying, "yes, such a proof produced by a computer still needs to be independently verified." To be honest, I think a system that could make mistakes and whatnot is not necessarily what people would want to see - reason being that mathematicians probably don't have much trouble figuring out the larger details and overall plan of attack.
3. The reason why I'm doubtful that we'd ever see such an AI for proving math statements is because the entire enterprise is still very much a creative one - you not only have to generate the logical forms somehow, but also figure out whether it gets you closer to your goal or not. I suppose you could have some sort of heuristic that could show that a generated theorem or lemma could in fact get you closer to your goal, but my god, math is filled with so many potholes and garden paths that could lead one astray.
Don't get me wrong, it'd be awesome to see such a system - however, I'm very doubtful that such a system could ever be created, and if it is created, that it'd be actually useful to practicing mathematicians.
> No, I think you're not granting enough charity to what I'm saying.
Let's see.
> The incompleteness theorems basically say, "if you have a logical system that includes Peano arithmetic, even if the theory is consistent, it will be an incomplete system." Also, "if you're trying to show that said system is consistent within the system, then you have an inconsistent system." These statements bring the Hilbert program to a dead halt.
Close but not quite. The second incompleteness theorem says that a consistent, recursively axiomatizable theory describing arithmetic cannot prove its own consistency. An inconsistent theory proves absolutely everything. A consistent system is welcome to attempt to prove its own consistency all it likes - it will just fail.
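For reference, the usual modern statement of the second theorem (my paraphrase, not a quote from the article):

    % If T is a consistent, recursively axiomatizable theory containing Peano arithmetic, then
    T \nvdash \mathrm{Con}(T)
    % i.e. T cannot prove the arithmetized statement of its own consistency.
    % (The first theorem, in Rosser's form, also gives a sentence G with
    %  T \nvdash G and T \nvdash \neg G.)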
It is true that this killed Hilbert's program as originally conceived.
> I'm not talking about a perfect theorem proving system. However, what I _am_ saying is that you still need humans to verify the results - we do have automatic theorem provers already, but we still need to make sure that the steps are logical.
At some point, I fail to see why you need the humans. What value are humans actually providing?
> Not only that, but the theorem needs to connote something useful.
Mathematicians already have a concept of "useful" that is so far at odds with the common understanding of the term that I honestly cannot make sense of what "useful" actually means in this context. If the program is able to tackle actual difficult research problems, and has heuristics that suffice for real problems, then its notion of "useful" is probably good enough for practice.
> Also, we tend to use axioms that we can't prove are consistent within the system (for instance, the axiom of choice; yet mathematicians use it like candy to show useful results in analysis, and weird, unintuitive results in topology [decomposing a sphere and reassembling the pieces into two spheres, each the size of the original].)
Outside of logic, most of mathematics has agreed on the set of axioms to use, namely ZFC. As for using axioms whose consistency we can't prove within the system, that is absolutely unavoidable by the incompleteness theorem. Though you picked an ironically bad example: Gödel proved that ZF is consistent if and only if ZFC is consistent, and therefore the axiom of choice does not affect the consistency of the axiom system. It might not describe the set theory we want to describe, but it does not lead to contradictions.
> I understand that the article is saying, "yes, such a proof produced by a computer still needs to be independently verified." To be honest, I think a system that could make mistakes and whatnot is not necessarily what people would want to see - reason being that mathematicians probably don't have much trouble figuring out the larger details and overall plan of attack.
That was one of the weaker points made. I see the value of having heuristics that can go astray. But if you've got this in a computer, hook the heuristics up to a theorem prover that can fill in the details and come up with verified theorems. And which can alternately describe useful plans of attack.
Until people trust the computer, it absolutely needs to be double-checked. Any piece of software can have horrible bugs that are hard to find. No matter how confident its designers are, it needs verification.
> The reason why I'm doubtful that we'd ever see such an AI for proving math statements is because the entire enterprise is still very much a creative one - you not only have to generate the logical forms somehow, but also figure out whether it gets you closer to your goal or not. I suppose you could have some sort of heuristic that could show that a generated theorem or lemma could in fact get you closer to your goal, but my god, math is filled with so many potholes and garden paths that could lead one astray.
Yes, it looks as far away from being possible now as beating Jeopardy looked back when Deep Blue beat Kasparov. Based on my experience with mathematics and computing I find myself in agreement with rjlipton that it is not impossible, and in fact I wouldn't be surprised to see it happen within 15 years.
Here is an interesting back-of-the-envelope calculation for you. A human brain has about 100 billion neurons, each of which has about 7000 connections and fires about every tenth of a second. Let's suppose that emulation averages 1000 clock cycles per synapse. The result is that simulating something as complex as the human brain in real time should take on the order of 10^11 * 7000 * 1000 = 7*10^17 clock cycles per second. Watson was running at about 8*10^12 clock cycles per second. If Moore's Law continues to hold for 16 generations, which is 24 years, then a computer the size of Watson should be able to match the human brain in real time.
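Here is the same arithmetic spelled out (the neuron and synapse counts, the cycle cost, and the 1.5-years-per-generation figure are all rough assumptions, as above):

    # Back-of-the-envelope: cycles per second to emulate a brain vs. Watson's budget.
    import math

    neurons = 1e11               # ~100 billion neurons (rough)
    synapses_per_neuron = 7e3    # ~7000 connections each (rough)
    cycles_per_synapse = 1e3     # assumed emulation cost per synapse

    brain = neurons * synapses_per_neuron * cycles_per_synapse   # ~7e17 cycles/sec
    watson = 8e12                                                # Watson's rough budget, cycles/sec

    doublings = math.log2(brain / watson)   # ~16.4 doublings needed
    years = doublings * 1.5                 # ~1.5 years per Moore's-law generation
    print(f"{doublings:.1f} doublings, roughly {years:.0f} years")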
Once we have the hardware, I think it is only a question of time until the necessary software is available. And given the demonstrated power of applying statistical analysis of large data sets to real problems (think Watson, Google Translate, and the like), I think we're on our way to developing appropriate software as well.
> Don't get me wrong, it'd be awesome to see such a system - however, I'm very doubtful that such a system could ever be created, and if it is created, that it'd be actually useful to practicing mathematicians.
I think that rjlipton is a little optimistic on the time frame. But I fully expect to see it arrive in my lifetime. I have some trepidation about the inevitable economic upheaval when it does happen.