Thanks in part to the popularity of his books, movie, and speeches, Kurzweil now knows pretty much every AI researcher on the planet, and we can safely assume he's aware of even very obscure research projects in the field, both inside and outside academia.
Joining Google gives him ready access to data sets of almost unimaginable size, as well as unparalleled infrastructure and skills for handling such large data sets, putting him in an ideal position to connect researchers in academic and corporate settings with the data, infrastructure, and data management skills they need to make their visions a reality.
According to the MIT Technology Review[1], he will be working with Peter Norvig, who is not just Google's Director of Research, but a well-known figure in AI.
I just can't see Kurzweil being in the same league as Peter Norvig. Sure, he did some interesting work a long time ago, before he got weird. I can't see this working out well for Google, unless they just want a famous figurehead.
I'm reposting this comment I made a couple of months ago. He's no John McCarthy, but he was a true pioneer in the commercial applications of AI:
==================================
I don't think that's a very fair assessment of Kurzweil's role in technology.
He was on the ground, getting his hands dirty with the first commercial applications of AI. He made quite a bit of money selling his various companies and technologies, and was awarded the National Medal of Technology by President Clinton.
As I was growing up, there was a series of "Oh wow!" moments I had, associated with computers and the seemingly sci-fi things they were now capable of.
"Oh wow, computers can read printed documents and recognize the characters!"
"Oh wow, computers can read written text aloud!"
"Oh wow, computers can recognize speech!"
"Oh wow, computer synthesizers can sound just like pianos now!"
I didn't realize until much later that Kurzweil was heavily involved with all of those breakthroughs.
In addition, I'd rank Minsky, Larry Page, Bill Gates, Dean Kamen, Rafael Reif, Tomaso Poggio, Dileep George, and Kurzweil's other supporters as much more qualified to judge the merits of his ideas than detractors like Hofstadter, Kevin Kelly, Mitch Kapor, and Gary Marcus. It seems that Hofstadter is the only one of the detractors who is really qualified to render a verdict.
I believe your summary suffers from selection bias.
I think most people have views that are controversial. Only when one is famous do others hear about those controversial views. Furthermore, just about every famous project has its detractors, leading to controversy.
Take Stephen Hawking as an example. He doesn't believe there was a god who created the universe. That's a controversial view to many. But when a non-famous atheist says exactly the same thing, few take notice, so you don't hear about those people.
Take Alan Kay as another example. He's one of the key people behind OLPC, which given its criticism could be considered controversial. But name any big project which has neither criticism nor controversy.
Ole Kirk Christiansen founded Lego. He was visionary in that he saw a future in plastic bricks as toys for children. What were his controversial views? I have no clue. But he probably had some. Perhaps his motto "Only the Best is Good Enough" is controversial to someone who believes that second best may be good enough in some cases.
I believe that everyone has controversial views. Do you think god exists? Do you think there should be more gun ownership? Or less? Do you think abortion should be banned? Only available in a few cases? Up to the mother to decide? But only until the third trimester?
Should there be public drinking of beer? Public nudity? Public urination? Public displays of affection?
Should women always have their heads covered while in public? What about men? Should we ban male circumcision until the male is old enough to make the decision for himself? Should we have the draft? What about mandatory civil service?
Do you believe in mandatory bussing? Separate but equal? Co-ed schools or sex segregated schools? State income tax or not? Legalized gambling? What if it's only controlled by the state? Should alcohol sales only be done by the state, or can any place sell vodka? What about beer? Should alcohol sales be prohibited within a certain distance of schools? Are exceptions allowed?
Do vaccines cause autism? Was the Earth created less than 10,000 years ago? Can you petition the Lord with prayer? Is the Pope God's representative on Earth, or the anti-Christ? Should non-believers be taxed at a higher rate than believers?
Should I go on? All of these are controversial. If you have views one way or the other, then your views will be controversial at least to some, if not to most. And if you have no views on a topic, then that itself can be controversial. As the morbid joke from the height of the Troubles in Northern Ireland goes: "yes, but are you a Catholic Jew or a Protestant Jew?"
If everyone has controversial views, then of course they are a precondition for being a visionary. They are also a precondition for not being a visionary. Name one famous (so I have a chance of knowing something about that person) non-visionary who did not have controversial views.
But first, name a controversial view of the founder of Lego ... who is definitely described as a visionary.
The degree of controversy is obviously diminished when someone has been successful. But I'll take a stab at the last question...
As for controversy with LEGO's founder:
1. Structuring the company around "doing good" instead of profitability and other, more "corporate" values. Google gets flak for this to this day, and LEGO almost went broke following this tenet until they revamped the corporate structure to pursue profitability instead.
2. Use of plastics instead of wood, deviating from the company's original product base. Surely, that's what paved the way for LEGO, but I'm sure it was a somewhat controversial switch in some circles, not least among carpenters and some employees.
3. LEGO's many legal battles and use of patents might be construed as controversial in some circles.
More to the point of the OP, it's hard to be a visionary if your view does not in any way, shape, or form deviate from the norm. Deviation from the norm is what sets the visionary apart; hence it's sometimes said that visionaries are controversial, because this deviation from the norm more often than not causes controversy in the areas in which they are deviating.
"Doing good" is a standard crafter/engineer approach, so it's not like that alone is visionary. My Dad's phrase "do your best or don't do it at all." Was he a visionary?
He wanted the whole world to be Christian, and went to Ecuador to work as a missionary. Did that make him and my Mom (and my dad's parents (also missionaries) and various others in my extended family) visionaries? What about all of the Mormons who do their two years of missionary work?
Consider also all the people who were visionary, tried something, and failed. In part, perhaps, because their vision wasn't tenable. You don't hear about all of those visionary chefs who had a new idea for a restaurant, only to find out that it wasn't profitable.
Add all those up, and there are a lot of visionaries in the world. Enough that the non-visionaries are the exception.
Not to be a downer, but text-to-speech, speech recognition, music synthesis, and so forth are all fairly obvious applications of computer science that anyone could have pioneered without being a genius. Likewise, predicting self-driving cars is nothing science fiction has not already done.
I'm sure he is a smart guy, but I think we have put him on a pedestal when he probably is not as remarkable as we want him to be.
I don't mean to pick on you (and I certainly didn't downvote you), but you seem like a poster boy for just how easy it is to take inventions and innovation for granted after the fact.
I find it instructive to occasionally go to Youtube and load up commercials for Windows 95, 3.1, the first Mac, etc., or even to dust off and boot up an old computer I haven't touched for decades. Not to get too pretentious, but it's a bit like Proust writing about memories of his childhood coming flooding back to him just from the smell of a cake he ate as a child.
When you really make a concerted effort to remember just how primitive previous generations of computing were, I think it puts Kurzweil's predictions and accomplishments in a much more impressive context.
I posted some other thoughts about Ray's track record a while back:
==========================================
I read his predictions for 2009 only a couple of years before they were supposed to come about (which he wrote in the late 90s), and many seemed kind of far fetched - and then all of a sudden the iPhone, iPad, Google self-driving car, Siri, Google Glass, and Watson come out, and he's pretty much batting a thousand.
Some of those predictions were a year or two late, in 2010 or 2011, but do a couple of years really matter in the grand scheme of things?
Predicting that self-driving cars would occur in ten years in the late 90s is pretty extraordinary, especially if you go to youtube and load up a commercial for Windows 98 and get a flashback of how primitive the tech environment actually was back then.
Kurzweil seems to always get technological capabilities right. Where he sometimes falls flat is in technological adoption - how actual consumers are willing to interact with technology, especially where bureaucracies are involved - see his predictions on the adoption of e-learning in the classroom, or using speech recognition as an interface in an office environment.
Even if a few of his more outlandish predictions like immortality are a few decades - or even generations - off, I think the road map of technological progress he outlines seems pretty inevitable, yet still awe inspiring.
"Predicting that self-driving cars would occur in ten years in the late 90s is pretty extraordinary"
There have been predictions of self-driving cars for more than half a century. It's in Disney's "Magic Highway" from 1958, for example. There was an episode of Nova from the 1980s showing CMU's work in making a self-driving van.
Researching now, Wikipedia claims: "In 1995, Dickmanns' re-engineered autonomous S-Class Mercedes-Benz took a 1600 km trip from Munich in Bavaria to Copenhagen in Denmark and back, using saccadic computer vision and transputers to react in real time. The robot achieved speeds exceeding 175 km/h on the German Autobahn, with a mean time between human interventions of 9 km, or 95% autonomous driving. Again it drove in traffic, executing manoeuvres to pass other cars. Despite being a research system without emphasis on long distance reliability, it drove up to 158 km without human intervention."
You'll note that 1995 is before "the late 90s." It's not much of a jump to think that a working research system of 1995 could be turned into something production ready within 20 years. And you say "a year or two late", but how have you decided that something passes the test?
For example, Google Glass is the continuation of decades of research in augmented reality displays going back to the 1960s. I read about some of the research in the 1993 Communications of the ACM "Special issue on computer augmented environments."
Gibson said "The future is already here — it's just not very evenly distributed." I look at your statement of batting a thousand and can't help but wonder if that's because Kurzweil was batting a thousand when the books was written. It's no special trick to say that neat research projects of now will be commercial products in a decade or two.
Here's the list of 15 predictions for 2009 from "The Age of Spiritual Machines (1999)", copied from Wikipedia and with my commentary:
* Most books will be read on screens rather than paper -- still hasn't happened. In terms of published books, a Sept. 2012 article says "The overall growth of 89.1 per cent in digital sales went from £77m to £145m, while physical book sales fell from £985m to £982m - and 3.8 per cent by volume from £260m to £251m." I'm using sales as a proxy for reads, and while e-books are generally cheaper than physical ones, there's a huge number of physical used books, and library books, which aren't in those figures.
* Most text will be created using speech recognition technology. -- entirely wrong (there goes your 'batting 1000')
* Intelligent roads and driverless cars will be in use, mostly on highways. -- See above. This is little more common now than it was when the prediction was made.
* People use personal computers the size of rings, pins, credit cards and books. -- The "ring" must surely be an allusion to the JavaRing, which Jakob Nielsen had, and talked about, in 1998, so in that respect, these already existed when Kurzweil made the prediction. Tandy sold pocket computers during the 1980s. These were calculator-sized portable computers smaller than a book, and they even ran BASIC. So this prediction was true when it was made.
* Personal worn computers provide monitoring of body functions, automated identity and directions for navigation. -- Again, this was true when it was made. The JavaRing would do automated identity. The Benefon Esc! was the first "mobile phone and GPS navigator integrated in one product", and it came out in late 1999.
* Cables are disappearing. Computer peripheries use wireless communication. -- I'm mixed about this. I look around and see several USB cables and power chargers. Few wire their house for ethernet these days, but some do for gigabit. Wi-fi is a great thing, but the term Wi-Fi was "first used commercially in August 1999", so it's not like it was an amazing prediction. There are bluetooth mice and other peripherals, but there were also infra-red versions of the same a decade previous.
* People can talk to their computer to give commands. -- You mention Siri, but Macs have had built-in speech control since the 1990s, with PlainTalk. Looking now, it was first added in 1993, and is on every OS X installation. So this capability already existed when the prediction was made. That's to say nothing of assistive technologies like Dragon, which supported dictated text and commands in the 1990s.
* Computer displays built into eyeglasses for augmented reality are used. -- "are used" is such a wishy-washy term. Steve Mann has been using wearable computers (the EyeTap) since at least 1981. Originally it was quite large. By the late 1990s it was eyeglasses and a small device on the belt. It's no surprise that in 10 years there would be at least one person - Steve Mann - using a system where the computer was built into the eyeglasses. Which he does. A better prediction would have been "are used by over 100,000 people."
* Computers can recognize their owner's face from a picture or video. -- What's this supposed to mean? There was computer facial recognition already when the prediction was made.
* Three-dimensional chips are commonly used. -- No. Well, perhaps, depending on your definition of "3D." Says Wikipedia, "The semiconductor industry is pursuing this promising technology in many different forms, but it is not yet widely used; consequently, the definition is still somewhat fluid."
* Sound producing speakers are being replaced with very small chip-based devices that can place high resolution sound anywhere in three-dimensional space. -- No.
* A 1000 dollar pc can perform about a trillion calculations per second. -- This happened. This is also an extension based on Moore's law and so in some sense predicted a decade previous. PS3s came out in 2006 with a peak performance estimated at 2 teraflops, giving the hardware industry several years of buffer to achieve Kurzweil's goal. (A back-of-the-envelope version of that extrapolation is sketched at the end of this comment.)
* There is increasing interest in massively parallel neural nets, genetic algorithms and other forms of "chaotic" or complexity theory computing. -- Meh? The late 1990s and early 2000s were a heyday for that field. Now it's quieted down. I know 'complexity'-based companies in town that went bust after the dot-com collapse cut out their funding.
* Research has been initiated on reverse engineering the brain through both destructive and non-invasive scans. -- Was already being done long before then, so I don't know what "initiated" means.
* Autonomous nanoengineered machines have been demonstrated and include their own computational controls. -- Ah-ha-ha-ha! Yes, Drexler's dream of a nanotech world. Hasn't happened. Still a long way from happening.
So several of these outright did not happen. Many of the rest were already true when they were made, so weren't really predictions. How do you draw the conclusion that these are impressive for their insight into what the future would bring?
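Since I leaned on the Moore's law point above, here's the back-of-the-envelope arithmetic behind it (my own illustrative numbers, not Kurzweil's): a price/performance doubling every ~18 months compounds to roughly two orders of magnitude per decade, so the 2009 figure follows almost mechanically from whatever the 1999 baseline was.

    # Back-of-the-envelope Moore's-law extrapolation (illustrative, not Kurzweil's own math)
    doubling_period_months = 18   # a commonly quoted price/performance doubling time
    years = 10                    # 1999 -> 2009
    growth = 2 ** (years * 12 / doubling_period_months)
    print(f"~{growth:.0f}x price/performance improvement over {years} years")  # roughly 100x
    # i.e. whatever a $1000 machine could do in 1999, project about a hundred times
    # more of it by 2009 -- a mechanical extrapolation rather than a bold leap.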
First of all, I would strongly encourage anyone who is interested to check out Kurzweil's 2009 predictions in his 1999 book Age of Spiritual Machines, rather than this Wikipedia synopsis. It puts his predictions in a much more accurate context. You can view much of it here: http://books.google.com/books?id=ldAGcyh0bkUC&pg=PA789...
Quite a few of your statements relate to technological adoption vs. technological capability, such as everyday use of speech recognition and ebooks. I clearly stated that Kurzweil is not perfect at predicting what technologies will catch on with consumers and organizations, nor is anyone for that matter. To me, and to most of the people reading this, the most interesting aspect of Kurzweil's predictions is always what technological capabilities will be possible, rather than the rate of technological adoption.
Some of your other statements conflate science fiction with what Kurzweil does: "There have been predictions of self-driving cars for more than half a century. It's in Disney's 'Magic Highway' from 1958, for example." Similarly, most of your other points attempt to make the case that because nascent research projects existed, all of his predictions should have been readily apparent. I'm sorry, but this is pretty much the same hindsight bias displayed by gavanwoolery and Kurzweil's worst critics. Basically, Kurzweil's predictions are absurd to you, until they become blindingly obvious.
You can point to obscure German R&D projects all you want (and who knows how advanced that prototype was, or how controlled the tests were), but I was blown away by the Google self-driving car, as were most of the people on HN based on the enthusiasm it received here. I thought it would take at least a decade or so before people took it for granted, but you've set a Wow-to-Meh record in under a year.
Once again, I strongly encourage you to fire up Youtube or dust off an old computer, and really try and remember exactly what the tech environment was really like in previous decades for the average consumer. Zip drives, massive boot times, 5 1/4 floppy disks, EGA, 20mb external hard drives the size of a shoe box, 30 minute downloads for a single mp3 file, $2,000 brick phones, jpgs loading up one line of pixels a second, etc.
To be clear, I'm not a Kurzweil fanboy. He's not some omniscient oracle, bringing down the future on stone tablets from the mount. What he is is a meticulous, thoughtful, and voracious researcher of technological R&D and trends, and a reasonably competent communicator of his findings. I'm very familiar with the track records of others who try and pull off a similar feat, and he's not perfect, but he's far and away the best barometer out there for the macro trends of the tech industry. If his findings were so obvious, why is everyone else so miserable at it? Furthermore, his 1999 book was greeted with the same skepticism and incredulity that all of his later books were.
For some of your other points, I've included links below:
Research has been initiated on reverse engineering the brain - Kurzweil was clearly talking about an undertaking like the Blue Brain project. Henry Markram, the head of the project, is predicting that around 2020 they will have reverse engineered and simulated the human brain down to the molecular level.
"A 1000 dollar pc can perform about a trillion calculations per second. -- This happened. This is also an extension based on Moore's law and so in some sense predicted a decade previous." - Pretty much all of Kurzweil's predictions boil down to Moore's law, which he would be the first to admit. I'm not sure what you're trying to say.
Autonomous nanoengineered machines have been demonstrated and include their own computational controls. - If you read his prediction in context, he's clearly talking about very primitive and experimental efforts in the lab, which we are certainly closing in on:
http://en.wikipedia.org/wiki/Nadrian_Seeman - If you've been following any of Nadrian Seeman's work on nanobots constructed with DNA, Kurzweil's predictions seem pretty close
Okay, I looked at some of the "reasonably unbiased job of grading his own predictions." I'll pick one, for lack of interest in expanding upon everything.
He writes: “Personal computers are available in a wide range of sizes and shapes, and are commonly embedded in clothing and jewelry,” and grades it by noting that when he wrote this prediction in the 1990s, portable computers were large heavy devices carried under your arm.
But I gave two specific counter-examples. The JavaRing from 1998 is a personal computer in a ring, and the TRS-80 pocket computer is a book-sized personal computer from the 1980s, which included BASIC. So the first part, "are available", was true already in 1999. Because "are available" can mean anything from a handful to being in everyone's hand.
Kurzweil then redefines or expands what "personal computer" means, so that modern iPods and smart phones are included. Except that with that widened definition, the cell phone and the beeper are two personal computers which many already had in 1999, yet were not "large heavy devices carried under your arm", and which some used as fashion statements. I considered then rejected the argument that a cell phone which isn't a smart phone doesn't count as a personal computer, because he says that computers in hearing aids and health monitors woven into undergarments are also personal computers, so I see no logic for excluding 1990s-era non-smart phones which are more powerful and capable than a modern hearing aid.
There were something like 750 million cell phone subscribers in the world in 2000, each corresponding to a "personal computer" by this expanded definition of personal computer. By this expanded definition, the 100 million Nintendo Game Boys sold in the 1990s are also personal computers, and the Tamagotchi and other virtual pets of the same era are not only personal computers, but also used as jewelry similar to how some might use an iPod now.
He can't have it both ways. Either a cell phone (and Game Boy and Tamagotchi) from 1999 is a personal computer or a hearing aid from now is not. And if the cell phone, Game Boy, etc. count as personal computers, then they were already "common" by 1999.
Of course, what does "common" mean? In the 1999 Python conference presentation which included the phrase "batteries included", http://www.cl.cam.ac.uk/~fms27/ipc7/ipc7-slides.pdf , the presenter points out that a "regular human being" carries a cell phone "spontaneously." I bought by own cell phone by 1999, and I was about the middle of the pack. That's common. (Compare that to the Wikipedia article on "Three-dimensional integrated circuit" which comments "it is not yet widely used." Just what does 'common' mean?)
Ha-ha! And those slides show that I had forgotten about the whole computer "smartwatch" field, including a programmable Z-80 computer in the 1980s and a smartwatch/cellphone "watch phone" by 1999!
I therefore conclude, without a doubt, that the reason why the prediction that "Personal computers are available in a wide range of sizes and shapes, ... " was true by 2009 was because it was already true in 1999.
As regards the "obscure German R&D project", that's not my point. A Nova watching geek of the 1980s would have seen the episode about the autonomous car project at CMU. And Kurzweil himself says that the prediction was wrong because he predicted 10 years and when he should have said 20 years. But my comment was responding to the enthusiasm of rpm4321 who wrote "Predicting that self-driving cars would occur in ten years in the late 90s is pretty extraordinary, especially if you go to youtube and load up a commercial for Windows 98 and get a flashback of how primitive the tech environment actually was back then."
I don't understand that enthusiasm when 1) the prediction is acknowledged as being wrong, 2) autonomous cars already existed using the 'primitive tech environment' of the 1990s, and 3) the general prediction that it would happen, and was more than science fiction, was widely accepted, at least among those who followed the popular science press.
"I strongly encourage you to fire up Youtube or dust off an old computer, and really try and remember exactly what the tech environment was really like in previous decades for the average consumer"
I started working with computers in 1983. I have rather vivid memories still of using 1200 baud modems, TV screens as monitors, and cassette tapes for data storage. I even wrote code using printer-based teletypes and not glass ttys. My complaint here is that comments like "primitive" denigrate the excellent research which was already done by 1999, and the effusive admiration for the predictions of 1999 diminish the extent to which those predictions were already true when they were made.
Man, those are actually some pretty tame predictions, especially if most of them were already in some form of production. I guess the future ain't all it's cracked up to be.
>Kurzweil seems to always get technological capabilities right. Where he sometimes falls flat is in technological adoption - how actual consumers are willing to interact with technology, especially where bureaucracies are involved - see his predictions on the adoption of e-learning in the classroom, or using speech recognition as an interface in an office environment.
This is a problem common to other AI pioneers, including Norvig.
Text to speech is easy, right? You just get a bunch of white noise and squirt it out a speaker with a bit of envelope shaping. Short rapid burst is a 'tuh'. Or maybe a 'kuh'. Or a 'puh' or 'duh' or 'buh'. More gentle with a bit of sustain is a 'luh' or 'muh' sound. I had a speech synth on a CP/M computer that did this. You might understand what was being said, if you knew what was being said.
People had lists of phonemes and improved those.
Then people experimented with different waveforms.
Why did all those people take so long to make the jump to diphones, to smoothing out the joins between individual phonemes?
You had the Japanese with their '5th generation' research who were physically modelling the human mouth, tongue, and larynx, and blowing air through it. (You don't hear much about the Japanese 5th generation stuff nowadays. I'd be interested if there's a list of things that come from that research anywhere.)
Saying "talking computers" is easy; doing it is tricky.
> By any measure the project was an abject failure. At the end of the ten year period they had burned through over 50 billion yen and the program was terminated without having met its goals. The workstations had no appeal in a market where single-CPU systems could outrun them, the software systems never worked, and the entire concept was then made obsolete by the internet.
I'm probably preaching to the wrong crowd here, but I'm not speaking with hindsight bias. I mean, even in the early days others were making similar predictions - not all of them were as vocal though. Also, I'm pretty sure he did not single-handedly invent all the listed things before anyone else had even thought of them -- as is the case with any invention, you probably have a few thousand people thinking about the idea or researching it before one person steps forward with a good implementation -- and I'm sure Kurzweil found inspiration in his colleagues' work, and there were probably earlier implementations of his ideas.
This does not mean he was not smart; I am simply stating a general truth: there are few "original" inventions, and many "obvious" inventions. If you do not think these things were obvious, how long do you think it would take for the next implementation to appear? I would bet 1-3 years at most. No single human is that extraordinary -- some just work harder than others at becoming visible.
>Not to be a downer, but text-to-speech, speech recognition, music synthesis, and so forth are all fairly obvious applications of computer science that anyone could have pioneered without being a genius. Likewise, predicting self-driving cars is nothing science fiction has not already done.
What does "predicting" a thing has with actually IMPLEMENTING it? Here, I predict "1000 days runtime per charge laptop batteries". Should I get a patent for this "prediction"?
No, text-to-speech, speech recognition and synthesis are not "fairly obvious applications of computer science that anyone could have pioneered". And even if it was so, to be involved in the pioneering of ALL three takes some kind of genius.
Not only that, but all three fields are quite open today, and far from complete. Speech recognition in particular is extremely limited even today.
Plus, you'd be surprised how many of those "anyones" failed to pioneer such (or even more) "obvious" applications. Heck, the Incas didn't even have wheels.
(That said, I don't consider Kurzweil's current ideas re: Singularity and "immortality" impressive. He sounds more like the archetypal rich guy (from the Pharaohs to Howard Hughes) trying to cheat death (which is a valid pursuit, I guess) than a scientist).
“Ray’s contributions to science and technology, through research in character and speech recognition and machine learning, have led to technological achievements that have had an enormous impact on society – such as the Kurzweil Reading Machine, used by Stevie Wonder and others to have print read aloud. We appreciate his ambitious, long-term thinking, and we think his approach to problem-solving will be incredibly valuable to projects we’re working on at Google.”
I'm more reluctant to trash Kurzweil, but this image-driven hiring policy is taking on the look of some sort of bizarre Victorian menagerie where they keep old famous computer scientists in wrought iron cages for Googlers' amusement. It's like the Henry Ford museum, but they're collecting people. That's more than a little weird.
It could be argued that such things happen in academia too.
But this is not the same: famed people established themselves at Microsoft and AT&T. There wasn't this sort of hiring of celebrity. Someone of Kurzweil's stature could be doing his own research and simply be hired on as a board member.
There are plenty of widely known people at Google, but seemingly in spite of Google rather than because of it. Maybe that's a consequence of 20% time too. But if people's reputations are staked in things other than the company, the company seems to borrow more reputation than it makes.
>Sure, he did some interesting work a long time ago, before he got weird.
He was weird then too. That's why he did such interesting work. His work, combined with a lack of fame at that time, just kept the weird from showing through.
I suspect that genius is made up almost, but not quite, entirely of crazy.
Rather, genius is the subset of crazy recognized by society as useful. The moment of recognition is the point at which it becomes venerated instead of derided.
A lot of visionaries are misunderstood because they can "see" things that others can't at the time; it doesn't make sense to those others, which is why they think the visionaries are "weird" or "crazy". From that point of view, Richard Stallman was also a visionary, even if it took 2-3 decades before his visions of states or companies putting backdoors in your proprietary software came true.
Manager to programmers: "Hey that weird free software guy thinks we put "backdoors" in our software, whatever that means. Maybe he's right and our competitor do this already!! ...Quick, code a backdoor into our system too, we don't want to fall behind!"
> I just can't see Kurzweil being in the same league as Peter Norvig.
The problem with Peter Norvig is that he comes from a mathematical background and is a strong defender of the use of statistical models that have no biological basis.[1] While they have their use in specific areas, they will never lead us to a general purpose strong AI.
Lately Kurzweil has come around to see that symbolic and Bayesian networks have been holding AI back for the past 50 years. He is now a proponent of using biologically inspired methods similar to Jeff Hawkins' approach of Hierarchical Temporal Memory.
Hopefully, he'll bring some fresh ideas to Google. This will be especially useful in areas like voice recognition and translation. For example, just last week, I needed to translate "I need to meet up" to Chinese. Google translates it to 我需要满足, meaning "I need to satisfy". This is where statistical translations fail, because statistics and probabilities will never teach machines to "understand" language.
For several hundred years, inventors tried to learn to fly by creating contraptions that flapped their wings, often with feathers included. It was only when they figured out that wings don't have to flap and don't need feathers that they actually got off the ground.
It's still flight, even if it's not done like a bird. Just because nature does it one way doesn't mean it's the only way.
(On a side note, multilayer perceptrons aren't all that different from how neurons work - hence the term "artificial neural network". But they also bridge to a pure mathematical/statistical background. The divide between them is not clear-cut; the whole point of mathematics is to model the world.)
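(To make the comparison concrete, a single unit in such a network is just a weighted sum pushed through a nonlinearity -- a very loose caricature of a neuron integrating synaptic inputs and deciding whether to fire. A minimal sketch, purely illustrative:)

    # One artificial "neuron": a weighted sum of inputs squashed by a sigmoid.
    # The biological analogy is loose (inputs ~ dendrites, weights ~ synapses,
    # the nonlinearity ~ a firing threshold), which is where the name comes from.
    import math

    def unit(inputs, weights, bias):
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-activation))

    # A multilayer perceptron is just layers of these feeding into each other.
    hidden = [unit([0.5, 0.2], [1.5, -2.0], 0.1), unit([0.5, 0.2], [-0.7, 0.3], 0.0)]
    output = unit(hidden, [2.0, -1.0], -0.5)
    print(output)

(Whether that abstraction preserves enough of the biology to matter is, of course, exactly the disagreement in this thread.)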
Nobody knows how neurons actually work: http://www.newyorker.com/online/blogs/newsdesk/2012/11/ibm-b.... We are missing vital pieces of information needed to understand that. Show me your accurate C. elegans simulation and I will start to believe you have something.
Perhaps in a hundred years, this is the argument: for several hundred years, inventors tried to learn to build an AI by creating artificial contraptions, ignoring how biology worked, inspired by an historically fallacious anecdote about how inventors only tried to learn to fly by building contraptions with flapping wings. It was only when they figured out that evolution, massively parallel mutation and selection, is actually necessary that they managed to build an AI.
> For several hundred years, inventors tried to learn to fly by creating contraptions that flapped their wings...
To quote Jeff Hawkins:
"This kind of ends-justify-the-means interpretation of functionalism leads AI researchers astray. As Searle showed with the Chinese Room, behavioral equivalence is not enough. Since intelligence is an internal property of a brain, we have to look inside the brain to understand what intelligence is. In our investigations of the brain, and especially the neocortex, we will need to be careful in figuring out which details are just superfluous "frozen accidents" of our evolutionary past; undoubtedly, many Rube Goldberg–style processes are mixed in
with the important features. But as we'll soon see, there is an underlying elegance of great power, one that surpasses our best computers, waiting to be extracted
from these neural circuits.
...
For half a century we've been bringing the full force of our species' considerable cleverness to trying to program intelligence into computers. In the process we've come up with word processors, databases, video games, the Internet, mobile phones, and convincing computer-animated dinosaurs. But intelligent machines still aren't anywhere in the picture. To succeed, we will need to crib heavily from
nature's engine of intelligence, the neocortex. We have to extract intelligence from within the brain. No other road will get us there. "
As someone with a strong background in Biology who took several AI classes at an Ivy League school, I found all of my CS professors had a disdain for anything to do with biology. The influence of these esteemed professors and the institution they perpetuate is what's been holding the field back. It's time people recognize it.
> As Searle showed with the Chinese Room, behavioral equivalence is not enough.
The Chinese Room experiment doesn't show only that. It also shows how important the inter-relationships between the component parts of a system are.
We're reducing the Chinese Room to the man inside and the objects he is using, such as a lookup table. But what we're missing is the complex pattern between the answers, the structure and mutual integration that exists in their web of relations.
If we could reduce a system to its parts, our brains would be just a bag of neurons, not a complex network. We'd come to the conclusion that brains can't possibly have consciousness on the grounds that there is no "consciousness neuron" to be found in there. But consciousness emerges from the inter-relations of neurons, and the Chinese Room can understand Chinese on account of its complex inner structure, which models the complexity of the language itself.
I'll bite. Tell us, concretely, what is to be gained from a biological approach.
Honestly I imagine we'd find more out from philosophers helping to spec out what a sentient mind actually is than we would from having biologists trying to explain imperfect implementations of the mechanisms of thought.
I'm short on time, so please forgive my rushed answer.
It will deliver on all of the failed promises of past AI techniques. Creative machines that actually understand language and the world around them. The "hard" AI problems of vision and commonsense reasoning will become "easy". You won't need to program into a computer the logic that all people have hands or that eyes and noses are on faces. They will gain this experience as they learn about our world, just like their biological equivalents, children.
Here's some more food for thought from Jeff Hawkins:
"John Searle, an influential philosophy professor at the University of California at Berkeley, was at that time saying that computers were not, and could not be,
intelligent. To prove it, in 1980 he came up with a thought experiment called the Chinese Room. It goes like this:
Suppose you have a room with a slot in one wall, and inside is an English-speaking person sitting at a desk. He has a big book of instructions and all the pencils and scratch paper he could ever need. Flipping through the book, he sees that the instructions, written in English, dictate ways to manipulate, sort, and compare Chinese characters. Mind you, the directions say nothing about the meanings of the Chinese characters; they only deal with how the characters are to be copied, erased, reordered, transcribed, and so forth.
Someone outside the room slips a piece of paper through the slot. On it is written a story and questions about the story, all in Chinese. The man inside doesn't speak
or read a word of Chinese, but he picks up the paper and goes to work with the rulebook. He toils and toils, rotely following the instructions in the book. At times
the instructions tell him to write characters on scrap paper, and at other times to move and erase characters. Applying rule after rule, writing and erasing
characters, the man works until the book's instructions tell him he is done. When he is finished at last he has written a new page of characters, which unbeknownst
to him are the answers to the questions. The book tells him to pass his paper back through the slot. He does it, and wonders what this whole tedious exercise has
been about.
Outside, a Chinese speaker reads the page. The answers are all correct, she notes— even insightful. If she is asked whether those answers came from an intelligent mind that had understood the story, she will definitely say yes. But can
she be right? Who understood the story? It wasn't the fellow inside, certainly; he is ignorant of Chinese and has no idea what the story was about. It wasn't the book,
15 which is just, well, a book, sitting inertly on the writing desk amid piles of paper.
So where did the understanding occur? Searle's answer is that no understanding did occur; it was just a bunch of mindless page flipping and pencil scratching. And
now the bait-and-switch: the Chinese Room is exactly analogous to a digital computer. The person is the CPU, mindlessly executing instructions, the book is
the software program feeding instructions to the CPU, and the scratch paper is the memory. Thus, no matter how cleverly a computer is designed to simulate
intelligence by producing the same behavior as a human, it has no understanding and it is not intelligent. (Searle made it clear he didn't know what intelligence is;
he was only saying that whatever it is, computers don't have it.)
This argument created a huge row among philosophers and AI pundits. It spawned hundreds of articles, plus more than a little vitriol and bad blood. AI defenders
came up with dozens of counterarguments to Searle, such as claiming that although none of the room's component parts understood Chinese, the entire room as a whole did, or that the person in the room really did understand Chinese, but
just didn't know it. As for me, I think Searle had it right. When I thought through the Chinese Room argument and when I thought about how computers worked, I didn't see understanding happening anywhere. I was convinced we needed to understand what "understanding" is, a way to define it that would make it clear when a system was intelligent and when it wasn't, when it understands Chinese
and when it doesn't. Its behavior doesn't tell us this.
A human doesn't need to "do" anything to understand a story. I can read a story quietly, and although I have no overt behavior my understanding and comprehension are clear, at least to me. You, on the other hand, cannot tell from
my quiet behavior whether I understand the story or not, or even if I know the language the story is written in. You might later ask me questions to see if I did,
but my understanding occurred when I read the story, not just when I answer your questions. A thesis of this book is that understanding cannot be measured by external behavior; as we'll see in the coming chapters, it is instead an internal metric of how the brain remembers things and uses its memories to make predictions. The Chinese Room, Deep Blue, and most computer programs don't have anything akin to this. They don't understand what they are doing. The only way we can judge whether a computer is intelligent is by its output, or behavior.
First, I don't feel this answers angersock's question concerning concrete applications of cognitive neuroscience to artificial intelligence.
Second, despite running into it time and again over the years, Searle's Chinese room argument still does not much impress me. It seems to me clear that the setup just hides the difficulty and complexity of understanding in the magical lookup table of the book. Since you've probably encountered this sort of response, as well as the analogy from the Chinese room back to the human brain itself, I'm curious what you find useful and compelling in Searle's argument.
I remain interested in biological approaches to cognition and the potential for insights from brain modelling, but I don't see how it's useful to disparage mathematical and statistical approaches, especially without concrete feats to back up the criticism.
Traditional AI has had half a century of failed promises. Jeff's Numenta had a major shakeup over this very topic and has only been working with biologically inspired AI for the past 3 years. Kurzweil also has just recently come around. Comparing Grok to Watson is like putting a yellow belt up against Bruce Lee. Give it some time to catch up.
In university I witnessed first hand the institutional discrimination against biological neural nets. My original point was that Google could use the fresh blood and ideas.
You took the wrong lesson from the Chinese Room. Behavioral equivalence is enough, and the Chinese Room shows that behavioral equivalence isn't possible to achieve through hypothetical trivial implementations like "a room full of books with all the Chinese-English translations".
" the use of statistical models that have no biological basis."
This is irrelevant.
This is like saying a computer using an x86 processor is different, from the point of view of the user, from an ARM computer, beyond differences in software.
Or like saying DNA is needed for "data storage" in biological systems and not another technology
Sure, you can get inspiration from biology, but that doesn't necessarily mean you have to copy it.
""I need to meet up" to Chinese. Google translates it to 我需要满足, meaning "I need to satisfy". This is where statistical translations fail, "
It's not really a fault of statistical translation (more likely a data quality issue), even though it has its limitations. Besides, Google translation has been successful exactly because it's better than other existing methods (and Google has the resources, both in people and data, to make it better).
I think that the google translator did pretty good on that fragment.
Garbage in, garbage out! If you use 'I' in a sentence fragment when you mean to use 'We' then you can't really blame the translator for getting it wrong.
'We need to meet up' is a sentence with a completely different meaning from the incorrect and semantically confusing 'I need to meet up', it really does sound as if you need to meet up to some expectation.
In further defense of Google, "I need to meet up with him" translates as 我需要与他见面.
If someone wants to attack Google's Chinese translation, it should be over snippets like 8十多万 or its failure to recognize many personal and place names which could easily be handled by a pre-processor. Google has never been competent in China in part because of their hiring decisions, but this isn't Franz Och's fault.
Obviously Google Translate is not error free, nor is any statistical translation system going to be comparable to a human translator in the very near future, but you're underestimating the current state of statistical translation. Granted, I'm not a native speaker, but I think "I need to meet up" is not even a sentence with proper grammar. The underlying model probably predicted something like meeting (satisfying) requirements due to the lack of an object in the sentence and the lack of context.

Situations like this, where the input is very short and noisy, are obviously going to be a weakness of statistical systems for a long time to come. But looking at how far we are, technologically, from mastering biological systems, I think it's safe to say this is going to be the way of doing it for a while, and it will be very successful in translating properly structured texts if proper context can be provided.

Currently statistical translations have (almost) no awareness of context besides some phrase-based or hierarchical models. Many people are probably not factoring in the fact that with exponentially more data and exponentially more computing power, a model can utilize the context of a whole book while translating just a sentence from that book - which is actually still much less than what human translators utilize in terms of context. While translating a sentence, I might even have to draw on what was on the news the night before to infer the correct context. We are currently very far from feeding this kind of information to our models, so I'd say this kind of criticism of statistical translation is very unfair.
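To make the "short and noisy input" failure mode concrete, here's a toy caricature of phrase-based translation (the phrase table and scores are entirely made up for illustration; real decoders are far more sophisticated): with nothing in the fragment to disambiguate the two senses of "meet up", the higher-scoring one wins, and you get exactly the error discussed above.

    # Toy caricature of phrase-based statistical MT -- invented phrase table and scores,
    # only meant to illustrate why a short, context-free fragment picks the wrong sense.
    PHRASE_TABLE = {
        "i need to": [("我需要", 0.9)],
        "meet up": [("满足", 0.55),        # "satisfy" sense, common in requirements-style text
                    ("见面", 0.45)],       # "meet in person" sense
        "meet up with him": [("与他见面", 0.8),
                             ("满足他", 0.2)],
    }

    def translate(sentence):
        # Greedy longest-match segmentation, then take the top-scoring phrase:
        # a cartoon of what a real decoder does, with no context model at all.
        words = sentence.lower().split()
        out, i = [], 0
        while i < len(words):
            for j in range(len(words), i, -1):   # prefer the longest known phrase
                chunk = " ".join(words[i:j])
                if chunk in PHRASE_TABLE:
                    out.append(max(PHRASE_TABLE[chunk], key=lambda t: t[1])[0])
                    i = j
                    break
            else:
                i += 1                           # skip unknown word
        return "".join(out)

    print(translate("I need to meet up"))            # 我需要满足 -- the "satisfy" sense wins
    print(translate("I need to meet up with him"))   # 我需要与他见面 -- the longer phrase disambiguates

Real systems add language models, reordering, and vastly bigger tables, but the weakness is the same: with little or no surrounding context feeding the model, short ambiguous fragments are exactly where it breaks.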
"We need to meet up" also translates incorrectly "我们需要满足". In fact, I did not originally use a fragment, I wrote a full sentence that Google repeatedly incorrectly translated. I only used a fragment here to simply my example.
To avoid the wrath of the Google fanboys, a better example would have been the pinnacle of statistical AI:
The category was "U.S. Cities" and the clue was: "Its largest airport is named for a World War II hero; its second largest for a World War II battle." The human competitors Ken Jennings and Brad Rutter both answered correctly with "Chicago" but IBM's supercomputer Watson said "Toronto."
Once again, Watson, a probability based system failed where real intelligence would not.
Google has done an amazing job with their machine translation, considering they cling to these outdated statistical methods. And just as the speech recognition field has found out over the last 20 years, they will continue to get diminishing returns until they start borrowing from nature's own engine of intelligence.
You are exhibiting a deep misunderstanding of human intelligence.
Ken Jennings thought that a woman of loose morals could be called a "hoe" (with an "e", which makes no sense!), when the correct answer was "rake". Is Ken Jennings therefore inhuman?
That's roughly correct, but IIRC Ray sold that company 30 years ago; it later went on to buy Nuance, and subsequently quite a few more speech-related companies before hooking up with Apple for Siri. So while your comment is correct, I'd be surprised if any of that initial technology was actually being used for Siri.
> Sure, he did some interesting work a long time ago, before he got weird.
You know, it would be wonderful if Ray Kurzweil actually works on software/hardware projects, and he's just hush-hush because he doesn't want to release experiments. Maybe he does more than writing books and speaking at conferences, and he secretly provisions ec2 clusters to experiment with Hadoop or whatever. Maybe he's not just some old geezer that pops lots of pills, maybe he's an old geezer that pops pills and writes Go.
At least, that's what I tell myself to not be as angry about his "prediction from a distance" branding.
On a somewhat related note, http://heybryan.org/fernhout/ has some old emails someone sent to Ray, exploring his lack of involvement in the open source transhumanist hardware/software community.
He's a "connector", a large company need people like him. Even if he was a sub-par engineer, and I bet he's not, he would still be a valuable hire, especially if they want to rebrand themselves as an "AI company".
Saw him give a talk promoting his latest book last month, and was heavily disappointed. Ideas are presented in a way that fits nicely together, but they ultimately lack any depth or critical insight. I recall someone calling it "creationism for people with an IQ over 140"; it's a fair description.
It's a shame; he's brought many great contributions to our field, but I fear he jumped the shark a while ago. Maybe going to Google will force him to work on solutions to problems whose correctness can be more easily assessed.
>I recall someone calling it "creationism for people with an IQ over 140"; it's a fair description.
Really? Because if so, then they stole that quote almost verbatim from Mitch Kapor when he was discussing the singularity in 2007. And it seems to have a lot less relevance to a book about how the brain works than it does to an imagined singularity.
>Mitch Kapor, the founder of Lotus Development Corporation, has called the notion of a technological singularity "intelligent design for the IQ 140 people...This proposition that we're heading to this point at which everything is going to be just unimaginably different—it's fundamentally, in my view, driven by a religious impulse. And all of the frantic arm-waving can't obscure that fact for me."
In fairness, that's how good quotes usually work: they tend to be retold time and again and adapted for other purposes until it's no longer clear who said it originally. So I wouldn't be too quick to call foul on this one. I'm not sure one can "steal" a quote...
I'm fine with reusing quotes, but in this instance it seems like a rather ham-handed application of it.
The singularity reeks of religious concepts. Kurzweil even called his book "The Age of Spiritual Machines" before it was "The Singularity is Near." He literally thinks he's going to be able to live forever (and the technology to do so will be available within his own lifetime). Yada yada... basically what I'm saying is that the quote fits that book perfectly.
Now we're talking about his new book "How to Create a Mind", which is a theory about how the brain works and how to reverse engineer it, and the quote doesn't seem to fit. I'm guessing someone was just trying to sound intelligent... but then why does the OP agree with them?
If by that you mean that scientists are attempting to achieve what religions have been falsely promising, then ok, but so what? Before we had medicine, people could only pray to try to heal the sick. Then physicians actually started studying the body and figuring out how to cure disease, fortunately not abandoning the idea because religions had failed to deliver.
> He literally thinks he's going to be able to live forever (and the technology to do so will be available within his own lifetime).
An ambitious and unlikely goal, but it's not prohibited by the laws of physics (ignoring the heat death of the universe for the moment). I'll take that optimism over the much more common attitude that accepts the destruction of billions of sentient beings as inevitable and often even desirable.
I think it's pretty obvious, but let me quote Neal Stephenson:
>I can never get past the structural similarities between the singularity prediction and the apocalypse of St. John the Divine. This is not the place to parse it out, but the key thing they have in common is the idea of a rapture, in which some chosen humans will be taken up and made one with the infinite while others will be left behind.
Poll Americans (most of whom are Christian). Close to half will tell you the end of the world and thus the rapture is going to happen in their own lifetime. Christians have been believing that the rapture was around the corner for literally the last 2000 years. Arrogant if you ask me.
It wouldn't be so bad if Kurzweil's dates didn't line up conveniently with his own mortality. He'll be around 97 at the time he's predicting the singularity will occur.
So combine that with the concept of a) eternal life, b) meeting your relatives in heaven (Kurzweil is planning to resurrect his dead father), and c) AI and post-humans that are essentially godlike. Sure, Kurzweil will show you a bunch of exponential graphs to make it all seem so reasonable, but that's why Kapor says "creationism for the IQ 140 people."
That's not optimism. It's wishful thinking. If you can't see that it has all the fundamentals of a religion, I'm not sure what else to say.
>An ambitious and unlikely goal, but it's not prohibited by the laws of physics
An interesting set of questions to follow up this fair hypothesis is:
Is it better or worse than existing religions? In what way? Societally? Individually? Scientifically?
Is it better or worse than no religion? In what way? Societally? Individually? Scientifically?
Religion is arrogant, for sure. And it all started when some of our very distant ancestors decided to bury their dead instead of leaving them in a pile of trash. Which, interestingly, is considered one of the defining points where we became "humans" rather than just intelligent monkeys.
Good questions, but hard to answer of course. I'd say the singularity is better if it encourages people to go into science as a result and try to make it happen. Much like how science fiction sometimes inspires technology.
Is that happening? I don't know. I'm worried that people are so confident that the singularity is not only inevitable, but just around the corner, that they're simply buckling up for the ride.
I would say that it doesn't matter much; we only have to live with the consequences of the belief for a few decades.... but then again failed predictions don't often discourage those who believed.
> I'm worried that people are so confident that the singularity is not only inevitable, but just around the corner, that they're simply buckling up for the ride.
That's a valid concern. But the nice thing here is that people don't only have to wait for it (like I guess many do), but they can actually work to speed it up.
> It wouldn't be so bad if Kurzweil's dates didn't line up conveniently with his own mortality.
His age probably had little influence on his predictions. The Maes-Garreau law, as fun as it sounds, is probably not true[1]
> So combine that with the concept of a) eternal life, b) meeting your relatives in heaven (Kurzweil is planning to resurrect his dead father), and c) AI and post humans that are essentially godlike.
I'm personally skeptical of b), except for frozen people (cryonics). If entropy wiped out the information, there's no way to resurrect someone. a) and c), however, seem to be merely obvious consequences of Friendly AI (I assume immortality and a wicked IQ are good things). And an intelligence explosion is quite plausible. You're a bit quick to dismiss those ideas just because they happen to pattern-match religion. (Now, I agree more with this[2] than with the specifics of Kurzweil's ideas.)
> As far as I know, neither is God.
Current physics is reductionist. A supernatural God wouldn't fit into that. Current physics is deterministic (modulo the lingering many-worlds controversy) and universal. Miracles, as direct violations of the laws of physics, wouldn't fit into that. A Lord Outside the Matrix, maybe, but that's a different beast.
> That's not optimism. It's wishful thinking. If you can't see that it has all the fundamentals of a religion, I'm not sure what else to say.
Fundamentals of religion are irrelevant. The difference between the singularity and the rapture is that the former - if we ever reach it - will be of human creation: of our science and technology, not of supermagical powers. We can pretty much see the steps from here to there, even though we haven't executed them all. Maybe it will take 200 years, not 50, but we will get there (unless we blow ourselves up before that).
Highlighting similarities between religion and the singularity has no more merit than saying that addition is bad because Hitler and Stalin did it.
> Poll Americans (most of whom are Christian). Close to half will tell you the end of the world and thus the rapture is going to happen in their own lifetime. Christians have been believing that the rapture was around the corner for literally the last 2000 years. Arrogant if you ask me.
A bit of theological nitpicking: the notion of a pre-millennial 'Rapture' is a late development in Protestant theology that first appears in the 1600s and doesn't achieve mainstream popularity until the late 20th century. Prior to that, Christian eschatological expectations were for the return of Christ to establish a just government once and for all.
Your larger point is still valid, but the application of the notion of a 'Rapture' is mostly anachronistic in any context other than the last hundred years.
If Kurzweil's technological miracle predictions come true, one of the side benefits will be that we can start talking about them in phlegmatic, everyday language and forget that there used to be religious undertones to them.
I think The Age of Spiritual Machines is actually the perfect title. The whole idea of the singularity is that you can't predict the nature of emergent phenomena based on lower-level inputs, specifically with regard to the future of technology. This is basically analogous to the fact that we can't predict consciousness or intelligence by looking at the properties of matter or even biology.
I also don't see anything religious about the concept. Religion by (etymological) definition seeks to understand or connect with the ultimate source of things, whereas the singularity is A) about the future and B) says that the future is going to be impossible to predict or understand because of accelerating change. At best you might be able to argue that it's vaguely teleological, but I'm not even sure that that is correct, because the theory doesn't make any real predictions about what happens in the long term after the singularity.
I think people pattern-match the singularity to religion because it, essentially, promises the same things - long life / immortality, solutions to many problems of humanity, superhuman beings / intelligences, etc. But this match is wrong; it doesn't matter if religions talk about the same things. It only means that those are human needs and desires. In the case of the singularity, there's a real chance that we could do all of this without any need for supernatural powers, so it's worth discussing.
I tend to feel the same way about him and the state of his life at the moment.
I am very grateful for the inventions he brought forth and his work on AI but I think his current goals in life are unreasonable and of course, related to the death of his father.
As much as he doesn't want to be human anymore, his entire goal in life relies on the human condition... to reconnect with his father and transcend life in its current form.
I think he could do so much more at the moment if, like you said, he would focus on problems that can be solved as soon as possible and demonstrate a use of his solution.
"Ideas are presented in a way to fit nicely together, but ultimately lack any depth or critical insights."
While there are a number of obvious problems with the theory, it's still an invaluable idea. Even if 2042 doesn't pan out, Kurzweil has still provided an enormously powerful tool to help understand the world around us. (Well, technically he didn't invent the idea, but he was the one who did most of the work aggregating the data.)
>It's a shame, he's brought many great contributions to our field
What field? The fluffsters? Saying stuff like, "Ideas are presented in a way to fit nicely together, but ultimately lack any depth or critical insights.", is saying nothing.
It's seemed pretty clear to me for some time that Google's real mission is AI/singularity oriented and everything else is just a step along that road. It may not be what the day-to-day view is in the trenches, but it seems like the high level plan.
A hire like this one certainly reinforces that perception.
I don't know if it's truly possible to accomplish, but it's fascinating to see a major company taking steps in that direction.
I've looked at Google this way since George Dyson wrote his "Turing's Cathedral" essay after he visited in 2005 [1].
The comments about book scanning led to some controversy at the time [2], which gave a glimpse into Google's AI motivations that have now become much more explicit, thanks to projects like Google Now, Google Glass, and self-driving cars.
"Organize the world's information and make it universally accessible and useful."
If you take that to the limit, the logical consequence is some sort of planet-wide consciousness that can instantly pull up any of humanity's collective knowledge at a moment's notice.
Thinking about it, I don't think so. If you ask Google and also ask a person who is an expert on a subject or loves it, the person will give you much better information and links.
Google has thousands of employees who all have a moderate amount of autonomy. I don't think they have a singular goal. They just do a bunch of stuff around organizing the world's data. Naturally AI/singularity oriented projects tend to emerge.
I'm somewhat surprised there are comments debating what use he could be to Google or what interest they might have in him - Google is one of the primary backers of Singularity University. They already have a working relationship. Now he's an employee. Don't get how this could be a stretch.
Singularity U, as far as I understand it, is not really there so people can more quickly get to the point of uploading their brain to the cloud or anything - it's essentially for business strategists who want a better grasp of where things will be 5-10+ years out. If the Goog believes strongly in the Kurz's ability to do this, then it seems like a pretty nice score for the Goog.
> They already have a working relationship. Now he's an employee. Don't get how this could be a stretch.
Maybe because of his role at Google, "Director of Engineering". That's not a good description of what Singularity University offers their customers. They do maybe one or two field trips to BioCurious and call it quits.
Also, why is Singularity University managing TedxAustin? That was a bizarre email to see.
Really? Their stated goal is: "assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies and apply, focus and guide these tools to address humanity’s grand challenges."
Why would this not be in alignment w/ Google's aim for such a position? Why would they not want a strategist who they believe could direct their engineering staff in this manner?
The people who attend the university are CEOs, CTOs... Directors of Engineering, etc. It's not for fringe kooks to congregate in celebration of the upcoming nerd rapture. Not at $25k/10 weeks it ain't.
I get that he's a polarizing figure. But there are some very powerful people in this world who believe the man can walk on water.
You're all sorta right. I worked at Singularity University for 2 years. They do 2 things:
1. Educate fabulously wealthy people in expensive executive programs
2. Use that money to put on a YC-esque incubator during the program, where people come to build companies that use the technology of their sponsors. Google was one of the first sponsors. Peter Norvig was on the faculty for a couple of years. So was Astro Teller, who heads their special projects (Google X).
> Why would this not be in alignment w/ Google's aim for such a position?
Singularity University helps you meet people who can help you facilitate engineering feats, in the sense that more money helps you facilitate engineering feats. That's okay in my book.
But, you're not going to end up with a working knowledge of mechanical engineering or computational neuroscience by hanging out at the Singularity University lectures.
People are surprised at how far "Director of Engineering" can skew in that direction; they aren't surprised that Google is interested in the future.
> The people who attend the university are CEOs, CTOs... Directors of Engineering, etc. It's not for fringe kooks
A much larger percentage of CEOs, CTOs, D of E's depend on psychics and astrologers. So that helps define the strength of the "other powerful people follow him" defense.
I see what DRF means, and The Singularity is Near did seem mostly a perfunctory literature review, with important issues not discussed, just skimmed over. (For example, he doesn't discuss the causes of accelerating returns, doesn't support the causes with data, only the effects. Another example: is it necessarily true that we are intelligent enough to understand ourselves? We're effective when we can decompose something hierarchically into simpler concepts... but what if there isn't such a decomposition of intelligence? i.e., what if the simplest decomposition is too complex for us to grasp? Hofstadter asks if a giraffe is intelligent enough to understand itself.)
But I thought he supported his basic thesis, that progress is accelerating, compellingly. Really did a great job (seems to be the result of ongoing criticism, and him finding ways to refute it).
>For example, he doesn't discuss the causes of accelerating returns, doesn't support the causes with data, only the effects.
I agree with this. It seems to be a huge hole in the entire discussion. It's not enough to cite historical data and assert that exponential growth will continue indefinitely. I could speculate a bit about some explanations, but I'm curious if there are any good discussions out there. Does anyone have some recommendations?
I also found it annoying that in all his examples of exponential growth in biological systems, he conveniently left out the part where the populations crash after reaching an environmental limit. I think it's just as likely that technology will send us back into the stone age with nukes or bio-weapons as it is that we merge with AI.
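To make the contrast concrete, here's a minimal sketch (the parameters are illustrative assumptions, not data from the book) of how an exponential curve and a logistic curve look nearly identical early on and then diverge once an environmental limit starts to bind:

```python
import math

# Illustrative assumptions: r = growth rate per step, K = carrying capacity
# (the environmental limit), x0 = starting population.
r, K, x0 = 0.5, 1000.0, 1.0

def exponential(t):
    # Unbounded growth: x(t) = x0 * e^(r*t)
    return x0 * math.exp(r * t)

def logistic(t):
    # Same early behavior, but saturates at K as the limit binds.
    return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

for t in range(0, 31, 5):
    print(f"t={t:2d}  exponential={exponential(t):12.1f}  logistic={logistic(t):8.1f}")
```

Extrapolating from the early part of either curve, you can't tell which regime you're in; that's essentially the objection to treating the historical data alone as proof that the growth continues indefinitely.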
Given Kurzweil's age and stated goals, I'm thinking there is no way he is going to Google unless they are investing in life extension / prevention of death.
Read between the lines: "next decade’s ‘unrealistic’ visions" is likely nothing less than brain-computer interfaces, with the end goal of extending life by storing the entire human mind on a machine. That's certainly not far off from Kurzweil's timelines under the Law of Accelerating Returns. I can understand why the PR does not say this, but it seems clear this is where Kurzweil would want to invest his time.
He's a visionary who can deliver a finished product. I think he must have some pretty specific ideas, and he wants to partner with Google.
A few guesses:
- New interfaces to replace keyboard/mouse/touch. Voice, gesture, face, brainwaves. Sign language with humming, blinking, and pupil pointing. Works with tablets, TVs, wearables, cars, buildings, ATMs, etc.
- SuperPets (r) that can pass the Turing test. And do the shopping.
- Surgically implanted Bluetooth. (It could literally be a tooth!)
- Hover skateboards.
- The Matrix. (Or The Thirteenth Floor, which was a better movie in my not-so-humble opinion.)
I don't think it'll have to do with life-extension though. That's just too crazy far out-there.
> - New interfaces to replace keyboard/mouse/touch. Voice, gesture, face, brainwaves.
Unfortunately, it turns out you can only get a limited number of bits out by looking at brainwaves (EEG). Gesture is much higher bandwidth, and keyboards seem to be the highest.
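For a rough sense of scale, here's a back-of-envelope comparison; the rates below are order-of-magnitude assumptions drawn from commonly cited BCI figures, not measurements from anything in this thread:

```python
# Rough, assumed figures -- order-of-magnitude only.
eeg_bci_bits_per_min = 30      # a decent non-invasive EEG speller, ~0.5 bit/s
typing_wpm = 60                # an ordinary touch typist
chars_per_word = 5
bits_per_char = 1.3            # approximate entropy of English text

eeg_bps = eeg_bci_bits_per_min / 60
keyboard_bps = typing_wpm * chars_per_word * bits_per_char / 60

print(f"EEG BCI:  ~{eeg_bps:.1f} bits/s")
print(f"Keyboard: ~{keyboard_bps:.1f} bits/s")  # roughly an order of magnitude higher
```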
fMRI is extremely high bandwidth and still in its infancy, and we are currently getting pretty good performance from relatively basic invasive neural implants on the disabled. I've read a couple of interesting articles on breakthroughs regarding the miniaturization of fMRI technology, so I think it's safe to say that keyboards will not be the highest bandwidth interface in the decades to come.
I agree that there's useful information in the brain that we can extract. EEG isn't that method. I love fMRI as much as the next guy. fMRI isn't reading "brainwaves". It images a correlate of neuronal oxygen depletion which indicates metabolism and activity.
Well, almost. Mobile interfaces certainly aren't helping. I am not sure how much I would like a vocal way of typing out code, but I suspect I wouldn't.
The problem is, and I don't want to be mean about it, is that Kurzweil is a crackpot and charlatan. This is not to take away from his intelligence or his technical achievements, which are indisputable. However, even Nobel prize winners can be outright crackpots and crazies (Nobel disease).
I don't know exactly what Google's motives are here, I suspect it's something less than actually bringing about some of his, let's say, loftier ideas.
For a long time it was the highest grade you could be hired in at, since Google didn't feel the title "Vice President" meant the same thing at Google as it does at other places. I know VPs they gave offers to who turned down the offer on the basis of having to take the title of Director. At one time you had a limited amount of time post-hire to 'prove yourself' or be managed out of the organization.
I found the hire curious from the standpoint that Kurzweil's tendency to handwave rather than retreat to data has historically been a red flag in the hiring process at Google. This tended to unfairly penalize theorists over experimentalists at Google. One wonders if they've changed.
I remember him giving a tech talk and talking about how many computers you'd need to simulate a brain and how nobody would put that together for years yet, and chuckling knowingly :-).
So he gave a tech-talk at Google, around 2008, and yes he had lots of graphs and such but during the Q&A session he kept retreating into generalized ideas rather than data. I recall the question about how he came up with his numbers for machines to hold a consciousness as one such exchange.
The impression I certainly got was that his approach is to theorize about something, then design experiments to test out his theory. As opposed to running a bunch of experiments and then figuring out a theory that would explain the collected data.
That said, I've got mad respect for his work and have enjoyed his talks and writings. Your comment though suggests you think 'theorist' is a negative in some connotation? Why is that?
> The impression I certainly got was that his approach is to theorize about something, then design experiments to test out his theory. As opposed to running a bunch of experiments and then figuring out a theory that would explain the collected data.
There's nothing wrong with this, as you've written it. (There might be a problem with his implementation.) All else being equal, I trust a theory which has made ten accurate predictions over a theory which merely explains ten previous observations.
If one person develops a theory and makes ten predictions which turn out to be true; and if a second person observes the same ten things, and then develops a theory without knowing the first; then I consider this stronger evidence in favor of the first theory than of the second. (The second might e.g. be more elegant, in which case I might prefer it anyway.)
This is true whatever the observations are. If they're unsurprising, then we already had a good theory, in which case I question the need for the two new ones, but that applies to both equally.
It may be that Kurzweil is falling into the trap of misinterpreting his results to fit his theory, but that can be done just as well when you try to base a theory off existing data. On the other hand, the Texas sharpshooter fallacy can only happen if you collect data before coming up with your theory.
I don't care for Kurzweil but I don't see anything wrong with hypothesizing about something and then designing experiments to test that hypothesis, rather than just being a mindless "data scientist." :) Sounds pretty normal to me.
> His tendency to handwave rather than retreat to data? What the heck are you talking about? Have you seen how many graphs he puts in his presentations?
The ability to present something in graph form does not mean that it isn't handwavy bafflegab rather than data.
One need look no further than his absurd joke of a "paradigm shifts"/"countdown to the singularity" graph.
Signaling is important in the maintenance and direction of culture. Whoever made this decision is making a statement about what sorts of projects they want Google to work on in the future.
Anyone whose thinking is wishful can safely be disregarded, of course.
But this utopian resurrection and transcendance story is just one version of the singularity. There are many people who think AI is not physically impossible, nanotech is not physically impossible, and so recursively self-improving AI with strong abilities to act in the physical world is a possibility. Many of those people think that is a very dangerous possibility.
You can agree or disagree with the detailed arguments, but you cannot accuse these people of allowing wishful thinking to cloud their judgements.
I like MacLeod as a writer, but that slogan is damaging because many people hear it, laugh, and stop thinking.
There are many forms of wishful thinking. Apocalyptic disaster is also one.
You seem nice; forgive me if I get too dismissive here.
As a software practitioner it seems to me obvious that we are so many light years away from the kind of software the Singularity people are talking about that the whole thing is all a fantasy club, and a little embarrassing. It's like the detailed debates 19th century radicals used to have about society after the Revolution.
Also, the Singularity people always seem to do that moving target thing where as soon as you say one thing, they go: But that's not the Singularity, that's a misunderstanding of the Singularity. Leaves me thinking: it must be awfully subtle.
A software practitioner, you say? As a researcher in machine learning, I think you're wrong. I agree that recursively self-improving AI is not around the corner, but I think it could happen in a few decades.
Even if it has a 1% chance of happening in the next century, I would still rather allocate resources to thinking about how to mitigate the risks (and maybe look foolish if it never happens) than leave things to chance and end up regretting it. By which I mean a runaway AI destroying humanity, or other bad outcomes.
As for the moving target -- since a singularity, if it happens, would be very important, it definitely makes sense to talk about lots of different scenarios. Anyone who dismisses a plausible scenario just because that's not what Vernor Vinge said, or what Ray Kurzweil said -- well, they can safely be dismissed.
The singularity (as conceived pre-Kurzweil) is the horizon beyond which we can't make any reasonable predictions about the future. Recursively self-improving AI is merely one potential path to that. But GP is right that the meme that the singularity is ridiculous tends to go along with the meme that intelligence is ineffable and can't be reduced to an algorithm in a computer.
Seems sad. I'd like to see Kurzweil form another startup and get bought by Google, rather than go work for them. I assume he could self-fund something; I don't know how his hedge funds are doing.
But maybe he's been there and done that, and wants mucho resources from day one. Maybe the AI space has grown up and it's hard to start up companies now, you need the resources and big data sets to do anything significant? Or he's just after the free lunches.
Maybe Google has a monopoly on all the smartest people these days, or is just one-stop shopping for support for the biggest ideas. Ray's worth $27 million and, as you said, has many of his own companies; he doesn't need a salary.
In 2008, Ray Kurzweil said on an expert panel at the National Academy of Engineering that solar power would scale up to produce all the energy needs of Earth's people within 20 years.
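The arithmetic behind a prediction like that presumably looks something like the sketch below; the starting share and doubling time are illustrative assumptions on my part, not Kurzweil's published figures:

```python
# Assumptions (illustrative only): solar supplies a tiny fraction of world
# energy today, and deployed capacity doubles every couple of years.
share = 0.001          # assumed starting share of total energy (0.1%)
doubling_years = 2     # assumed doubling time

years = 0
while share < 1.0:
    share *= 2
    years += doubling_years

print(f"~{years} years of doublings to reach 100% (under these assumptions)")
```

Whether the curve actually keeps doubling all the way to 100% is, of course, exactly the kind of question raised upthread about exponentials hitting environmental limits.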
Wow, Google's stock should rise on this news. Many folks may not know Kurzweil keyboards (for music), but they are excellent. I can't wait to see where he leads us next.
I wonder how the blind allocation process will treat him. His domain expertise is AI, but he didn't do any of that at Google, which means it doesn't exist. So is he going to have to spend 18 months maintaining a legacy ad-targeting product while the 26-year-old Staff SWE next to him works on its replacement? How is he going to handle that?
Why would you ever think that he is being blindly allocated to the position of Director of Engineering? In that position, he'll pretty much be in control of his own destiny at Google.
When I was there, Directors were still above the Real Googler Line, but it may have moved up in the time since then. Also, he lives in Massachusetts, which means he'll only get the work that MTV doesn't want (unless he moves). Finally, with his visibility, he's going to get a lot of prank Perf and his manager is going to have a hard time promoting him because of that.
Anyone can write unsolicited reviews for anyone, with an option of it being visible only to the manager (aka graffiti in the executive washroom).
If someone gets pissed off about his transhumanism (especially if he starts talking about the Singularity on eng-misc, or if he has questions related to assigning Real Names to AIs, or if someone just doesn't like Canada and won't forgive him for his work with Our Lady Peace in 2000-1) and decides to "Perf" him, he could be in trouble.
If someone like Ray Kurzweil ends up on a PIP I will call the fabric of the universe broken.
Google is badly managed but they're not going to subject a heavyweight like that to their typical nonsense (blind allocation, manager-as-SPOF) and if they do, I'm sure he'll be just fine.