
Thanks in part to the popularity of his books, movie, and speeches, Kurzweil now knows pretty much every AI researcher on the planet, and we can safely assume he's aware of even very obscure research projects in the field, both inside and outside academia.

Joining Google gives him ready access to data sets of almost unimaginable size, as well as unparalleled infrastructure and skills for handling such large data sets, putting him in an ideal position to connect researchers in academic and corporate settings with the data, infrastructure, and data management skills they need to make their visions a reality.

According to the MIT Technology Review[1], he will be working with Peter Norvig, who is not just Google's Director of Research, but a well-known figure in AI.

--

[1] http://www.technologyreview.com/view/508896/what-google-sees...




I just can't see Kurzweil being in the same league as Peter Norvig. Sure, he did some interesting work a long time ago, before he got weird. I can't see this working out well for Google, unless they just want a famous figurehead.


I'm reposting this comment I made a couple of months ago. He's no John McCarthy, but he was a true pioneer in the commercial applications of AI:

==================================

I don't think that's a very fair assessment of Kurzweil's role in technology.

He was on the ground, getting his hands dirty with the first commercial applications of AI. He made quite a bit of money selling his various companies and technologies, and was awarded the National Medal of Technology by President Clinton.

As I was growing up, there was a series of "Oh wow!" moments I had, associated with computers and the seemingly sci-fi things they were now capable of.

"Oh wow, computers can read printed documents and recognize the characters!"

"Oh wow, computers can read written text aloud!"

"Oh wow, computers can recognize speech!"

"Oh wow, computer synthesizers can sound just like pianos now!"

I didn't realize until much later that Kurzweil was heavily involved with all of those breakthroughs.


He's also an ACM Fellow, from its first class - along with people like Knuth, Cerf, Rivest, Codd, etc.


In addition, I'd rank Minsky, Larry Page, Bill Gates, Dean Kamen, Rafael Reif, Tomaso Poggio, Dileep George, and Kurzweil's other supporters as much more qualified to judge the merits of his ideas than Kurzweil's detractors like Hofstadter, Kevin Kelly, Mitch Kapor, and Gary Marcus. It seems that Hofstadter is the only one of the latter group who is really qualified to render a verdict.

http://howtocreateamind.com/


To put it another way - if a visionary isn't controversial, she's probably not a visionary.


I believe your summary suffers from selection bias.

I think most people have views that are controversial. Only when one is famous do others hear about those controversial views. Furthermore, just about every famous project has its detractors, leading to controversy.

Take Stephen Hawking as an example. He doesn't believe there was a god who created the universe. That's a controversial view to many. But when a non-famous atheist says exactly the same thing, few take notice, so you don't hear about those people.

Take Alan Kay as another example. He's one of the key people behind OLPC, which given its criticism could be considered controversial. But name any big project which has neither criticism nor controversy.

Ole Kirk Christiansen founded Lego. He was visionary in that he saw a future in plastic bricks as toys for children. What were his controversial views? I have no clue. But he probably had some. Perhaps his motto "Only the Best is Good Enough" is controversial to someone who believes that second best may be good enough for some cases.


Controversial views are a necessary but not sufficient condition for visionary status.


I believe that everyone has controversial views. Do you think god exists? Do you think there should be more gun ownership? Or less? Do you think abortion should be banned? Only available in a few cases? Up to the mother to decide? But only until the third trimester?

Should there be public drinking of beer? Public nudity? Public urination? Public displays of affection?

Should women always have their heads covered while in public? What about men? Should we ban male circumcision until the male is old enough to make the decision for himself? Should we have the draft? What about mandatory civil service?

Do you believe in mandatory bussing? Separate but equal? Co-ed schools or sex segregated schools? State income tax or not? Legalized gambling? What if it's only controlled by the state? Should alcohol sales only be done by the state, or can any place sell vodka? What about beer? Should alcohol sales be prohibited within a certain distance of schools? Are exceptions allowed?

Do vaccines cause autism? Was the Earth created less than 10,000 years ago? Can you petition the Lord with prayer? Is the Pope God's representative on Earth, or the anti-Christ? Should non-believers be taxed at a higher rate than believers?

Should I go on? All of these are controversial. If you have views one way or the other, then your views will be controversial at least to some, if not to most. And if you have no views on a topic then that itself can be controversial. As the morbid joke goes, during the height of the Troubles in Ireland: "yes, but are you a Catholic Jew or a Protestant Jew"?

If everyone has controversial views, then of course they are a precondition for being a visionary. They are also a precondition for not being a visionary. Name one famous (so I have a chance of knowing something about that person) non-visionary who did not have controversial views.

But first, name a controversial view of the founder of Lego ... who is definitely described as a visionary.


The degree of controversy is obviously diminished when someone has been successful. But I'll take a stab at the last question...

As for controversy with LEGO's founder: 1. Structuring the company around "doing good" instead of profitability and other more "corporate" values - Google gets flak for this to this day, and LEGO almost went broke following this tenet until it restructured around profitability. 2. The switch from wood to plastics, a departure from the company's original product base - surely that's what paved the way for LEGO, but it was a somewhat controversial switch in some circles, not least among carpenters and some employees. 3. LEGO's many legal battles and use of patents might be construed as controversial in some circles.

More to the point of the OP, it's hard to be a visionary if your view does not in any way, shape, or form deviate from the norm. Deviation from the norm is what sets the visionary apart; hence it is sometimes said that visionaries are controversial, because this deviation from the norm more often than not causes controversy in the areas in which they are deviating.


"Doing good" is a standard crafter/engineer approach, so it's not like that alone is visionary. My Dad's phrase "do your best or don't do it at all." Was he a visionary?

He wanted the whole world to be Christian, and went to Ecuador to work as a missionary. Did that make him and my Mom (and my dad's parents (also missionaries) and various other of my extended family) visionaries? What about all of the Mormons who do their two years of missionary work?

Consider also all the people who were visionary, tried something, and failed. In part, perhaps, because their vision wasn't tenable. You don't hear about all of those visionary chefs who had a new idea for a restaurant, only to find out that it wasn't profitable.

Add all those up, and there are a lot of visionaries in the world. Enough that the non-visionaries are the exception.


Not to be a downer, but text-to-speech, speech recognition, music synthesis, and so forth are all fairly obvious applications of computer science that anyone could have pioneered without being a genius. Likewise, predicting self-driving cars is nothing science fiction has not already done.

I'm sure he is a smart guy, but I think we have put him on a pedestal when he probably is not as remarkable as we want him to be.


I don't mean to pick on you (and I certainly didn't downvote you), but you seem like a posterboy for just how easy it is to take inventions and innovation for granted after the fact.

I find it instructive to occasionally go to Youtube and load up commercials for Windows 95, 3.1, the first Mac, etc., or even to dust off and boot up an old computer I haven't touched for decades. Not to get too pretentious, but it's a bit like Proust writing about memories of his childhood coming flooding back to him just from the smell of a cake he ate as a child.

When you really make a concerted effort to remember just how primitive previous generations of computing were, I think it puts Kurzweil's predictions and accomplishments in a much more impressive context.

This was the state of the art PC back when Ray was forming his first companies: https://www.youtube.com/watch?v=vAhp_LzvSWk

I posted some other thoughts about Ray's track record awhile back:

==========================================

I read his predictions for 2009 (which he wrote in the late 90s) only a couple of years before they were supposed to come about, and many seemed kind of far-fetched - and then all of a sudden the iPhone, iPad, Google self-driving car, Siri, Google Glass, and Watson come out, and he's pretty much batting a thousand.

Some of those predictions were a year or two late, in 2010 or 2011, but do a couple of years really matter in the grand scheme of things?

Predicting that self-driving cars would occur in ten years in the late 90s is pretty extraordinary, especially if you go to youtube and load up a commercial for Windows 98 and get a flashback of how primitive the tech environment actually was back then.

Kurzweil seems to always get technological capabilities right. Where he sometimes falls flat is in technological adoption - how actual consumers are willing to interact with technology, especially where bureaucracies are involved - see his predictions on the adoption of elearning in the classroom, or using speech recognition as an interface in an office environment.

Even if a few of his more outlandish predictions like immortality are a few decades - or even generations - off, I think the road map of technological progress he outlines seems pretty inevitable, yet still awe inspiring.


"Predicting that self-driving cars would occur in ten years in the late 90s is pretty extraordinary"

There have been predictions of self-driving cars for more than half a century. It's in Disney's "Magic Highway" from 1958, for example. There was an episode of Nova from the 1980s showing CMU's work in making a self-driving van.

Researching now, Wikipedia claims: "In 1995, Dickmanns' re-engineered autonomous S-Class Mercedes-Benz took a 1600 km trip from Munich in Bavaria to Copenhagen in Denmark and back, using saccadic computer vision and transputers to react in real time. The robot achieved speeds exceeding 175 km/h on the German Autobahn, with a mean time between human interventions of 9 km, or 95% autonomous driving. Again it drove in traffic, executing manoeuvres to pass other cars. Despite being a research system without emphasis on long distance reliability, it drove up to 158 km without human intervention."

You'll note that 1995 is before "the late 90s." It's not much of a jump to think that a working research system of 1995 could be turned into something production ready within 20 years. And you say "a year or two late", but how have you decided that something passes the test?

For example, Google Glass is the continuation of decades of research in augmented reality displays going back to the 1960s. I read about some of the research in the 1993 Communications of the ACM "Special issue on computer augmented environments."

Gibson said "The future is already here — it's just not very evenly distributed." I look at your statement of batting a thousand and can't help but wonder if that's because Kurzweil was batting a thousand when the book was written. It's no special trick to say that neat research projects of now will be commercial products in a decade or two.

Here's the list of 15 predictions for 2009 from "The Age of Spiritual Machines (1999)", copied from Wikipedia and with my commentary:

* Most books will be read on screens rather than paper -- still hasn't happened. In terms of published books, a Sept. 2012 article says "The overall growth of 89.1 per cent in digital sales went from £77m to £145m, while physical book sales fell from £985m to £982m - and 3.8 per cent by volume from £260m to £251m." I'm using sales as a proxy for reads, and while e-books are generally cheaper than physical ones, there's a huge number of physical used books, and library books, which aren't on this list.

* Most text will be created using speech recognition technology. -- entirely wrong (there goes your 'batting a thousand')

* Intelligent roads and driverless cars will be in use, mostly on highways. -- See above. This is little more common now than it was when the prediction was made.

* People use personal computers the size of rings, pins, credit cards and books. -- The "ring" must surely be an allusion to the JavaRing, which Jakob Nielsen had, and talked about, in 1998, so in that respect, these already existed when Kurzweil made the prediction. Tandy sold pocket computers during the 1980s. These were calculator-sized portable computers smaller than a book, and they even ran BASIC. So this prediction was true when it was made.

* Personal worn computers provide monitoring of body functions, automated identity and directions for navigation. -- Again, this was true when it was made. The JavaRing would do automated identity. The Benefon Esc! was the first "mobile phone and GPS navigator integrated in one product", and it came out in late 1999.

* Cables are disappearing. Computer peripheries use wireless communication. -- I'm mixed about this. I look around and see several USB cables and power chargers. Few wire their house for ethernet these days, but some do for gigabit. Wi-fi is a great thing, but the term Wi-Fi was "first used commercially in August 1999", so it's not like it was an amazing prediction. There are bluetooth mice and other peripherals, but there were also infra-red versions of the same a decade previous.

* People can talk to their computer to give commands. -- You mention Siri, but Macs have had built-in speech control since the 1990s, with PlainTalk. Looking now, it was first added in 1993, and is on every OS X installation. So this capability already existed when the prediction was made. That's to say nothing of assistive technologies like Dragon, which already supported dictated text and spoken commands in the 1990s.

* Computer displays built into eyeglasses for augmented reality are used. -- "are used" is such a wishy-washy term. Steve Mann has been using wearable computers (the EyeTap) since at least 1981. Originally it was quite large. By the late 1990s it was eyeglasses and a small device on the belt. It's no surprise that in 10 years there would be at least one person - Steve Mann - using a system where the computer was built into the eyeglasses. Which he does. A better prediction would have been "are used by over 100,000 people."

* Computers can recognize their owner's face from a picture or video. -- What's this supposed to mean? There was computer facial recognition already when the prediction was made.

* Three-dimensional chips are commonly used. -- No. Well, perhaps, depending on your definition of "3D." Says Wikipedia, "The semiconductor industry is pursuing this promising technology in many different forms, but it is not yet widely used; consequently, the definition is still somewhat fluid."

* Sound producing speakers are being replaced with very small chip-based devices that can place high resolution sound anywhere in three-dimensional space. -- No.

* A 1000 dollar pc can perform about a trillion calculations per second. -- This happened. This is also an extension based on Moore's law and so in some sense predicted a decade previous. (A rough back-of-the-envelope sketch follows this list.) PS3s came out in 2006 with a peak performance estimated at 2 teraflops, giving the hardware industry several years of buffer to achieve Kurzweil's goal.

* There is increasing interest in massively parallel neural nets, genetic algorithms and other forms of "chaotic" or complexity theory computing. -- Meh? The late 1990s and early 2000s were a heyday for that field. Now it's quieted down. I know 'complexity'-based companies in town that went bust after the dot-com collapse cut out their funding.

* Research has been initiated on reverse engineering the brain through both destructive and non-invasive scans. -- Was already being done long before then, so I don't know what "initiated" means.

* Autonomous nanoengineered machines have been demonstrated and include their own computational controls. -- Ah-ha-ha-ha! Yes, Drexler's dream of a nanotech world. Hasn't happened. Still a long way from happening.
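
As flagged above, here's that back-of-the-envelope sketch for the trillion-calculations-per-second item, in Python. Both the 1999 baseline and the doubling periods are assumptions I'm plugging in for illustration, not measured figures:

  # Extrapolate performance-per-dollar from an assumed late-1990s baseline.
  # The ~2 GFLOPS baseline and the doubling periods are illustrative
  # assumptions, not measured data.
  def extrapolate(flops_1999, doubling_months, years=10):
      return flops_1999 * 2 ** (years * 12 / doubling_months)

  baseline = 2e9  # assumed ~2 GFLOPS for a $1000 desktop circa 1999
  for months in (12, 18, 24):
      print(months, "%.2e" % extrapolate(baseline, months))

  # 12-month doubling clears 1e12 (a teraflop) by 2009; 18- or 24-month
  # doubling falls short unless GPUs or consoles are counted.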

So several of these outright did not happen. Many of the rest were already true when they were made, so weren't really predictions. How do you draw the conclusion that these are impressive for their insight into what the future would bring?



First of all, I would strongly encourage anyone who is interested to check out Kurzweil's 2009 predictions in his 1999 book Age of Spiritual Machines, rather than this Wikipedia synopsis. It puts his predictions in a much more accurate context. You can view much of it here: http://books.google.com/books?id=ldAGcyh0bkUC&pg=PA789&#...

Kurzweil also does a reasonably unbiased job of grading his own predictions here: http://www.kurzweilai.net/images/How-My-Predictions-Are-Fari...

Quite a few of your statements relate to technological adoption vs. technological capability, such as everyday use of speech recognition and ebooks. I clearly stated that Kurzweil is not perfect at predicting what technologies will catch on with consumers and organizations, nor is anyone for that matter. To me, and to most of the people reading this, the most interesting aspect of Kurzweil's predictions is always what technological capabilities will be possible, rather than the rate of technological adoption.

Some of your other statements conflate science fiction with what Kurzweil does: "There have been predictions of self-driving cars for more than half a century. It's in Disney's 'Magic Highway' from 1958, for example." Similarly, most of your other points attempt to make the case that because nascent research projects existed, all of his predictions should have been readily apparent. I'm sorry, but this is pretty much the same hindsight bias displayed by gavanwoolery and Kurzweil's worst critics. Basically, Kurzweil's predictions strike you as absurd, right up until they become blindingly obvious.

You can point to obscure German R&D projects all you want (and who knows how advanced that prototype was, or how controlled the tests were), but I was blown away by the Google self-driving car, as were most of the people on HN based on the enthusiasm it received here. I thought it would take at least a decade or so before people took it for granted, but you've set a Wow-to-Meh record in under a year.

Once again, I strongly encourage you to fire up Youtube or dust off an old computer, and really try and remember exactly what the tech environment was really like in previous decades for the average consumer. Zip drives, massive boot times, 5 1/4 floppy disks, EGA, 20mb external hard drives the size of a shoe box, 30 minute downloads for a single mp3 file, $2,000 brick phones, jpgs loading up one line of pixels a second, etc.

To be clear, I'm not a Kurzweil fanboy. He's not some omniscient oracle, bringing down the future on stone tablets from the mount. What he is is a meticulous, thoughtful, and voracious researcher of technological R&D and trends, and a reasonably competent communicator of his findings. I'm very familiar with the track records of others who try and pull off a similar feat, and he's not perfect, but he's far and away the best barometer out there for the macro trends of the tech industry. If his findings were so obvious, why is everyone else so miserable at it? Furthermore, his 1999 book was greeted with the same skepticism and incredulity that all of his later books were.

For some of your other points, I've included links below:

Research has been initiated on reverse engineering the brain - Kurzweil was clearly talking about an undertaking like the Blue Brain project. Henry Markram, the head of the project, is predicting that around 2020 they will have reverse engineered and simulated the human brain down to the molecular level:

http://www.ted.com/talks/henry_markram_supercomputing_the_br...

http://en.wikipedia.org/wiki/Blue_Brain_Project

"A 1000 dollar pc can perform about a trillion calculations per second. -- This happened. This is also an extension based on Moore's law and so in some sense predicted a decade previous." - Pretty much all of Kurzweil's predictions boil down to Moore's law, which he would be the first to admit. I'm not sure what you're trying to say.

Autonomous nanoengineered machines have been demonstrated and include their own computational controls. - If you read his prediction in context, he's clearly talking about very primitive and experimental efforts in the lab, which we are certainly closing in on:

http://www.kurzweilai.net/automated-drug-design-using-synthe...

http://en.wikipedia.org/wiki/Nadrian_Seeman - If you've been following any of Nadrian Seeman's work on nanobots constructed with DNA, Kurzweil's predictions seem pretty close

http://wyss.harvard.edu/viewpressrelease/101/researchers-cre...

http://www.kurzweilai.net/a-step-toward-creating-a-bio-robot...

http://www.aalto.fi/en/current//news/view/2012-10-18/

Three-dimensional chips are commonly used. - I guess you could quibble over them being a few years late:

http://www.bbc.co.uk/news/technology-17785464

http://www.pcmag.com/article2/0,2817,2384897,00.asp


Okay, I looked at some of the "reasonably unbiased job of grading his own predictions." I'll pick one, for lack of interest in expanding upon everything.

He writes: “Personal computers are available in a wide range of sizes and shapes, and are commonly embedded in clothing and jewelry,” adding that when he wrote this prediction in the 1990s, portable computers were large heavy devices carried under your arm.

But I gave two specific counter-examples. The JavaRing from 1998 is a personal computer in a ring, and the TRS-80 pocket computer is a book-sized personal computer from the 1980s, which included BASIC. So the first part, "are available", was true already in 1999. Because "are available" can mean anything from a handful to being in everyone's hand.

Kurzweil then redefines or expands what "personal computer" means, so that modern iPods and smart phones are included. Except that with that widened definition, the cell phone and the beeper are two personal computers which many already had in 1999, yet were not "large heavy devices carried under your arm", and which some used as fashion statements. I considered then rejected the argument that a cell phone which isn't a smart phone doesn't count as a personal computer, because he says that computers in hearing aids and health monitors woven into undergarments are also personal computers, so I see no logic for excluding 1990s-era non-smart phones which are more powerful and capable than a modern hearing aid.

There were something like 750 million cell phone subscribers in the world in 2000, each corresponding to a "personal computer" by this expanded definition of personal computer. By this expanded definition, the 100 million Nintendo Game Boys sold in the 1990s are also personal computers, and the Tamagotchi and other virtual pets of the same era are not only personal computers, but also used as jewelry similar to how some might use an iPod now.

He can't have it both ways. Either a cell phone (and Game Boy and Tamagotchi) from 1999 is a personal computer or a hearing aid from now is not. And if the cell phone, Game Boy, etc. count as personal computers, then they were already "common" by 1999.

Of course, what does "common" mean? In the 1999 Python conference presentation which included the phrase "batteries included", http://www.cl.cam.ac.uk/~fms27/ipc7/ipc7-slides.pdf , the presenter points out that a "regular human being" carries a cell phone "spontaneously." I bought my own cell phone by 1999, and I was about the middle of the pack. That's common. (Compare that to the Wikipedia article on "Three-dimensional integrated circuit" which comments "it is not yet widely used." Just what does 'common' mean?)

Ha-ha! And those slides show that I had forgotten about the whole computer "smartwatch" field, including a programmable Z-80 computer in the 1980s and a smartwatch/cellphone "watch phone" by 1999!

I therefore conclude, without a doubt, that the reason why the prediction that "Personal computers are available in a wide range of sizes and shapes, ... " was true by 2009 was because it was already true in 1999.

As regards the "obscure German R&D project", that's not my point. A Nova watching geek of the 1980s would have seen the episode about the autonomous car project at CMU. And Kurzweil himself says that the prediction was wrong because he predicted 10 years and when he should have said 20 years. But my comment was responding to the enthusiasm of rpm4321 who wrote "Predicting that self-driving cars would occur in ten years in the late 90s is pretty extraordinary, especially if you go to youtube and load up a commercial for Windows 98 and get a flashback of how primitive the tech environment actually was back then."

I don't understand that enthusiasm when 1) the prediction is acknowledged as being wrong, 2) autonomous cars already existed using the 'primitive tech environment' of the 1990s, and 3) the general prediction that it would happen, and was more than science fiction, was widely accepted, at least among those who followed the popular science press.

"I strongly encourage you to fire up Youtube or dust off an old computer, and really try and remember exactly what the tech environment was really like in previous decades for the average consumer"

I started working with computers in 1983. I have rather vivid memories still of using 1200 baud modems, TV screens as monitors, and cassette tapes for data storage. I even wrote code using printer-based teletypes and not glass ttys. My complaint here is that comments like "primitive" denigrate the excellent research which was already done by 1999, and the effusive admiration for the predictions of 1999 diminish the extent to which those predictions were already true when they were made.


Man, those are actually some pretty tame predictions, especially if most of them were already in some form of production. I guess the future ain't all it's cracked up to be.


>Kurzweil seems to always get technological capabilities right. Where he sometimes falls flat is in technological adoption - how actual consumers are willing to interact with technology, especially where bureaucracies are involved- see his predictions on the adoption of elearning in the classroom, or using speech recognition as an interface in an office environment.

This is a problem common to other AI pioneers, including Norvig.


TLDR, he exhibits hindsight bias


That's the DR bit.

The TL-Did-actually-read-and-am-summarising-this-for-people-who-won't is exactly the opposite of what you've just stated.

He exhibits foresight bias.


I'm not sure whether you two are referring to the same 'he'.


Text to speech is easy, right? You just get a bunch of white noise and squirt it out a speaker with a bit of envelope shaping. Short rapid burst is a 'tuh'. Or maybe a 'kuh'. Or a 'puh' or 'duh' or 'buh'. More gentle with a bit of sustain is a 'luh' or 'muh' sound. I had a speech synth on a CP/M computer that did this. You might understand what was being said, if you knew what was being said.
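
For anyone curious, here's a minimal sketch of that kind of crude synthesis - envelope-shaped white noise written out as a WAV. The burst lengths and filenames are my own illustrative choices, not any particular synthesizer's algorithm:

  # Crude "phoneme" synthesis: shape white noise with an amplitude envelope.
  import wave
  import struct
  import random

  RATE = 8000  # samples per second, typical of early hardware

  def burst(duration_s, attack_s=0.005):
      # White noise with a sharp attack and a linear decay. A short, rapid
      # burst sounds vaguely like 'tuh'/'kuh'; a gentler, sustained one is
      # closer to 'luh'/'muh'.
      n = int(RATE * duration_s)
      attack = max(1, int(RATE * attack_s))
      samples = []
      for i in range(n):
          env = i / attack if i < attack else 1.0 - (i - attack) / (n - attack)
          samples.append(env * (random.random() * 2 - 1))
      return samples

  def write_wav(path, samples):
      with wave.open(path, "w") as w:
          w.setnchannels(1)
          w.setsampwidth(2)
          w.setframerate(RATE)
          w.writeframes(b"".join(
              struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
              for s in samples))

  write_wav("tuh.wav", burst(0.04))                 # short rapid burst
  write_wav("muh.wav", burst(0.25, attack_s=0.05))  # gentler, with sustain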

People had lists of phonemes and improved those.

Then people experimented with different waveforms.

Here's a collection of different voices. (Poor quality sound, unfortunately.) (http://www.youtube.com/watch?v=aFQOYBNAMHg)

Why did all those people take so long to make the jump to diphones, to smoothing out the joins between individual phonemes?

You had the Japanese with their '5th generation' research who were physically modelling the human mouth, tongue, and larynx, and blowing air through it. (You don't hear much about the Japanese 5th generation stuff nowadays. I'd be interested if there's a list of things that come from that research anywhere.)

Saying "talking computers" is easy; doing it is tricky.

EDIT: (http://www.japan-101.com/business/fifth_generation_computer....)

> By any measure the project was an abject failure. At the end of the ten year period they had burned through over 50 billion yen and the program was terminated without having met its goals. The workstations had no appeal in a market where single-CPU systems could outrun them, the software systems never worked, and the entire concept was then made obsolete by the internet.


This has been posted before, and it goes a long way toward explaining why your reaction may not be the most appropriate one:

http://lesswrong.com/lw/im/hindsight_devalues_science/


I'm probably preaching to the wrong crowd here, but I'm not talking in hindsight bias. I mean, even in the early days others were making similar predictions - not all of them were as vocal, though. Also, I'm pretty sure he did not single-handedly invent all the listed things before anyone else had even thought of them -- as is the case with any invention, you probably have a few thousand people thinking about the idea or researching it before one person steps forward with a good implementation -- and I'm sure Kurzweil found inspiration in his colleagues' work, and there were probably earlier implementations of his ideas.

This does not mean he was not smart; I am simply stating a general truth: there are few "original" inventions, and many "obvious" inventions. If you do not think these things were obvious, how long do you think it would have taken for the next implementation to appear? I would bet 1-3 years at most. No single human is that extraordinary -- some just work harder than others at becoming visible.


>Not to be a downer, but text-to-speech, speech recognition, music synthesis, and so forth are all fairly obvious applications of computer science that anyone could have pioneered without being a genius. Likewise, predicting self-driving cars is nothing science fiction has not already done.

What does "predicting" a thing has with actually IMPLEMENTING it? Here, I predict "1000 days runtime per charge laptop batteries". Should I get a patent for this "prediction"?

No, text-to-speech, speech recognition and synthesis are not "fairly obvious applications of computer science that anyone could have pioneered". And even if it was so, to be involved in the pioneering of ALL three takes some kind of genius.

Not only that, but all three fields are quite open today, and far from complete. Speech recognition in particular is extremely limited even today.

Plus, you'd be surprised how many "anyones" - scientists included - failed to pioneer such (or even more) "obvious" applications. Heck, the Incas didn't even have wheels.

(That said, I don't consider Kurzweil's current ideas re: Singularity and "immortality" impressive. He sounds more like the archetypal rich guy (from the Pharaohs to Howard Hughes) trying to cheat death (which is a valid pursuit, I guess) than a scientist).


Here's Norvig himself on Kurzweil:

“Ray’s contributions to science and technology, through research in character and speech recognition and machine learning, have led to technological achievements that have had an enormous impact on society – such as the Kurzweil Reading Machine, used by Stevie Wonder and others to have print read aloud. We appreciate his ambitious, long-term thinking, and we think his approach to problem-solving will be incredibly valuable to projects we’re working on at Google.”


I'm more reluctant to trash Kurzweil, but this image-driven hiring policy is taking on the look of some sort of bizarre Victorian menagerie where they keep old, famous computer scientists in wrought-iron cages for Googlers' amusement. It's like the Henry Ford museum, but they're collecting people. That's more than a little weird.


Much as Microsoft did in the 1990s.

And AT&T in the 1960s.


It could be argued that such things happen in academia too.

But this is not the same: famed people established themselves at Microsoft and AT&T. There wasn't this sort of hiring of celebrity. Someone of Kurzweil's stature could be doing his own research and simply be hired on as a board member.

There are plenty of widely known people at Google, but seemingly in spite of Google rather than because of it. Maybe that's a consequence of 20% time too. But if people's reputations are staked on things other than the company, the company seems to borrow more reputation than it makes.


>Sure, he did some interesting work a long time ago, before he got weird.

He was weird then too. That's why he did such interesting work. His work, combined with a lack of fame at that time, just kept the weird from showing through.

I suspect that genius is made up almost, but not quite, entirely of crazy.


Rather, genius is the subset of crazy recognized by society as useful. The moment of recognition is the point at which it becomes venerated instead of derided.


A lot of visionaries are misunderstood because they can "see" things that others can't at the time; it doesn't make sense to those others, which is why they call the visionaries "weird" or "crazy". From that point of view, Richard Stallman was also a visionary, even if it took 2-3 decades before his visions of states or companies putting backdoors in your proprietary software came true.


Manager to programmers: "Hey, that weird free software guy thinks we put "backdoors" in our software, whatever that means. Maybe he's right and our competitors do this already!! ...Quick, code a backdoor into our system too, we don't want to fall behind!"


Yes, that's probably exactly what the Skype managers said. "Let's do it first before our Chinese competition does it".


> I just can't see Kurzweil being in the same league as Peter Norvig.

The problem with Peter Norvig is that he comes from a mathematical background and is a strong defender of the use of statistical models that have no biological basis.[1] While they have their use in specific areas, they will never lead us to a general-purpose strong AI.

Lately Kurzweil has come around to the view that symbolic and Bayesian networks have been holding AI back for the past 50 years. He is now a proponent of using biologically inspired methods similar to Jeff Hawkins' approach of Hierarchical Temporal Memory.

Hopefully, he'll bring some fresh ideas to Google. This will be especially useful in areas like voice recognition and translation. For example, just last week, I needed to translate "I need to meet up" to Chinese. Google translates it to 我需要满足, meaning "I need to satisfy". This is where statistical translations fail, because statistics and probabilities will never teach machines to "understand" language.
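
To make the failure mode concrete, here's a toy sketch of how a purely phrase-based statistical system chooses a translation: multiply corpus-derived phrase probabilities and keep the highest-scoring candidate, with no model of what the speaker actually means. The phrase table and probabilities below are invented for illustration, not Google's actual numbers:

  # Toy phrase-based translation: pick the most probable target phrase.
  # The phrase table and probabilities are invented for illustration.
  phrase_table = {
      "i need to": [("我需要", 0.9)],
      "meet up": [("满足", 0.6),   # "satisfy / meet (a requirement)" - more frequent in text
                  ("见面", 0.4)],  # "meet (a person)"
  }

  def translate(source_phrases):
      output, score = [], 1.0
      for phrase in source_phrases:
          target, p = max(phrase_table[phrase], key=lambda t: t[1])
          output.append(target)
          score *= p
      return "".join(output), score

  print(translate(["i need to", "meet up"]))
  # ('我需要满足', 0.54) - without context, the wrong sense wins

With no surrounding context, the more frequent "satisfy a requirement" sense outscores the "meet a person" sense, which is exactly the 我需要满足 result above.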

[1] http://www.tor.com/blogs/2011/06/norvig-vs-chomsky-and-the-f...


For several hundred years, inventors tried to learn to fly by creating contraptions that flapped their wings, often with feathers included. It was only when they figured out that wings don't have to flap and don't need feathers that they actually got off the ground.

It's still flight, even if it's not done like a bird. Just because nature does it one way doesn't mean it's the only way.

(On a side note, multilayer perceptrons aren't all that different from how neurons work - hence the term "artificial neural network". But they also bridge to a pure mathematical/statistical background. The divide between them is not clear-cut; the whole point of mathematics is to model the world.)
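
For what it's worth, the analogy is easy to see in code: each "artificial neuron" is just a weighted sum of its inputs pushed through a nonlinearity. A minimal sketch, where the weights and inputs are arbitrary illustrative values rather than a trained model:

  # One tiny multilayer perceptron forward pass with NumPy.
  import numpy as np

  def layer(x, W, b):
      # Each output unit: weighted sum of inputs plus bias, then a sigmoid.
      return 1.0 / (1.0 + np.exp(-(W @ x + b)))

  rng = np.random.default_rng(0)
  x = np.array([0.2, 0.8, -0.5])                        # "input activations"
  W1, b1 = rng.normal(size=(4, 3)) * 0.5, np.zeros(4)   # hidden layer
  W2, b2 = rng.normal(size=(2, 4)) * 0.5, np.zeros(2)   # output layer

  hidden = layer(x, W1, b1)
  output = layer(hidden, W2, b2)
  print(output)  # two output "neuron" activations, each in (0, 1)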


> aren't all that different from how neurons work

Nobody knows how neurons actually work: http://www.newyorker.com/online/blogs/newsdesk/2012/11/ibm-b.... We are missing vital pieces of information to understand that. Show me your accurate C. elegans simulation and I will start to believe you have something.

Perhaps in a hundred years, this is the argument: for several hundred years, inventors tried to learn to build an AI by creating artificial contraptions, ignoring how biology worked, inspired by an historically fallacious anecdote about how inventors only tried to learn to fly by building contraptions with flapping wings. It was only when they figured out that evolution, massively parallel mutation and selection, is actually necessary that they managed to build an AI.


> Show me your accurate C. elegans simulation and I will start to believe you have something.

http://openworm.org/

If you think they are insufficiently accurate, submit a pull request.


> For several hundred years, inventors tried to learn to fly by creating contraptions that flapped their wings...

To quote Jeff Hawkins: "This kind of ends-justify-the-means interpretation of functionalism leads AI researchers astray. As Searle showed with the Chinese Room, behavioral equivalence is not enough. Since intelligence is an internal property of a brain, we have to look inside the brain to understand what intelligence is. In our investigations of the brain, and especially the neocortex, we will need to be careful in figuring out which details are just superfluous "frozen accidents" of our evolutionary past; undoubtedly, many Rube Goldberg–style processes are mixed in with the important features. But as we'll soon see, there is an underlying elegance of great power, one that surpasses our best computers, waiting to be extracted from these neural circuits.

...

For half a century we've been bringing the full force of our species' considerable cleverness to trying to program intelligence into computers. In the process we've come up with word processors, databases, video games, the Internet, mobile phones, and convincing computer-animated dinosaurs. But intelligent machines still aren't anywhere in the picture. To succeed, we will need to crib heavily from nature's engine of intelligence, the neocortex. We have to extract intelligence from within the brain. No other road will get us there."

As someone with a strong background in Biology who took several AI classes at an Ivy League school, I found all of my CS professors had a disdain for anything to do with biology. The influence of these esteemed professors and the institution they perpetuate is what's been holding the field back. It's time people recognize it.


> As Searle showed with the Chinese Room, behavioral equivalence is not enough.

The Chinese Room experiment doesn't show only that. It also shows how important the inter-relationship between the component parts of a system is.

We're reducing the Chinese Room to its parts - the man inside and the objects he is using, such as a lookup table. But what we're missing is the complex pattern between the answers, the structure and mutual integration that exists in their web of relations.

If we could reduce a system to its parts, our brains would be just a bag of neurons, not a complex network. We'd reach the conclusion that brains can't possibly have consciousness, on the grounds that there is no "consciousness neuron" to be found in there. But consciousness emerges from the inter-relations of neurons, and the Chinese Room can understand Chinese on account of its complex inner structure, which models the complexity of the language itself.


I'll bite. Tell us, concretely, what is to be gained from a biological approach.

Honestly I imagine we'd find more out from philosophers helping to spec out what a sentient mind actually is than we would from having biologists trying to explain imperfect implementations of the mechanisms of thought.


>Tell us, concretely, what is to be gained from a biological approach.

Here it is from the horse's mouth: http://youtu.be/15sh05wrQ6Y#t=16m34s


I'm short on time, so please forgive my rushed answer.

It will deliver on all of the failed promises of past AI techniques: creative machines that actually understand language and the world around them. The "hard" AI problems of vision and commonsense reasoning will become "easy". You won't need to program into a computer the logic that all people have hands or that eyes and noses are on faces. They will gain these experiences as they learn about our world, just like their biological equivalents: children.

Here's some more food for thought from Jeff Hawkins:

"John Searle, an influential philosophy professor at the University of California at Berkeley, was at that time saying that computers were not, and could not be, intelligent. To prove it, in 1980 he came up with a thought experiment called the Chinese Room. It goes like this:

Suppose you have a room with a slot in one wall, and inside is an English-speaking person sitting at a desk. He has a big book of instructions and all the pencils and scratch paper he could ever need. Flipping through the book, he sees that the instructions, written in English, dictate ways to manipulate, sort, and compare Chinese characters. Mind you, the directions say nothing about the meanings of the Chinese characters; they only deal with how the characters are to be copied, erased, reordered, transcribed, and so forth.

Someone outside the room slips a piece of paper through the slot. On it is written a story and questions about the story, all in Chinese. The man inside doesn't speak or read a word of Chinese, but he picks up the paper and goes to work with the rulebook. He toils and toils, rotely following the instructions in the book. At times the instructions tell him to write characters on scrap paper, and at other times to move and erase characters. Applying rule after rule, writing and erasing characters, the man works until the book's instructions tell him he is done. When he is finished at last he has written a new page of characters, which unbeknownst to him are the answers to the questions. The book tells him to pass his paper back through the slot. He does it, and wonders what this whole tedious exercise has been about.

Outside, a Chinese speaker reads the page. The answers are all correct, she notes— even insightful. If she is asked whether those answers came from an intelligent mind that had understood the story, she will definitely say yes. But can she be right? Who understood the story? It wasn't the fellow inside, certainly; he is ignorant of Chinese and has no idea what the story was about. It wasn't the book, which is just, well, a book, sitting inertly on the writing desk amid piles of paper. So where did the understanding occur? Searle's answer is that no understanding did occur; it was just a bunch of mindless page flipping and pencil scratching. And now the bait-and-switch: the Chinese Room is exactly analogous to a digital computer. The person is the CPU, mindlessly executing instructions, the book is the software program feeding instructions to the CPU, and the scratch paper is the memory. Thus, no matter how cleverly a computer is designed to simulate intelligence by producing the same behavior as a human, it has no understanding and it is not intelligent. (Searle made it clear he didn't know what intelligence is; he was only saying that whatever it is, computers don't have it.)

This argument created a huge row among philosophers and AI pundits. It spawned hundreds of articles, plus more than a little vitriol and bad blood. AI defenders came up with dozens of counterarguments to Searle, such as claiming that although none of the room's component parts understood Chinese, the entire room as a whole did, or that the person in the room really did understand Chinese, but just didn't know it. As for me, I think Searle had it right. When I thought through the Chinese Room argument and when I thought about how computers worked, I didn't see understanding happening anywhere. I was convinced we needed to understand what "understanding" is, a way to define it that would make it clear when a system was intelligent and when it wasn't, when it understands Chinese and when it doesn't. Its behavior doesn't tell us this.

A human doesn't need to "do" anything to understand a story. I can read a story quietly, and although I have no overt behavior my understanding and comprehension are clear, at least to me. You, on the other hand, cannot tell from my quiet behavior whether I understand the story or not, or even if I know the language the story is written in. You might later ask me questions to see if I did, but my understanding occurred when I read the story, not just when I answer your questions. A thesis of this book is that understanding cannot be measured by external behavior; as we'll see in the coming chapters, it is instead an internal metric of how the brain remembers things and uses its memories to make predictions. The Chinese Room, Deep Blue, and most computer programs don't have anything akin to this. They don't understand what they are doing. The only way we can judge whether a computer is intelligent is by its output, or behavior."


First, I don't feel this answers angersock's question concerning concrete applications of cognitive neuroscience to artificial intelligence.

Second, despite running into it time and again over the years, Searle's Chinese room argument still does not much impress me. It seems clear to me that the setup just hides the difficulty and complexity of understanding in the magical lookup table of the book. Since you've probably encountered this sort of response, as well as the analogy from the Chinese room back to the human brain itself, I'm curious what you find useful and compelling in Searle's argument.

I remain interested in biological approaches to cognition and the potential for insights from brain modelling, but I don't see how it's useful to disparage mathematical and statistical approaches, especially without concrete feats to back up the criticism.


Yohui, I'm on an iPhone but will do my best.

Traditional AI has had half a century of failed promises. Jeff's Numenta had a major shakeup over this very topic and has only been working with biologically inspired AI for the past 3 years. Kurzweil also has only recently come around. Comparing Grok to Watson is like putting a yellow belt up against Bruce Lee. Give it some time to catch up.

In university I witnessed firsthand the institutional discrimination against biological neural nets. My original point was that Google could use the fresh blood and ideas.


You took the wrong lesson from the Chinese Room. Behavioral equivalence is enough, and the Chinese Room shows that behavioral equivalence isn't possible to achieve through hypothetical trivial implementations like "a room full of books with all the Chinese-English translations".


" the use of statistical models that have no biological basis."

this is irrelevant

This is like saying a computer using an x86 processor is different, from the point of view of the user, from an ARM computer, beyond differences in software.

Or like saying DNA is needed for "data storage" in biological systems and not another technology

Sure, you can get inspiration from biology, but that doesn't necessarily mean you have to copy it.

""I need to meet up" to Chinese. Google translates it to 我需要满足, meaning "I need to satisfy". This is where statistical translations fail, "

It's not really a fault of statistical translation (more likely a data-quality issue), even though it has its limitations. Besides, Google's translation has been successful precisely because it's better than other existing methods (and Google has the resources, in both people and data, to make it better).


I think that the Google translator did pretty well on that fragment.

Garbage in, garbage out! If you use 'I' in a sentence fragment when you mean to use 'We' then you can't really blame the translator for getting it wrong.

'We need to meet up' is a sentence with a completely different meaning from the incorrect and semantically confusing 'I need to meet up', it really does sound as if you need to meet up to some expectation.


In further defense of Google, "I need to meet up with him" translates as 我需要与他见面.

If someone wants to attack Google's Chinese translation, it should be over snippets like 8十多万 or its failure to recognize many personal and place names which could easily be handled by a pre-processor. Google has never been competent in China in part because of their hiring decisions, but this isn't Franz Och's fault.


Obviously Google Translate is not error free, nor is any statistical translation system going to be comparable to a human translator in the very near future, but you're underestimating the current state of statistical translation. Granted, I'm not a native speaker, but I think "I need to meet up" is not even a sentence with proper grammar. The underlying model probably predicted something like meeting (satisfying) requirements due to the lack of an object in the sentence and context.

Situations like this, where the input is very short and noisy, are obviously going to be a weakness of statistical systems for a long time to come. But looking at how far we are, technologically, from mastering biological systems, I think it's safe to say this is going to be the way of doing it for a while, and it will be very successful in translating properly structured texts if proper context can be provided.

Currently, statistical translations have (almost) no awareness of context beyond some phrase-based or hierarchical models. Many people are probably not factoring in the fact that, with exponentially more data and exponentially higher computing power, a model could utilize the context of a whole book while translating just a sentence from that book - which is actually still much less than what human translators utilize in terms of context. While translating a sentence, I might even have to draw on what was on the news the night before to infer the correct context. We are currently far from feeding this kind of information to our models, so I'd say this kind of criticism of statistical translation is very unfair.


"We need to meet up" also translates incorrectly "我们需要满足". In fact, I did not originally use a fragment, I wrote a full sentence that Google repeatedly incorrectly translated. I only used a fragment here to simply my example.

To avoid the wrath of the Google fanboys, a better example would have been the pinnacle of statistical AI: the category was "U.S. Cities" and the clue was "Its largest airport is named for a World War II hero; its second largest for a World War II battle." The human competitors Ken Jennings and Brad Rutter both answered correctly with "Chicago", but IBM's supercomputer Watson said "Toronto."

Once again, Watson, a probability-based system, failed where real intelligence would not.

Google has done an amazing job with their machine translation, considering they cling to these outdated statistical methods. And just as the speech recognition field has found out over the last 20 years, they will continue to get diminishing returns until they start borrowing from nature's own engine of intelligence.


You are exhibiting a deep misunderstanding of human intelligence.

Ken Jennings thought that a woman of loose morals could be called a "hoe" (with an "e", which makes no sense!), when the correct answer was "rake". Is Ken Jennings therefore inhuman?


You do realize Ray Kurzweil is behind the initial technology of Nuance, which Apple uses for Siri now.


That's roughly correct, but IIRC Ray sold that company 30 years ago; it later went on to buy Nuance, and subsequently quite a few more speech-related companies before hooking up with Apple for Siri. So while your comment is correct, I'd be surprised if any of that initial technology was actually being used for Siri.


> Sure, he did some interesting work a long time ago, before he got weird.

You know, it would be wonderful if Ray Kurzweil actually works on software/hardware projects, and he's just hush-hush because he doesn't want to release experiments. Maybe he does more than writing books and speaking at conferences, and he secretly provisions ec2 clusters to experiment with Hadoop or whatever. Maybe he's not just some old geezer that pops lots of pills, maybe he's an old geezer that pops pills and writes Go.

At least, that's what I tell myself to not be as angry about his "prediction from a distance" branding.

On a somewhat related note, http://heybryan.org/fernhout/ has some old emails someone sent to Ray, exploring his lack of involvement in the open source transhumanist hardware/software community.


Thanks in part to the popularity of his books, movie, and speeches, Kurzweil now knows pretty much every AI researcher on the planet

Um...

More important question: how many AI researchers respect the last 20 years of his work?


He's a "connector", a large company need people like him. Even if he was a sub-par engineer, and I bet he's not, he would still be a valuable hire, especially if they want to rebrand themselves as an "AI company".


Re-brand? Google has been an AI company since 1998.


I think Peter Norvig may have been the one to invite him to give this speech at Google last month:

http://www.youtube.com/watch?v=zihTWh5i2C4





