Google to Buy Artificial Intelligence Startup DeepMind for $400M (recode.net)
352 points by jamesjyu on Jan 27, 2014 | 199 comments



I remember that they had an impressive demo at the NIPS Deep Learning Workshop in December (probably a highlight for me and a few people I talked to afterwards) of a reinforcement learning agent playing Atari 2600 Games where the input consisted of an image that was fed through a Convolutional Neural Network. There is a paper on Arxiv that describes the approach ("Playing Atari with Deep Reinforcement Learning"[0]) but unfortunately I don't think they made the videos available.

DeepMind has a whole bunch of talented and serious people, so this is an exciting acquisition.

[0] http://arxiv.org/pdf/1312.5602v1.pdf


I actually read this paper a couple of weeks ago as part of a deep learning reading group that I co-run. While several of these authors are household names in the RL community, this paper was not actually that impressive to me.

The only real "deep" learning here is that they used a GPU library and stochastic gradient descent to perform Q-learning updates on a network with 3 large hidden layers. It was an interesting application paper, but I suspect that the Google acquisition is for something more novel than this work.
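For anyone curious what that amounts to in practice, here is a minimal sketch of a single Q-learning update done by SGD on a small convnet. The layer sizes, 84x84 input frames, replay-buffer size, and learning rate are illustrative assumptions, not the paper's exact setup, and it uses a modern framework purely for brevity:

  # Sketch only: DQN-style Q-learning step; hyperparameters are assumptions.
  import random
  from collections import deque
  import torch
  import torch.nn as nn

  n_actions, gamma = 4, 0.99   # small action set and discount factor (assumed)

  # Tiny convnet: a stack of four 84x84 grayscale frames -> one Q-value per action.
  q_net = nn.Sequential(
      nn.Conv2d(4, 16, kernel_size=8, stride=4), nn.ReLU(),
      nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
      nn.Flatten(),
      nn.Linear(32 * 9 * 9, 256), nn.ReLU(),
      nn.Linear(256, n_actions),
  )
  optimizer = torch.optim.SGD(q_net.parameters(), lr=1e-3)
  # Experience replay buffer of (s, a, r, s', done) tensors;
  # a is a long tensor of action indices, done is stored as 0./1.
  replay = deque(maxlen=10_000)

  def train_step(batch_size=32):
      """One stochastic gradient step on the temporal-difference objective."""
      if len(replay) < batch_size:
          return
      s, a, r, s2, done = map(torch.stack, zip(*random.sample(replay, batch_size)))
      q_sa = q_net(s).gather(1, a.view(-1, 1)).squeeze(1)   # Q(s, a) for actions taken
      with torch.no_grad():                                 # bootstrapped TD target
          target = r + gamma * q_net(s2).max(dim=1).values * (1 - done)
      loss = nn.functional.smooth_l1_loss(q_sa, target)
      optimizer.zero_grad()
      loss.backward()
      optimizer.step()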


> surpasses a human expert on three of them

I hazard it's not very impressed with your Space Invaders score, either.


We worked on a similar experiment two years ago (deep learning + a reinforcement learning algorithm + some innovations, to learn to play Atari 2600 games). We obtained similar scores in the games we tested, but we did not submit any paper because we considered the scores not good enough. In particular, for Space Invaders, you can easily get 600 points by hiding behind a shelter while continuously firing, without ever learning how to avoid the bullets.

So, I was not impressed by their results on Space Invaders.

Overall, we struggled to learn long-term strategies (finding pure reactive strategies is easy) and to learn to avoid bullets. They did too: "The games Q*bert, Seaquest, Space Invaders, on which we are far from human performance, are more challenging because they require the network to find a strategy that extends over long time scales."

=> that's the real challenge...


About this deep learning reading group you mentioned: may anyone join in?


Sorry, it's for UT Austin PhD students only.


I suspect Google wants to port this to their D-Wave and probably sell API access to it. Rent-an-AI.



And what's interesting is they seem to be hiring a whole lot more [1]. If they are hiring a "growth hacker" they must be close to having something that's out in people's hands.

[1] http://workinstartups.com/job-board/jobs-at/deepmind-technol...


So, Google's list so far:

- Some top ML talent: Geoffrey Hinton, Sebastian Thrun, Peter Norvig, Jeff Dean, Andrew Ng.

- One of D-Wave's quantum computers to establish their 'Quantum Artificial Intelligence Lab'.

- Creepy robot maker Boston Dynamics.

- A stack of other robotics companies: Schaft Inc., Industrial Perception, Redwood Robotics, Meka Robotics, Holomni, Bot & Dolly, Autofuss.

- DeepMind, obviously.

Am I missing any?


Ray Kurzweil


Zagat


Nest to apply ML techniques to home automation.


pretty sure a lot of cat herding is about to ensue


Ohhh, did Google buy them? Or did the AI figure that the best place to launch world domination from was inside Google so it forged a letter from Larry and Sergey offering to acquire the company? :-)


Larry Page has been replaced by an android clone since the so-called "throat operation". Don't you think it's a little weird that they've started buying all sorts of robot and AI companies lately to create the ultimate machine?


AI vs Aliens from Space vs Zombies!! The future is exciting!!!


I see someone else has read Avogadro Corp ( http://avogadrocorp.com/ ).

I found that book to be one of the most unsettlingly plausible premises for an AI. In particular I liked that it didn't go with the typical Hollywood idea of AI as a "soul/homunculus conjured into a machine" but instead presented a very effective decision-making agent.


Yup, that is an awesome tale.


We're not commenting at YouTube level. We're in sheer awe at someone having the chutzpah to dump hundreds of millions of dollars into realizing their scifi fever dreams.


I purchase my fever dreams in dogecoin.


Here's a talk the founder gave in 2010 for Singularity U about combining neuroscience and machine learning.

http://vimeo.com/17513841


It seems that Google has now gone full Singularist. They jumped the shark when they hired Kurzweil.


I don't really understand the objective of those like Kurzweil. What do they see as the benefit of the singularity? Wouldn't a drastically more advanced intelligence just dwarf and dissolve any lesser intelligence that tried to meld with it? It would be complete ego death.


>Wouldn't a drastically more advanced intelligence just dwarf and dissolve any lesser intelligence that tried to meld with it? It would be complete ego death.

A lot of the singularity people think that superhuman AI is inevitable whether they work on it or not, but if the first AI isn't a friendly one (as opposed to a paperclip optimizer that happily turns the planet into paperclips) then you don't get a second chance.


Also called "a self-fulfilling prophecy"; you make something happen by the simple statement "it will happen".

It's also a smart move for your career: You can create your own job and attract lots of funding for your research, or sell your business for a lot.


It sounds like a piece of fearmongering nastiness until you think of it in terms of mathematics rather than psychology. A so-called "artificial intelligence" is just an extremely sophisticated active, online learning agent designed to maximize some utility function or (equivalently) minimize some loss function. There's no term in a loss function for "kill all humans", but neither is there one for "do what humans want", or better yet, "do what humans would want if they weren't such complete morons half the time".
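To put the same point in symbols: a generic learning agent of this kind just picks the policy that maximizes the expected discounted reward it is given,

  \pi^{*} \;=\; \arg\max_{\pi} \; \mathbb{E}\Big[ \textstyle\sum_{t \ge 0} \gamma^{t} \, r(s_t, a_t) \Big]

and whatever is (or isn't) encoded in r(s, a) is the entirety of what it "cares about". "Do what humans want" enters the objective only if someone manages to write it into r, which is precisely the hard part.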


Or as Eliezer nicely phrased it once in a paper[0],

> "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

[0] - http://yudkowsky.net/singularity/ai-risk


This objection implies there is something better than turning the planet into paperclips.


> What do they see as the benefit of the singularity?

Living forever / getting rid of death, fixing world problems, countless amazing technological achievements, colonizing the galaxy. In probable order of happening.

> Wouldn't a drastically more advanced intelligence just dwarf and dissolve any lesser intelligence that tried to meld with it?

Not necessarily; that's an actual research field.


> Not necessarily; that's an actual research field.

http://intelligence.org/research/, also Omohundro's and Bostrom's works.


The universe still dies, so you still die. A trillion years is still quite finite and small.


"Small" compared to what?


Infinity. 80-ish years might not seem like much to you, but to an insect that only lives a few days it might as well be a trillion years. The point I'm trying to make is that living a really long time is not living forever, and even if your death date is the same as the universe's, however many trillions of years out from now, you're still going to die eventually, and still going to worry about it.


None of that disputes the fact that longer is still better (obviously with all else, like quality of life, being equal).


Well, I'd love to worry about it for all those billions of additional years. Maybe we'll even figure out a way around the problem in that time? Our understanding of the laws of physics is far from complete.


IMO, there is no infinity in the real world.


If by "real world", you mean the world of things, then I would agree with you. But the world is the totality of all facts, not the totality of all things.


Basically this: http://www.smbc-comics.com/?id=1968

Keep in mind that creating intelligence more advanced than ours, however easy it is to state aphoristically, is a REALLY HARD task. I doubt anything will happen within our lifetimes unless it is by some major fluke.


If you think it's inevitable, wouldn't you rather be in control of its launch than someone else?


The idea that AI will be many times more capable than human intelligence is a result of biases. Microprocessors went from 4 bits to 64 or more, and clock speeds went from MHz to GHz. It may not be possible to extrapolate the progress of microprocessors to intelligence.


You have no more basis for claiming this than AI researchers have for claiming the opposite. More research is required, and it will be well-spent effort if the probability of a bad outcome in AI development is nonzero.


To be immortal is his main goal. Also, have a look at any modern tech business and you will see more advanced intelligences serving less advanced ones (who are, however, good at schmoozing).


The idea is to build a "friendly AI" which doesn't dissolve us.


This is a goal of which Google has made no acknowledgment whatsoever.

So let's hope they haven't gone quite as nuts as they seem to have gone.


As a goal, I'm sure they're thinking about it. Unfriendly AI is part of pop culture, after all. As a hard mathematical problem, not so much (I was following Ben Goertzel's OpenCog project for quite some time, and he seems to have this view, even though he's affiliated with the Machine Intelligence Research Institute, which has Friendliness as its principal concern). The trouble is that if Friendliness is the slightest bit harder to implement than just doing the most naive thing, then this approach will fail.


Question from someone ignorant on the subject: if we can select and breed friendlier and friendlier animals (it has been done easily with wild foxes and other wild animals, not to mention dogs), why would it be hard to create AIs with friendliness and non-violence at their core?


Because we don't have a clue what "human values" are, or what it means for an AI (or an animal, or frankly, other humans) to be "friendly".

We can sweep this problem under the rug in the case of pets; it's not as if they could start making weapons and organizing an army. With human-level intelligence, we already have a friendliness problem with fellow humans, and in the case of potential superhuman minds we need to be damn sure that they don't do something stupid (in our opinion) like taking all the resources of our planet and using them to tile the solar system with paperclips.

As the saying goes, "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

See also http://wiki.lesswrong.com/wiki/Paperclip_maximizer.


Perhaps to illustrate the concept with an example: is George W. Bush considered "Friendly"? What about Obama? Would an AI that behaved like them (according to some humans, trying to bring peace and security to a country) be considered "Friendly"?


Does that mean that having "our way" overrides what AI knows is in the best interest of the planet and humanity? Will this AI just lend itself to be used in harmful ways by dubious and invariably less intelligent people?


While philosophically interesting, the question of what the AI would do without restrictions is hard enough that asking what it would do with that restriction placed on it is just as impossible to answer.

I'd put it in the same ballpark as "what would that grurple (kind of a greenish purple) colour look like if we removed blue from it?". Sure, you could say orange, but the nature of grurple is undefined enough that it would be really hard to conclude that you're right.


The trick is to program the AI in such a way that it thinks it's in our best interest for it to be friendly. Read up on utility functions.


You want humanity to be the terminal phase of sentience in the galaxy?


Fair point. I guess it's a bit like a self-sacrifice.


Or having kids.


That's probably a lot more intimate. It will probably feel more like going from unicellular to multicellular.


I don't think a public company answerable to investors and other stakeholders will ever work toward a sci-fi futuristic concept.

But AI definitely has a lot of benefits, from self-driving cars to automation in manufacturing. And that is just the very beginning. Google's lines of business can benefit immensely from such work. There is endless money to be made, and it is better to grab it while it is low-hanging fruit.

Most of what Kurzweil has to say is about robotics, nanotechnology, and AI, which have a direct benefit to humanity in the immediate future. Humans can have longer lives, the world can have a smaller population, and many diseases we know can be eradicated. Problems like hunger, pollution, and disease can be solved. The list is endless. Why wouldn't anyone want a share of that business?

And yes, the whole AI-taking-over-the-world-and-making-us-extinct thing: it won't happen like flipping a switch. I believe even a runaway superintelligence will still need biological life forms for its own survival.

Like someone mentioned in this thread, copies of yourself will continue to live in the cloud, and such copies will provide a great wealth of insight for the machines themselves to survive.


> I don't think a public company answerable to investors and other stake holders will ever work toward a SciFi futuristic concept.

I think Google has a proven track record of working on whatever they find interesting/beneficial to humanity, not necessarily minding shareholders - cf. self-driving cars, Project Loon, Project 10^100, and whatever that clean-energy work was (I recall them working on wind energy or something).

If anyone is going to pull off something like a real AI, I'd bet it will be Google - it's the only company I know that has the size, manpower, minds, money, know-how, and a healthy attitude toward profits (bettering humankind > short-term gains) all in one place.


There is no objective, the singularity is just a highly involved live-action role-playing game for a group of mathematicians.


At best, a copy of you living in VR gets to live until the sun explodes. Which at least is better defined than "heaven".


Well, when you scratch the surface, it's not really any better defined than heaven. You're gonna have to spec out 'copy', 'you' and 'living'.


We have some ideas on how to make "copy" work, and what "you" and "living" is. We also know that we can in principle achieve it by our own strength as a civilization. We can see the path from "here" to "there". Which seems much better defined than heaven.


Or a copy of you living in VR gets to be tortured endlessly until the sun explodes. Which at least is better defined than "hell".

[Thanks to Messrs Banks, Stross and Vinge for that particular cheery thought].


1) The sun does not explode. It just swells to a red giant. This happens quickly for a star, once the process starts, but it's still much longer than a human lifespan.

2) Presumes our intelligence is deemed valuable enough to devote those resources to.


And does it matter? I mean, since it's a copy, to your biological "you" it's no different than if they put some other random person in VR and told you it was you in there.

I find the 'upload your brain to a hard drive' thing silly unless they figure out how to have your biological consciousness seamlessly move to the VR, like you were taking a bus ride and ended up in this new place.


In the Old Man's War books they cover that by having the old and new bodies staring at each other while they move the consciousness. So one minute you're staring at a new body, then you're staring at both, and then gradually the vision of the new body fades out and you're left staring at your old body.

I think something similar is the only way you're going to get people to want to do this. Though whether or not that is possible is way beyond me.


Sounds like an idea. If I really had to.

I would still have a similar problem with "teleporters" (of the Star Trek kind), though.


Is ego death such a bad thing? I have heard accounts from people who claim to have experienced it and enjoyed it.


> I don't really understand the objective of those like Kurzweil.

I suppose AI will one day be able to explain this to us?


To many, love.


He mentioned Jeff Hawkins's theory only very briefly. I think it is very relevant in this context because it gives a high-level theory of how the neocortex works.

I can really recommend his book, On Intelligence, where he explains it quite understandably.

It explains why the brain has developed the different hierarchical layers of the visual system and how the same principle works everywhere in the neocortex. It's basically all predictions of time series at different abstraction levels.


Agree! But I'm biased and work for Numenta. Jeff will be glad to hear you liked his book. P.S. Your site is down http://www.downforeveryoneorjustme.com/www.az2000.de


Same talk w/ big slides and tiny Demis: http://chris.ill-logic.com/systems-neuroscience/


Same talk on YouTube if Vimeo is slow. http://www.youtube.com/watch?v=F5PSyu7booU


Coming to the thread late, slides can be found here: http://chris.ill-logic.com/systems-neuroscience/ (plus the video)


$400M for a company I've never heard of before is quite surprising. Could someone who participates in the machine learning community share any insights or facts they have on the company? Other than the founders' histories, the article doesn't do much justice to the price tag.


As far as I can tell, quite a few ML people can vouch for someone at the company (I know someone there second-hand myself), but the details of what precisely they're doing have been kept fairly well under wraps. The broad parameters are "something to do with Artificial General Intelligence (AGI)", which is a field that tries to straddle the fine line between contemporary AI and sci-fi AI, without erring too far in the direction of either the too-incrementalist or the too-pie-in-the-sky. More specifically on the "methods" side, they have people who are known in both statistical ML and in the recent resurgence of neural networks known as "deep learning".

You can discern some more about their general direction by looking at courses they've been involved with, talks they've given or sponsored, etc., but as far as I know (and I tried to probe a few months ago through a friend who knew someone there) their actual product / business / etc. hasn't really been leaked, or at least not leaked widely enough that I could find out about it.


I'm not so sure their technology is as futuristic as everyone thinks it is. If I had to take an educated guess, I would say it's some powerful AI that makes their knowledge graph smarter. Currently Google's Knowledge Graph uses more structured data sets and depends on a mechanism like this: http://www.zachvanness.com/nanobird_relevancy_engine.pdf

But the real challenge is to make the knowledge graph update in real time and take meaning from something as unstructured as a blog post or an email. And to do something like that requires some really unique AI.

--mjn - I totally agree!


Oh, I'm decidedly agnostic on whether it really is futuristic AI. Just there is some buzz around it, and some of the people involved in it are definitely legit, and have also involved themselves in the "AGI" community, which leads to such speculation (which they've pretty deliberately cultivated). That doesn't prove they've Solved AI for any sci-fi-ish notion of Solved AI. They could just have some good but in the big picture fairly modest knowledge-graph tech, or they could even have not-that-good knowledge-graph tech with great PR! Hard to say without knowing any details.


Deep learning is pretty impressive as a supplement to knowledge engineering approaches.

Google's Deep Learning team were the people who developed the algorithm that discovered cats on YouTube (without supervision). Presumably this team had something that impressed them.

The weakness to knowledge engineering approaches is that they tend to be fragile - they break badly with small holes in recorded knowledge. The IBM Watson team has a great video that showed how the different definitions of "fluid" and "liquid" meant a correct answer would have been missed if evidence collected in the answer verification phase of the DeepQA pipeline (no relation to Deep Learning) hadn't overridden it.

Edit: Your(?) paper on your(?) relevancy engine is interesting. It seems like an application of skip-grams (which, ironically enough, are heavily used by the DeepQA answer verification phase mentioned above).
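(For anyone who hasn't run into the term: a skip-gram is just an n-gram that's allowed to skip over a bounded number of intervening words, so related phrasings can still match. A toy generator, purely illustrative and unrelated to either paper's actual code:)

  def skip_bigrams(tokens, max_skip=2):
      """All ordered word pairs with at most `max_skip` words between them."""
      pairs = []
      for i in range(len(tokens)):
          for j in range(i + 1, min(i + 2 + max_skip, len(tokens))):
              pairs.append((tokens[i], tokens[j]))
      return pairs

  print(skip_bigrams("the cat sat on the mat".split()))
  # [('the', 'cat'), ('the', 'sat'), ('the', 'on'), ('cat', 'sat'), ...]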


This has seriously piqued my curiosity. Do you think Google bought the product or the team? $400M is a big chunk of change for ~3 people.

Edit: peaked --> piqued, thanks to spiderPig!


If the article is correct, they bought the team.


Could 3 people be worth that much? This ranks as one of the ten most expensive acquisitions by Google[1].

1: http://venturebeat.com/2014/01/14/where-nest-ranks-among-goo...


Those three are just the co-founders. The whole team seems to be much bigger. The arxiv article linked above cites seven names already.


Article has been updated; company is about 50 people.


Or some patents.


50 people should cost ~$50 million if it's a very strict talent acquisition, maybe $200 million if they're really good. I also think there is some hard IP involved, or that they were close enough to a breakthrough for there to be hard IP.


It looks to me like Google really wants to build a monopoly on the best AI people in the world, and is willing to pay through the nose to do it. Given that we're getting to the point where AI is going to be useful in a lot of fields, I think it's actually a really good strategy. There would be no one competent left at another company to build a good competitor to completely autonomous self-driving cars, for example, which will give Google a de facto monopoly on the tech behind something that almost everyone is going to want in their cars asap.

Not to mention the enormous number of innovations they'll likely be able to churn out. Hopefully it's like an AI focused PARC, but with a competent tech company at the helm :-)


And the update to the article says they were competing with Google, Facebook, Baidu... it could be that Google was getting rid of the competition for talent, and gaining the ability to attract more talent.

Though if they're expected to do wonderful things, and they've been doing things for years... it seems a certainty that they have already done some of those wonderful things. And hence have something concrete worth acquiring. Which would explain the valuation.


$400 million for a 3-person startup that doesn't even have a proper website and isn't on Wikipedia seems crazy. But then again, Google knows a little more about future value than I do.


I think you're mixing up two types of acquisitions here.

There is a difference between Snapchat being worth $3 billion and Nest being worth $3 billion. The former gets the valuation based on users, the latter on talent and intellectual property.

Ditto here: $400 million is not buying you users, it's buying you raw talent and IP. Users can go off to another service in the blink of an eye - IP can't (talent can, but you can often structure the deal so that it won't for some time).

This could still be a terrible deal (I'm sure there are some people at Google still a little sore over Motorola, where the IP was valued far more than it ended up being worth), but for very different reasons.


Yeah, obviously if they don't have a trendy iOS app and a twitter presence, they couldn't possibly be worth anything! /s


piqued* my curiosity. Sorry, had to :)


"peaked" does at least make some sense as a replacement. I'd wager that it'll be common usage in a decade or two.


No facts about the company's work, but Yann LeCun says 3 of his former students + postdocs work at this company:

https://www.facebook.com/yann.lecun/posts/10151812982157143?...

If they're hiring his students, they probably have a high level of talent (speaking as a former -- and present, starting tomorrow -- student).


They have apparently been cornering the market for certain types of PhD and other high-end talent in a bigger-than-niche ML/AI category.


Google should buy Wolfram Research. Wolfram Alpha's functionality seems pretty decent.


>The DeepMind-Google ethics board, which DeepMind pushed for..

Damn, if DeepMind had to 'push' for an ethics board then that is a fairly bad sign. I am getting more worried.

[0] https://www.theinformation.com/Google-beat-Facebook-For-Deep...


At least someone pushed for it. It's reassuring to hear that the thought was on the radar at all.


This is my thought as well.


What's to say what is an ethical thing to do to an AI? Or that someone decades from now will be ethical? The most ethical thing would be to shut down the program.


FYI, Shane Legg's thesis was "Machine Super Intelligence". http://www.vetta.org/publications/


I'm really wondering where you have to go to grad school to get a thesis that blatantly speculative approved.


I skimmed it and didn't think it was that bad - it's kind of interesting to see a thesis that allows reasonably informed speculation. I don't know enough about ML to say whether the core results are actually of any value.

The title is a bit much but it does say "The title of this thesis is deliberately provocative".


I didn't say "speculative" as a bad thing. I meant it more with overtones of "brave" or "ballsy" or, let's just sum up, "heroic".


Ah yes - I did wonder if I had misinterpreted your comment after I posted mine... :-)


University of Lugano, apparently.

I also quote, from the preface, "This research was funded by the Swiss National Science Foundation under grants 2100-67712.0 and 200020-107616. Many funding agencies are not willing to support such blue-sky research. Their backing has been greatly appreciated."


Whatever the uni. He was working at IDSIA, one of the main contributors to deep learning. Also, his advisor, Marcus Hutter, did some serious work on AGI.


Did you read the paper? I skimmed it, and wouldn't describe it as "blatantly speculative." There are certainly predictions based on facts (or at least, claims which are believed to be facts, but I repeat myself), but I don't think those are out of place in a thesis.


I know Shane Legg from his work on AIXI, the mathematical model of universal general intelligence. Those are some heavy hitters.


I worked with Shane at Ben Goertzel's company Webmind about 12 years ago - a good guy.

A little off topic, but I suspect that one reason to sell themselves to Google is Google's infrastructure, both in the ability to easily run very large jobs and in the very nice development environment.


The DeepMind website [1] links to cofounder Demis Hassabis' biography on Wikipedia [2]. Read it. This guy is a world-class hacker!

[1] http://deepmind.com

[2] http://en.wikipedia.org/wiki/Demis_Hassabis


He did the AI for Black & White? My respect for him has just increased by an order of magnitude. Black & White's creature AI was nothing short of amazing.


My second-hand impression is that Richard Evans is responsible for most of what actually shipped in the game's creature AI. As the Wikipedia article says, Demis only briefly held the title of Lead AI Programmer on the project.


That's a nice CV, but he definitely wrote his own Wikipedia article.


I'm usually skeptical of these kinds of vanity articles (and I'm seeing many more lately), but this one kind of checks out [1]. It's got edits by many different IP addresses and handles stretching across nine years.

[1] http://en.wikipedia.org/w/index.php?title=Demis_Hassabis&dir...


I guess DeepMind's AI Wikipedia bot works pretty well.


Apparently Facebook also tried to acquire DeepMind. https://www.theinformation.com/Google-beat-Facebook-For-Deep...


Also interesting that Google agreed to DeepMind's proposal to create an AI ethics board. Wonder how Facebook responded to that stipulation during negotiations.


Is Google drunk? That's $4B in basically two weeks on unproven companies.


They have an assload of cash on hand and the singularity is coming at us like a freight train. I'd think they were drunk if they weren't snatching up everything relevant in sight.


> They have an assload of cash on hand...

I wonder if they managed to pay for DeepMind out of funds that are "stuck" offshore (i.e. earnings from outside the US that can't be repatriated without incurring a big tax bill).


Probably. Deepmind is a UK company based in London.


I never even considered that, probably very true.


"They have an assload of cash on hand"

Companies with excess cash are supposed to pay dividends and/or buy back stock.


I think it's debatable based on your view of a company's purpose: is it to serve its stockholders or is it to serve its customers?


Plus, as Buffett teaches, it's better to reinvest the dividends. Unless the shareholders think they can do better than the management, in which case they're free to divest.


Google isn't drunk-- it's just Buzzed. It will be fine to self-drive itself home.


Nest is available in every Home Depot, and fairly prominently. It has a product and it must have sales to some degree.

Deep Mind on the other hand...


Unproven to who? You?


There were a lot of pros at DeepMind. For example: Volodymyr Mnih, Andriy Mnih, Alex Graves, Koray Kavukcuoglu, Karol Gregor, Guillaume Desjardins, David Silver, and a bunch more I am forgetting.


I recently watched the interview with Eric Schmidt on CNBC. In it they asked about the singularity (as well as any mainstream finance-oriented television show can). The key quote I took away from it (and sorry, this is from memory) was "We've had the algorithms for AI for years, but the big difference between now and the past is we finally have the computing power for it".

My own speculation: one of the key lessons to come out of the translation work is "a billion is more than a million". When they thought they had processed enough data, it still wasn't enough. They may be scaling that concept even larger. At the same time, quantum computers SHOULD be getting to the point where they pass classical computers, and it's generally known that Google has had access to them.

If Rose's Law holds, I'd speculate that Google is ramping up to take advantage.


> At the same time, quantum computers SHOULD be getting to the point where they pass classical computers, ...

Where exactly would they pass classical computers? Legitimate question.

Because I've read this (http://www.scottaaronson.com/blog/?p=1643) today, which shows quantum computers probably will not be that much better at NP problems than classical ones.


I don't know anyone on the team, but Om Malik was not impressed and he's usually reliable:

https://twitter.com/om/status/427653907766972416

> A $400 million talent acquisition with little talent. That's how Google rolls now!

There's also word that it was more than $400m, perhaps as high as $500m. That's a lot for talent any way you slice it.


How is he qualified to judge ML talent? DeepMind wasn't just a 3 man team, Yann Lecun posted on G+ that 3 of his PhD students were on the team and I've heard that they were recruiting more people.


Exactly. Interestingly, Yann LeCun in his G+ post (https://plus.google.com/u/0/+YannLeCunPhD/posts/LnKBPi9cCWm) explicitly mentioned that the acquired ML scientists have been focused on reinforcement learning. They have also published an arXiv paper on the subject: http://arxiv.org/pdf/1312.5602v1.pdf.


I can't believe that they would pay $400M for a company without getting something more than just talent. They must have created something very impressive that will be able to plug into the Borg.


Better deal than $3 billion for a company that makes thermostats.


State of the art ML + thermostats ... I smell synergy ...


Look how much Facebook paid for Instagram.


If Instagram had decided to create a couple more features, it would have been a real threat to Facebook as the #1 social network.

With a lot of money to spend, this was a good move.

Acquisitions can also serve to kill the competition.


Did anybody else read this as "Goodbye Artificial Intelligence Startup DeepMind for $400M"?

  Google to Buy
   Google Buy
     GooBuy ~ Goodbye


Upvoted but disagree on behalf of Youtube, Android, Keyhole, etc.

Some exist independently, others are absorbed.


To be fair, YouTube could have been so much more. For example, they recently removed the video response feature while claiming the click-through rate was abysmal - it probably was, because the focus of YouTube is now more like "Newtube". TV now means on-demand and broadcast via the internet/web, simple and boring as that.


YouTube would have died by itself.

Or imagine what Yahoo would have done to it post-acquisition.

I know Apple would have called it iTube & put a price tag of $0.99 on all the decent ones :)


I met one of the founders a while ago. Pretty smart people, but it seems to me that the whole purpose of Silicon Valley is advertising. Not a world I want to live in. Google establishing an ethics committee for proper AI? Give me a break; we know how well that works.


A search on arXiv for DeepMind [1] finds that in the last few months they have been publishing some first-class stuff.

[1] http://search.arxiv.org:8081/?query=DeepMind&in=


Google ∩ Nest ∩ Boston Dynamics ∩ Ray Kurzweil ∩ Andrew Ng ∩ DeepMind = Domestic Robot?


= an attempt to implement a sci-fi-sounding idea that will be rewarded even if it fails.

That's good.


Dang, $400 million sounds like a lot for a landing page. Only kidding. Best of luck to those guys. They sound very talented.


Extracting millions out of acquirers should be done with the least possible investment. Seriously, if you can rook 'em in some way that can't blow back, go for it.


I don't know what this means. I understand the individual words....but not the meaning of your post


They're saying that they think Google was conned - or at least manipulated into a higher price than the company is worth.

This is the Internet, so I'm sure this random opinion is both well-informed and valid.


Ahh I see. Thanks. I trust him. Nobody knows best like an anonymous stranger on the internet.


Programmers might be among the unfortunate first to be replaced by AI! No programming at all; everything programs itself!


If one can write a program that can generate any program, the world as we know it would be such a different place that protecting my job will be the least of my worries. This is basically the equivalent of a runaway program that can create better versions of itself automatically.

Nearly every human's job is at stake here.


Visual programming a la Yahoo Pipes, with the computer creating the code itself in the background. Would be pretty cool...


And then programmers will be like John Carmack, building spaceships and rockets, while the computer does their job.


The missing piece for Google to become Skynet?


Robotics companies and now AI companies: Google seems to have much higher ambitions than the internet. Maybe they're watching too many sci-fi movies in their 20% time. :)


When you have the capability to achieve dreams as immense as Google can, it's best to reach for the most powerful dreams possible.


Or maybe they're watching exactly the right amount. :D


What do you mean, maybe? The evidence is in: Google's founders have gone full hard-scifi cyber-god take-over-the-world on us.


I am jealous: working on AI _and_ getting rich.


Well, Vicarious is still independent... http://vicarious.com/


Agh! Crap! I almost got a job with them!


I feel sorry for you :(. Are you a developer or a scientist?


I am a bit of a jack-of-all-trades. I generally bill myself as the "&" in R&D, bridging the gap between academic researchers & mathematicians and mainstream software engineering. I.e. I can do a bit of classifier design, but most of my focus tends to be on build systems, automated test systems, parameter tuning, data quality monitoring and all the nuts-and-bolts stuff that most researchers don't want to be bothered with. I am way too OCD to spend my day producing "research quality" code, and too wary of BS for my output to be just research reports and academic papers. I prefer shipping product, if you know what I mean?

I lost interest in DeepMind when I could not figure out what their business was after a couple of interviews. (The startup that I was previously working for had just run out of money). More fool me. Heh.


I still can't figure out what their business is to be frank. I get the research part but not the monetization part. I guess acqui-hire by design.


What impression did they give you during the interviews? In retrospect, do you think that was their whole goal, to get acquihired by Google?


Yes, in retrospect, I think that was the goal. They were interested in me for work on their machine vision product (my area of expertise), but the spiel was very much around how they had built up one of the world's best machine learning research groups. Over the years I have seen many great academics fail to produce much commercially, so perhaps I was less impressed with that than I should have been. :-)


So DeepMind goes for $400M but Snapchat for $4 billion? So Google thinks Snapchat is more important?


Not more important, more profitable. DeepMind is an investment, SnapChat is a revenue raiser. And a big ball of users to foist G+ on.


Consider also that it's not only Google's opinion that affects the price tag.


The price seems quite low, goes to show how bad most technologists are at bargaining.


Can someone shed some light on the reasoning behind paying $3.2 billion for Nest (thermostats) and only $400 million for DeepMind (artificial intelligence)? Is it really that Nest was a consumer-facing, revenue-positive company?


Well, if you keep in mind that Google's game is data collection, it makes sense that a product that gives them an even more detailed look into people's lives will help them monetize that somehow at some point. (I have been doing lots of macro-level SEO research lately and have come to the conclusion that Google is getting ready to make some big moves in the near future.)


DeepMind seems awfully close to Deep Thought - maybe it's $400M for the answer to life, the universe and everything? Would seem to be a good deal on those terms...


So does anyone know what they actually do in more detail?


https://angel.co/deepmind-technologies-limited

Looks like PayPal (tm) mafia extended family.


How do you know if you are talented enough to pursue an intricate field like Artificial Intelligence? Or is it merely a lot of hard work?


Pick a camp: symbolic or sub-symbolic. If the latter, do a neuroscience degree along with a computer science degree, get an insight while experimenting with biological brains, and make a start-up that capitalizes on your new idea.


Getting 2 degrees sounds like way too much work. How about reading plenty of books and research papers?

What do you mean by symbolic or sub-symbolic?



Video games with emergent AI and machine learning will be the next mainstream AI thing. Academic AI and game AI might somewhat merge...


How do we know it was 400M?


Machine learning and big data startups will strike it rich in 2014!


You mean those start-ups that were founded long enough ago to be among the first to capitalize on these ideas? Like DeepMind, which was founded in 2011?


AI and robotics... I for one am excited.


The question is, what is Google going to do with all these robotics and AI companies? And please, no singularity/skynet/robot-uprising BS; I'm expecting more from HN than YouTube-level comments.


Singularity is not BS, and I'm pretty sure they want to make it happen. It would be a shame if Google wanted all this AI talent to improve ad revenue or just for self-driving cars. I for one hope they are not that short-sighted. I think, hope, that with "all these robotics and AI companies", they literally want to build the future of mankind.


The singularity is BS because there's no such thing as a "machine surpassing human intelligence". Human intelligence is not a one-dimensional measurement that can be "surpassed". There's not much challenge in writing an AI that surpasses all humans on IQ tests. If you can't even say whether one human has surpassed another in intelligence, how are you going to measure a machine? In the same way that every one of us is intelligent in some specific areas, AIs are and will be intelligent in their own. We humans only have many things in common because we all have the same kind of body; that gives us the same physical needs, which results in the relatable experiences we call "common sense". If we, you and I, didn't have the same types of bodies, then even if we both could speak English, we could barely understand each other. That's why there will never be a point where AI becomes "unpredictable". It's just a pop-culture meme for people that don't want to think through the details, sitting on the same page as UFOs, spirits, and other I-want-to-believe crap.


> You can't even say if one human surpassed other human in intelligence, how are you gonna measure machine?

It's not because something is difficult to measure that it does not exist(1). Even if you don't believe in IQ tests, you gotta admit that those people Google just hired are more intelligent than the average Joe. They sure are more intelligent than me anyway. Probably more intelligent than you as well.

In the same way, the idea that one day a machine could be more intelligent than any human being will become very real the day a machine writes scientific papers, programs its own code, designs high-tech devices, wins a Nobel Prize, and stuff like that.

This machine would be more intelligent than Demis Hassabis (the founder of DeepMind) for exactly the same reasons that I can say that Demis Hassabis is more intelligent than me: he does more intelligent things.

1: for example, the famous conundrum "how long is the coast of Britain?" does not suggest that Britain has no coast.


> It's not because something is difficult to measure that it does not exist(1).

No, I don't say that it doesn't exist. What I say is that intelligence is not something to be expressed in one number; it doesn't work that way. Intelligence is more like an NxN matrix of numbers, where each number in the matrix represents an individual skill in some specific task. As for your example of the Google employee and the average Joe, what you mean by "more intelligent" is that the sum over that NxN matrix is larger for the employee than for the average Joe. However, if you take some particular entries, Joe might still have them higher. For the simplest example, the average Joe will know his house better than the Google employee who hasn't even been to Joe's house. And the same goes for AI; the entries of that supposed intelligence matrix will just be completely different from those of a human. An AI without a human body, human bodily needs, and hormones to control its behavior will never be anything like a human it could be compared to.

So in the end, I also think what you say is true, especially about machines writing scientific papers and generally doing science already out of the grasp of the human mind. My problem, I guess, is the measurement problem.

If someone said "the singularity will be when machines manipulate mathematical concepts and invent/discover and prove theorems that no human mathematician alive can understand", I would agree. But if they say "the machine is more intelligent than any human", I can't agree. That statement makes as much sense as "the singularity will be when apples are more fruit than any banana".


> What I say is that intelligence is not something to be expressed in one number, it doesn't work that way.

But nobody in the singularity field says that intelligence is a one-dimensional quality! It's a strawman.

It reminds me of a common accusation that "computer people" have subpar worldview because they "reason in 0s and 1s, and the world is not binary", to which I say that actually "computer people"'s view is superior because they figured that out long ago and developed proper methods to quantify and deal with uncertainty.

> Intelligence is more like NxN matrix of numbers, where each number in a matrix represent individual skill in some specific task.

This is also not a good model, because what we usually mean by intelligence is reasoning capability, not e.g. motor skill. You don't say a surgeon is smart because he can manipulate a blade with great precision; you say he's exceptionally skilled.

> For the simplest example the average Joe will know his house better than the Google employee who hasn't even been at Joe's house.

Put Average Joe and Google Employee in a house they have never seen before and see which one learns to navigate it faster - that's a way to measure intelligence. Not the knowledge, but the ability to process and use it.

> If someone would say "singularity will be when machines will manipulate mathematical concepts and invent/discover and prove theorems that no human mathematician alive can understand" I would agree. But if they say "machine is more intelligent than any human" I can't agree.

Saying "machine more intelligent than human" is just a shortcut for saying "machine that is able to reason about the world faster, better, with less biases than human; which will manipulate mental concepts and prove theorems out of reach for humans, as well as invent better technology, tackle human social problems better than humans do, etc. etd.".


> This is also not a good model, because what we usually mean by intelligence are reasoning capabilities, not e.g. motor skills.

Yes, the model is not good, I agree. But your understanding of intelligence is incorrect. You think that intelligence is a rate of learning, or in neuroscience terms, brain plasticity. First, plasticity is greatest at birth and gradually decreases as the brain matures. This would mean that a newborn is "more intelligent" than a 50-year-old man. Second, a fast rate of learning is not necessarily a good thing. If you've ever worked with neural networks, you know that when training one, you can adjust the rate at which the weights in the artificial neurons change. With a fast learning rate, for one, the network quickly overlearns, meaning it becomes too specialized and fitted to the exact cases it has experienced, and secondly, it can quickly "forget" what it has learned. A slow learning rate takes longer to learn, but the network is also more "stubborn" and doesn't give up on old beliefs so easily.
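A toy illustration of that learning-rate trade-off (made-up numbers, just to show the effect of the step size on a single-parameter estimate):

  # Toy example: estimating one value w from noisy samples with SGD.
  # A large step size chases whichever sample it saw last (fast, but it keeps
  # "forgetting" older data); a small step size is slower but more stable.
  import random

  random.seed(0)
  samples = [5.0 + random.gauss(0, 1.0) for _ in range(200)]  # true value: 5.0

  def fit(lr):
      w = 0.0
      for y in samples:
          w -= lr * 2 * (w - y)   # gradient of the squared error (w - y)^2
      return w

  print("lr=0.45 ->", round(fit(0.45), 3))  # ~ whatever the last samples were
  print("lr=0.01 ->", round(fit(0.01), 3))  # ~ the mean of all samples, reached slowly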

On the other hand, as you said, knowledge also does not mean intelligence. If it cannot learn from its mistakes, it is surely not intelligent. So I'd say intelligence is a combination of experience and plasticity.

> Saying "machine more intelligent than human" is just a shortcut for saying "machine that is able to reason about the world faster, better, with less biases than human; which will manipulate mental concepts and prove theorems out of reach for humans, as well as invent better technology, tackle human social problems better than humans do, etc. etd.".

So in other words, a machine becoming more proficient in some very specific skill or in multiple skills. Yes, that is common sense.

It's not that I disagree with the core idea of the singularity; I just find it pointless and unnecessary. Some might say "it brings attention to the field", but I'm not really sure it helps AI research, since it attracts the wrong kind of people rather than those who would do actual research.


Self driving cars?

Edit: and search is an AI problem. You have a user entering a string and you have to work out what they expect to see returned.

I think there might be feedback loops in Google's current system, so maybe better AI would help.


Obviously Google tries to build things for every aspect of our lives: intelligent machines in our homes, in our cars, at work. We will not see them in the near future, but in 5 or 10 years Google will start a product offensive.


I watched a fascinating documentary about DeepMind's most popular product:

http://www.youtube.com/watch?v=6QRvTv_tpw0



