Nvidia CEO says Google is the only customer building its own silicon at scale (cnbc.com)
222 points by lawrenceyan on Aug 16, 2019 | 162 comments



Nvidia really lucked out with VR, crypto and AI all happening at once and all just happening to need exactly what they make (surely they can't have seen all that coming deliberately?)

I wonder if consumer VR would have fared better if gamers didn't have to compete with miners and data centres for chips and had more reasonably priced cards a few years back.


Sure, but the really lucky break IMO was a decade of INTC repeatedly punching itself in the face w/r to manycore computation. I was at Nvidia from 2001 to 2011 (and now back) and I spent 2006-2011 as one of the very first CUDA programmers. It was pretty obvious within a week or two that this technology was going to be huge, and I more or less made my career with it.

But instead of stepping up to the plate and igniting a Red Queen's Race that would have benefited everyone, INTC first tried to discredit the technology repeatedly, then they built an absolutely dreadful series of decelerators that demonstrated how badly they didn't understand manycore. Eventually, they gave up, and now they're playing catch-up by buying companies that get within striking distance of NVDA rather than building really cool technology from within.

Now if someone threw a large pile of money at AMD again, things could get really interesting IMO. But the piles of stupid money seem biased towards throwing ~$5M per layer at the pets.com of AI companies these days.


Nvidia's coup was in getting people to switch to a different programming model and rewrite their code to achieve the necessary performance. Intel, especially in upper management, is stuffed with people who assumed that was impossible. And faced with new competition from GPUs, the only acceptable response to management was the many-x86, "you don't have to rewrite your code to get performance" approach which didn't actually work out.

It's not that Intel doesn't have people that recognize the issues, but rather the people who do have that foresight are drowned out by people who don't realize the game has changed. Intel, to be fair, does have the best autovectorizer--but designing vector code from scratch in a purpose-built vector language is still going to produce better results, as shown when ispc beat the vectorizer.

But Nvidia can also get drunk on its own kool-aid, just as Intel has been. Nvidia's marketing would have you believe that switching to GPUs magically makes you gain performance, but if your code isn't really amenable to a vector programming style, then GPUs aren't going to speed your code up, and the shift from CPU-based supercomputers to GPU-based supercomputers isn't going to leave you happy. There's still room for third-way architectures, and that space is anyone's game.
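To make "amenable to a vector programming style" concrete, here's a minimal CUDA sketch (an illustration, not anyone's production code). A GPU only wins when the work decomposes into many independent per-element operations like this; each thread owns one element and there are no cross-iteration dependencies.

    // Minimal data-parallel example: y = a*x + y, one thread per element.
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <vector>

    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<float> hx(n, 1.0f), hy(n, 2.0f);
        float *dx, *dy;
        cudaMalloc(&dx, n * sizeof(float));
        cudaMalloc(&dy, n * sizeof(float));
        cudaMemcpy(dx, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy.data(), n * sizeof(float), cudaMemcpyHostToDevice);

        saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);  // 4096 blocks of 256 threads

        cudaMemcpy(hy.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);
        printf("y[0] = %.1f\n", hy[0]);  // expect 5.0
        cudaFree(dx);
        cudaFree(dy);
        return 0;
    }

If your hot loop doesn't look roughly like that (it chases pointers, or each step depends on the previous one), no marketing slide will get you the advertised speedup.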


People are all full of foresight until their foresight doesn't work. Which is most of the time when working at the cutting edge of Tech. That's when companies crash and burn. Intel has not, despite missing the boat on big calls almost 10 to 15 times now. What does that say about Intel? People think missing the call is a sign of bad management.

Bad management is when your company evaporates because you make one bad call.

Management gets points for surviving and then fighting back despite being wrong. And when you look at Intel's history, there are few companies on the planet who have managed to do that multiple times. They have a good mix of people who know what they are doing technically AND people who do whatever it takes to keep the company from sinking when those bad technical calls happen.

If Nvidia survives whatever their next bad call may be, expect them to start looking more and more like Intel.


"Bad management is when your company evaporates because you make one bad call."

That's a pretty low bar!


Intel was founded in '68. They've been around since the inception of the integrated circuit. Their war chest of patents, dollars, and assets/resources (i.e., foundries) is helpful in soaking up damage from bad calls in a way that no one else can match.

AMD could never have made the same gamble Intel did with Itanium. There's a long technical argument as to whether the world is better off, in a technical CPU-design sense, because of that, but I disagree that it's necessarily good management of Intel that's allowed it to recover from disaster.

The best management can play the hand they're dealt perfectly and still lose. However, bad management can play the best hand poorly and still win.


If you don't have a fundamentally serial workload (and usually either you don't or you have a lot of them you can parallelize across tasks) and you are willing to write bespoke CUDA code for that workload, Nvidia is telling the truth.

CUDA's sweet spot lies between embarrassingly parallel (for which ASICs and FPGAs rule the world because these are generally pure compute with low memory bandwidth overhead) and serial (for which CPUs are still best), a place I call "annoyingly parallel." There are a lot of workloads in this space in my experience.

But if you don't satisfy both of the aforementioned requirements, and/or you insist on doing this all from someone else's code interfaced through a weakly typed, garbage-collected, global-interpreter-locked language, your mileage will vary greatly cough deep learning frameworks cough.

Finally, it doesn't matter who's doing it, marchitecturing(tm) drives me nuts too.
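As a rough illustration of the interpreted-framework point above (a sketch with made-up sizes, not a benchmark of any particular framework): when the work per call is tiny, fixed launch latency and PCIe copies dominate, no matter how fast the GPU is.

    // Many tiny host<->device round trips, the access pattern a chatty
    // high-level framework can end up generating.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void tiny(float *y) { y[threadIdx.x] += 1.0f; }

    int main() {
        const int n = 256;                         // deliberately tiny workload
        float h[n] = {0};
        float *d;
        cudaMalloc(&d, n * sizeof(float));

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        cudaEventRecord(start);
        for (int iter = 0; iter < 1000; ++iter) {
            cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);  // copy in
            tiny<<<1, n>>>(d);                                            // microseconds of launch overhead
            cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);  // copy out (also synchronizes)
        }
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("1000 tiny round trips: %.2f ms, h[0] = %.0f\n", ms, h[0]);
        cudaFree(d);
        return 0;
    }

Batch the work into fewer, fatter kernels and keep the data resident on the device and the picture changes completely; that restructuring is exactly the part a framework can't always do for you.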


They're not lying. To parallelize, you have to code in a different style - though it doesn't have to be CUDA. However, it's easier to enforce the style in a parallel-specific language, and it can help to support idioms.

Controlling the language certainly helps Nvidia's economic moat.


Intel also got burned by Itanium, though maybe if it had been a many-core design as well, the payoff would have been worth it. (Looking at the Cell processor the PS3 used, about a generation later - Itanium was first released in 2001, and Cell development started around then - the idea was probably around at the time and didn't seem to pay off very well there either...)

Arguably one of the things that Nvidia really got right was learning from those past failures at other companies and making it easier for developers to utilize the platform starting from a standpoint that they were familiar with and helpfully nudging them towards what would run fast in parallel.


> Intel, especially in upper management, is stuffed with people who assumed that was impossible. And faced with new competition from GPUs, the only acceptable response to management was the many-x86, "you don't have to rewrite your code to get performance" approach which didn't actually work out.

I think the big disconnect is thinking they had to make people rewrite code. CUDA often targets entirely new codebases, and in some cases new types of applications.

The "rewriting of code" is mostly for things like AV processing and codecs where there was such a sellable benefit in performance it would have been insane for them not to invest the effort.

Intel was doubly hindered here, because they wanted everything to use x86. Intel had trade secret and patent protections from competitors, and a critical mass of marketshare.

Parallel programming was something that had to fit into that "x86 for everything" mindset rather than being a separate/competing technology to x86. The company that pushed winmodems and software sound cards wasn't going to be able to lead the disruption there.


Intel's SIMD autovectorizer against NV's SIMT was like bringing a sword to a machine-gun war. The fact that Intel's own ispc beat it too should have shown them there was an entirely different class of weaponry they should've been developing. Not only did they not respond adequately, they doubled down on Xeon Phi... That's future textbook material right there.

Only now, more than a decade later, do they realize their mistake and try to correct the juggernaut's course. Such glacial mistakes in this industry can deal death blows to even the largest entities.


My pet conspiracy theory is that Intel pushed Xeon Phi so hard in order to sell an expensive but mostly useless HPC system to China. And now they got burnt and are rolling their own tech.


>but rather the people who do have that foresight are drowned out by people who don't realize the game has changed.

They drowned out Pat Gelsinger in the late '00s, then Justin Rattner and many others retired in the early '10s; the rest is history.


> But instead of stepping up to the plate and igniting a Red Queen's Race that would have benefited everyone, INTC first tried to discredit the technology repeatedly, then they built an absolutely dreadful series of decelerators that demonstrated how badly they didn't understand manycore. Eventually, they gave up, and now they're playing catch-up by buying companies that get within striking distance of NVDA rather than building really cool technology from within.

Reminds me of "First they ignore you, then they laugh at you, then they fight you, then you win".

So many companies have this reaction (e.g. RIM with BlackBerry). Wondering if this is some kind of "corporate instinct".


This is "The innovator's dilemma" material. TL.DR. yes, it's a perfectly rational (but short term biased) reaction that corporations mostly can not resist having.


The thing is, it works 98% of the time: the scrappy upstart gets laughed out of business.

There is a sort of survivorship bias in focusing on the 2% and assuming that's the norm. Corporations act this way because it generally works.

Like the OP said though, it does lead to arrogance over time, and that's when a fall happens.


A corporation (the management, that is) also acts this way because they know their maneuverability is just about the same as that of the Titanic. So while they (maybe) prepare a response, knowing full well they're already late to the party, they try to discredit the startup, hoping it will at the very least slow it down.

Corporations can't really be as successful at innovating as a startup. A startup is free to build or reshape itself into anything, focus on a single thing, and pivot on a dime. In a corporation, the same structures that hold it up and keep it moving are the ones that resist it changing direction or promoting something new. It's easy to lose focus and get lost in the red tape.

And that's before you consider the risk a CEO sees in potentially cannibalizing their own (currently successful) business or just throwing money down the drain at 98 losing ideas. Like you said, 2% of ideas may be successful so a corporation would rather let the startup play it out and then buy it if it has potential. Easier to justify to investors.

So corporations innovate when they have nothing to lose and any risk is worth taking. See MS somewhat successfully reinventing themselves after seeing mobile and FOSS booming. Private companies also have an easier time innovating because they have no investor pressure. They may be behemoths but at least they can avoid suffering from the "too many cooks in the kitchen" syndrome.


>because they know their maneuverability is just about the same as that of the Titanic.

I know this to be true, however I cannot understand for the life of me why this is the case.

If I was the CEO or CTO and had, say, 5k people under me, you had better believe there would be dozens of little 3-4 person teams doing hard research on threats and coming up with recommendations to get in front of them.

I mean this is basic 1st year MBA SWOT Analysis stuff.


From what I've read, above 150 people [0] things start to break down a bit. Social relationships and coordination break down. You no longer know everybody, decisions aren't based on trust anymore, you cannot maintain a flat hierarchy, etc. The structures that support the "behemoth" with tens of thousands of employees spread across the world make it more rigid. With hundreds of teams, products, services, and managers, office politics becomes a very real thing and people start having their own plans and ambitions. People stop pulling together towards the single goal because there is no single goal anymore.

And having lots of teams "innovating" is also not that great. You'll just end up with a stack of 100 great ideas on your desk but only 2 that might make money. Your job is to guess which 2. Any decision you take will be heavily scrutinized by everyone in the company and by shareholders. You may just go the safe way that worked over the past few years and put a bonus on the table.

A 10-20-100 person startup with everybody in the same office and a very flat structure will be a lot more agile. The people are all there for that one single purpose, and the dynamic is quite different. Once the goal is reached many just move on. This provides a very different motivation vs. the typical corporate employee.

[0] https://qz.com/846530/something-weird-happens-to-companies-w...


Even if you know what is coming, that doesn't guarantee you can outmaneuver it. When you are dealing with thousands of people, contracts with hundreds to tens of thousands of customers, and infrastructure based on the assumptions that make your existing business tick, a fundamental change that costs a new competitor $0 to make, because they don't have any of that built-up stuff, could cost a fortune for you to counter.

Holding the place of an incumbent has advantages and disadvantages. Sometimes you can't leverage the advantages, and that's when a company can get buried by the upstart the worst.


But all it takes is a 1/50 shot to sink your business or overlook a key feature. That's why it's a real thing, and a 98% success rate is not very good at all.


I was just thinking about it as I wrote it. I think that's correct and glad you brought it up.


> Now if someone threw a large pile of money at AMD again, things could get really interesting IMO.

That somebody might be Intel. I believe Intel is still contesting and has not yet paid the 1.06 billion euro antitrust fine imposed on it by the European Commission in 2009 over its conduct against AMD. Hearsay claims that it was the similar US settlement Intel paid to AMD that basically paid for Zen R&D...


Is there a reason for referring to Intel as INTC and Nvidia as NVDA? Is this just the internal Nvidia jargon or something?

It seems gratuitously confusing for readers, and doesn’t seem to have any benefit I can see.


I always wonder what people who feel it worthwhile to shave off 2 letters from a word do with all the free time they gain.


> Is this just the internal Nvidia jargon or something?

Stock symbols


Yes it’s obvious that these cryptic 4-letter abbreviations are ticker symbols, but my point is that most people refer to companies by their names, not their ticker symbols. I’m wondering if using the latter is something common internal to Nvidia, or if there’s some other explanation.


Possibly a hangover from when sending data across the wire was expensive and being able to uniquely identify companies with just a few letters was a huge cost-saving innovation.


In my experience it is common when talking with people that invest in individual stocks--either personally or professionally.


Why would you say INTC and not Intel? Are you deliberately trying to be confusing?


Are you hiring?


Do you mind putting an email in your profile (or by reply)? I'd like to get your opinion on something but would rather not ask here.


I don’t think it affected the adoption of VR much. OTOH it might be very beneficial in the long run that VR had a slow start. Probably the most dangerous thing to happen here is if customers are forever scared away due to bad but expensive early experiences.

As far as I can tell, the hardware development continued and the next wave of VR will have much higher quality and more performant GPUs to make a good first impression.

It would look different if the entire VR field had collapsed due to low sales, but tbh it looks like it's maturing slow and steady, and that's how it should be.

So essentially, high GPU prices might have given the field just enough time to mature with patient early adopters before it goes mainstream. At least that's how I hope it will be.


As an anecdote, it kept me from VR for a little over a year. Getting a card was pretty annoying, and when it came to the scalping-like price increases, it was a social dynamic I chose not to participate in.

Though afterwards, what got me to uninstall everything was a combination of Facebook/Oculus being untrustworthy and the low playerbase/matchmaking problem in Echo Arena.


Yeah, I remember setting a SlickDeals alert for a "Titan Xp" and watching inventory disappear because I didn't get to their site within 2 mins.


My wife and I waited until just now to buy our first truly VR-capable card, mainly due to inflated GPU prices. I feel like waiting was ultimately the better choice because the capable cards are way better and cheaper now, but I definitely would have bought into the VR market if not for crypto.


My own experience with VR was that it is held back by an extremely clunky interface. Nobody wants to wear those huge headsets for long periods of time, and you can't even move around in the virtual world because of all the claptrap that is connected to it.

I had much better luck getting people to try things like Daydream, which have extremely limited processing power but could do cartoony graphics just fine.

I think the industry really missed an opportunity to start with lightweight AR. Let the users walk around by showing them where they are and overlay things on top. There are plenty of useful applications; many require precision sensing that phones don't have, but there are others that can be done with just an accelerometer and a camera.


I think it's been well-known in the industry since the early days of VR that people don't like clunky headsets. But it's not like you can just will into existence a small, lightweight, standalone headset that delivers a smooth VR experience. There are hard technical problems to solve: inside-out tracking, battery life, heat management, optics, etc. The Oculus Quest is a step on the way there, but even after spending billions of dollars, they still have a long way to go.

The problem with lightweight AR is that with current technology, it's not that useful. If you've ever tried Google Glass or Focals by North, there's not much you can do with them that couldn't be done better with a smartwatch [1].

And if you try to pack a larger FOV display or more processing power into the glasses, you end up with something like HoloLens, and then you've got a similar problem to the VR headsets -- it's probably not something that you'll want to wear for long periods of time (you can't anyway due to battery life), and certainly not something you'd wear walking down the street.

It's not that these are missed opportunities. On the contrary, there are a lot of people who have been working on them for years with billions of dollars spent in the process. But they're hard, and it will take time to get there.

---

[1] The one exception was being able to capture quick moments with the Glass camera -- but of course, that raised privacy concerns.


> Nobody wants to wear those huge headsets for long periods of time, and you can't even move around in the virtual world because of all the claptrap that is connected to it.

This is part of why I find Facebook's Quest so compelling.


Also a Quest owner, and find it incredibly compelling. Very good content overall. Still a long way to go, but as a standalone headset there are some great capabilities that I'm seeing that can't be done with a tethered headset. Check out the new space pirates arena videos. I've already modded the headset; look up frankenquest (makes a huge difference IMO). VRCover added. Also unlocked and sideloading.


Ya, the Quest has been one of those lifestyle-changing devices for me. Still, you are kind of stuck with limited mobility even without wires, but at least you can set up the space you use for VR almost anywhere.


You gotta give the CEO credit for investing in CUDA for years before any of these use cases appeared. He knew that it would be massively better at parallel computing than the CPU but it was a chicken-and-egg problem. No one was going to figure out the use cases until it existed.


GPGPU has been around for at least 20 years and the roadmap was clear. Pretty sure it came down to execution.


I remember being in uni in the early 2000s and going to a conference where one of the presentations was about how they tricked a GPU into "rendering" an image that was basically a simulation of fire propagation, and how it was faster (about an order of magnitude) than the fastest CPU simulator. They had a few issues because it was complicated, and the texture data types were not exactly what they needed, and, and, and...

And then CUDA emerged.

So it's not like they magically thought of GPGPU -- there were people working on it before CUDA, but yeah, they had the vision and invested the money and hours of work to make it happen.
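For contrast, here's roughly what that kind of simulation step looks like once you can write it directly as a CUDA kernel instead of disguising it as a render pass (a toy diffusion sketch, not the simulator from the talk):

    // Toy explicit diffusion step on a 2D grid, ping-ponging two buffers.
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <utility>
    #include <vector>

    __global__ void spread(const float *in, float *out, int w, int h) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x <= 0 || y <= 0 || x >= w - 1 || y >= h - 1) return;  // leave borders fixed
        int i = y * w + x;
        out[i] = in[i] + 0.2f * (in[i - 1] + in[i + 1] + in[i - w] + in[i + w] - 4.0f * in[i]);
    }

    int main() {
        const int w = 512, h = 512;
        const size_t bytes = w * h * sizeof(float);
        std::vector<float> grid(w * h, 0.0f);
        grid[(h / 2) * w + w / 2] = 1000.0f;            // a single hot "ignition" cell
        float *din, *dout;
        cudaMalloc(&din, bytes);
        cudaMalloc(&dout, bytes);
        cudaMemcpy(din, grid.data(), bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dout, din, bytes, cudaMemcpyDeviceToDevice);   // keep borders initialised

        dim3 threads(16, 16), blocks((w + 15) / 16, (h + 15) / 16);
        for (int step = 0; step < 100; ++step) {
            spread<<<blocks, threads>>>(din, dout, w, h);
            std::swap(din, dout);                        // output becomes next step's input
        }

        cudaMemcpy(grid.data(), din, bytes, cudaMemcpyDeviceToHost);
        printf("centre cell after 100 steps: %f\n", grid[(h / 2) * w + w / 2]);
        cudaFree(din);
        cudaFree(dout);
        return 0;
    }

Pre-CUDA, each of those steps would have been a full-screen quad rendered into a floating-point texture with the update encoded in a fragment shader: the same math, just contorted through the graphics pipeline.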


I wrote a shader to do computation once. It wasn't too bad actually.


> I wonder if consumer VR would have fared better if gamers didn't have to compete with miners and data centres for chips and had more reasonably priced cards a few years back.

I don't think VR changed the GPU demand landscape at all. Most of that market has high overlap to those already doing PC gaming. And if you're already doing PC gaming, you probably already have a GPU perfectly capable of doing VR.

Consumer VR would fare better if there were a halo game to justify the investment. Entry-level VR still starts at $400, and there aren't a lot of games where you go "yes, I must try this," and fewer still that are also VR-exclusive to really force you into the VR experience.


Beat Saber is VR’s killer app ATM, especially for non-hardcore gamers and fitness enthusiasts. In fact, I think VR’s main appeal will be to non-gamers, which makes the less powerful GPU on the quest a non issue.


>I think VR’s main appeal will be to non-gamers, which makes the less powerful GPU on the quest a non issue.

Those non-gamers are still going to want all sorts of things going on in their VR scenes, with framerates high enough that 10% of the population doesn't get motion sick.


Ya, but we are almost already there for rhythm games. The biggest problem with the experience is the latency between action and effect (slicing a block and feeling its impact), which seems to push 100 or 200 ms sometimes; the bottleneck seems to be general processing ATM.


> VR’s killer app ATM

There is no such thing as a 'killer app' "ATM" - a 'killer app' by definition is one that transforms technology/society and that, afterwards, we 'cannot do without' (or at least think so) - e.g. personal computers+spreadsheet, smartphones+mobile internet, etc.

If a 'killer app' is 'at the moment', it isn't a killer app because there is a 'next' moment, implying that the previous 'killer app' wasn't 'killer' because we can now do without it


That’s exactly what Beat Saber is. I bought an Oculus Quest purely due to Beat Saber. Everything else it can do was/is/will be a bonus!


People bought the Wii because they thought it was transformative. Yet people got sick of standing the whole time to play. Same with VR.


If someone lost 40 pounds playing beat saber everyday, wouldn’t that be transformative to them?


I don’t think there is any significant money in VR today in terms of revenue.

As for crypto and AI: I don’t think anyone saw that coming, but it was a safe bet that, one day, a technology would emerge that could use this kind of calculation power. And that promoting and investing in GPU computing in general was thus a smart thing to do. IOW: making your own luck.


I don't think it's luck. It's just that Nvidia has good strategy and execution to leverage their core GPU technology to a number of different areas that can be benefited with massive parallel processing.

A couple years back, when I was researching for stock investment, I looked into NVDA's product offering. I used to think NVDA was only good at video cards but was surprised to find they had already fanned out to a number of different areas with their GPU technology. Basically anything that needs massive parallel computing is a candidate for the GPU. From memory, they were into supercomputing, cloud computing, animation farms, CAD, visualization, simulation, automotive (lots of cars have Nvidia chips), and of course VR/AI. Crypto I don't think was on their product roadmap. It just happened.


> Nvidia really lucked out with VR, crypto and AI all happening at once

Not really. After the crypto kerfuffle, Team Green is very careful about jumping on minute trends like "AI," and very reasonably so. The first wave of cookie-cutter "AI" companies are already beginning to offload their GPUs.


I see a lot of negative comments here about how "AI" is a fad. While I don't think its uses are as great as advertised in the business sector, which is where most of this forum seems to originate, AI applications in academic disciplines, especially the natural sciences, haven't even begun scratching the surface. You can make a fantastic living keeping on top of the latest ML papers and having a degree in a physical science. There are so many obvious problems to be solved by AI algorithms that people haven't even gotten around to applying CNNs to yet. Source: me, with 500 paper ideas in my head.


As someone in this space, I think that most people vastly underestimate the actual practical considerations in converting a neat CNN or ML algorithm into a usable product. It's trivial to use the newest flavor of ML to write an academic paper and hold up a new high score for a particular problem. It is very hard to make that a usable commercial product, because often 95% accuracy is not good enough, or it's better to have a deterministic failure mode than to have a semi-random one.


Scientific research doesn't always relate to 'products' - but scientific research still happens and needs to happen.


I agree with this a lot. When discussing this with someone who has been around ML/optimization since the 70s, he pointed out that it took 20 years for simple things like Kalman filters to make their way out into all the places in the world where they turned out to be useful. That is likely true of more modern ML technologies.


I'm still unconvinced Machine Learning can do much here. ML can predict something, but it cannot tell us why. You're only solving half the problem of 'science'.


I don't think they saw crypto coming, but AI and VR were definitely not just luck.


Of course AI wasn’t just luck. NVIDIA had been a player in supercomputing well before the deep learning wave started. CUDA was released in 2007.


Wouldn't it be the inverse? That these fields are happening because of the graphics cards advancements?


> if gamers didn’t have to compete with miners and data centres for chips and had more reasonably priced cards a few years back.

Except in the short term, doesn't increased demand for chips generally decrease unit prices rather than increase them? Do you really think there was such a large shock, and prices rose so fast, that this permanently hobbled VR uptake even years later, when chips are cheaper than they would be in the absence of crypto and DL?


It’s not luck as much as it was realizing early on that parallel computing was going to be big and getting CUDA out there before the world needed it.


Highly doubt that crypto mining affected VR adoption. The technology simply isn't there yet to be mainstream. It will be in 5 years, though.


VR always seems to be just 5 years away.

I'm starting to come around to the idea that it's here now and the size of the market is just pretty much what it's going to be.


It might be longer than five years, but the market has massive potential. I've owned a bunch of headsets, and I demo them to anyone who wants to try one. The reactions go like this: initial shock and awe (more and more so as the tech has improved), followed by childlike glee and fascination. I've shown skeptics an Apollo 11 recreation, telling them to just try it for five minutes, and no one has spent less than an hour yet.

They come out amazed and wondering why these aren't everywhere. Then they ask how much it costs (~$4000) and their excitement vanishes.

You can get a setup for much less than that now, but it's blurrier, slower, uglier, more nauseating, and lacks important features like finger tracking. It won't be smartphone-level for a long time, if ever, but once the tech reaches affordable levels, I'm convinced there will be a larger audience.


Out of curiosity, what $4000 VR setup are you showing people?


Valve Index $1k

Vive trackers $300

2080 Ti $1200 (I suppose I should count this by its new "low" price, but I got it early)

Overclocked i7-6700k and high end motherboard, closed loop cooler with better fans, ssds, psu, case, ram etc ~$1400

= ~$3900

The important parts are the Index, the 2080 Ti, and a CPU with high single-thread performance. If you lose the trackers (more trouble than they're worth, really) and go more budget on the other parts you can put together something equivalent for under $3000, but not by much.


I would guess Vive Pro/Valve Index with an expensive PC and a bunch of accessories.


VR (and AR) is amazing to demo. I had the reactions you listed the first time I played the blocky pterodactyl game in 1991.

If they can get rid of the headset requirement, then I think the potential is almost limitless.


Honestly, so far I am convinced that VR will be the next 3D TV or Google Glass... a fad.

It will make users go "wow" at first, then they toy a little with it, then recognize that there just isn't that much great stuff you can actually do with it (as a recreational user) especially considering how clumsy and annoying the gear is and will remain for the foreseeable future.

Sure, it will still have a following, and there still will be current and new special purposes where the technology actually makes sense, but I cannot imagine it will see true wide adoption on smartphone or even TV scale.


Have you tried VR?

Glass & 3D TV never added anything meaningful to the mix. They were just extensions of technology that already existed. VR entirely changes the paradigm of how we interact with computers.

Even if it's "only" VR gaming that takes off, that is an entirely new medium of storytelling for artists to explore. We don't see those often.

Remember that DOOM was more popular than Windows - games are often all the system seller you need.


There's no way VR is just a fad. It's unimaginable that our interface with electronics is going to be a 2D screen forever.

Yeah, it's clumsy and annoying now, but I don't think it'll be for much longer. The Quest is super usable already.


VR is a fundamentally different technology than both 3d TV and Glass (which is very successful in industry, and AR in general will be very valuable in those applications going forward).


Yes, Glass and the few competitors are somewhat successful in industry. That's what I meant by "special purposes".

Still, the number of units sold is probably measured in the tens or hundreds of thousands, not in millions, let alone billions of units.


AR might be the winner. In addition to the clunkiness of the gear that you mentioned, it's a bit disorienting (and sometimes unsafe) to be removed from your present environment.


I mean, technically we don't know who Satoshi is. Maybe it was Nvidia all along.


A well-known Googler once told me that if you aren't building your own silicon, you don't know security. This is one of my main reasons for being all in on GCloud.


You aren't building your own silicon (Google is), so you haven't exactly solved that problem. You're trusting that Google is doing what you want.

Besides, they haven't exactly replaced their supply chain and brought everything in house. Much of their hardware is coming from the same sources as their competitors.


If you aren't auditing your dependencies, you don't know security. But we don't do that either, we outsource it.

This is a reasonable thing to outsource: let someone qualified build their own.


But their custom silicon allows them to mitigate many of the risks.


Given the recent hardware-level security issues that Intel and others have suffered, I think it's safe to say that security is hard. Just building silicon doesn't make you a security expert, clearly.


> Just building silicon doesn't make you a security expert, clearly.

What about building silicon and having Project Zero-level security experts in house?


But do they, though? Does Project Zero review all products made by Google on top of the rest of the web? I doubt their oversight happens at the silicon design stage. Maybe once it's out they battle-test it, but I doubt it'd happen before that.


Project Zero is a tiny subset of Google's security team. And while they do have an unusual concentration of extremely competent security researchers, it's not like they are the only competent security researchers/engineers in the company. Similar research happens internally, at all levels of the stack, including designing silicon (e.g. Titan chips that were shown publicly).


Google's Elie Burzstein and Jean Michel Picod gave a fascinating talk at Defcon last week about silicon-level SCAs: https://elie.net/talk/a-hackerguide-to-deep-learning-based-s...

Disclosure: also work at G.


It's possible the Project Zero team consults or throws ideas to other teams. It is possible Google is able to recruit additional people of similar caliber for the silicon design teams.


Intel and AMD have security experts too.


I'll just say this:

Samsung has a project zero team that secures their smart TVs for obvious reasons.

A friend of mine cracked all their security from his couch at home and now works for them as an external security expert.

I can quote one of those "senior security expert architects" from Samsung during their meeting with my friend:

"It's not possible you found a level zero vulnerability in our software, we are the best of the best."

A few minutes later that security architect was fired...

TL;DR: in-house experts often don't mean much; there is a whole world of even better experts who sit on the couch at home and hate the corporate world...


Sometimes I wish this was Reddit, because I would love to send this comment over to r/thathappened.

I sincerely doubt your entire story, especially the "few minutes later that security architect was fired" portion.


You are right, I have no way to prove that what I'm saying is true (besides posting their contract here, which would be considered a breach of it). Still, it's a funny story which helped me look differently at security in big corporations.


Didn’t the NSA tap Google’s backbone? Everything was unencrypted; all they needed were taps at willing backbone providers. [1] Who needs to beat the silicon when you can just tap the pipes? If Bill Barr’s recent encryption speech is ever realized, silicon won’t matter.

[1] https://www.washingtonpost.com/business/technology/google-en...


And Google responded by encrypting data immediately.


Google also responded immediately to getting hacked by the Chinese - no more Windows laptops or PCs for developers 10 years ago.

The security of Chrome is astounding (and certainly far better than Apple or Microsoft).

The only black stain is Android... (Edit: disclaimer: I use Android)


And proceeded to become the industry leader in "don't trust the network" defense in depth.


What other choice did they have though? Post the Shrug ¯\_(ツ)_/¯ emoticon on google.com and carry on?


I bet that's exactly what most companies did. Most enterprise WANs are probably still not encrypted today and it would bankrupt their IT departments to do it.


They can't win here on HN, can they? Mistakes happen. It's how you respond that counts as well.


The problem with acting on that statement is the assumption that Google will use their knowledge of security to protect everything their customers / service consumers want protected, such as privacy.

Google has a very fundamental conflict of interest w.r.t. privacy that stems both from revenue (advertising) and product development (e.g., labelling personal data acquired from consumers). Google will almost certainly act against its data producers' (individuals') best interests, based on its very deep requirements to obtain private benefits from not protecting them.


Google's privacy controls have been very good over the years.


How can this be asserted in a transparent way? Are you aware of any of Google's internal practices and use of data that have been audited by an external entity and published for review?

How would you rate Google's use of Location History to tell advertisers when you visit their store in terms of "very good privacy controls"?

"The AP learned of the issue from K. Shankari, a graduate researcher at UC Berkeley who studies the commuting patterns of volunteers in order to help urban planners. She noticed that her Android phone prompted her to rate a shopping trip to Kohl’s, even though she had turned Location History off.

“So how did Google Maps know where I was?” she asked in a blog post."

"At a Google Marketing Live summit in July, Google executives unveiled a new tool called “local campaigns” that dynamically uses ads to boost in-person store visits. It says it can measure how well a campaign drove foot traffic with data pulled from Google users’ location histories."

"Google also says location records stored in My Activity are used to target ads. Ad buyers can target ads to specific locations — say, a mile radius around a particular landmark — and typically have to pay more to reach this narrower audience." [1]

[1] https://www.apnews.com/828aefab64d4411bac257a07c1af0ecb


There's a huge difference between Google Maps telling advertisers when you visit a store and Google Maps serving you advertisements based on where you go without telling the advertisers where you are. Google, the creature of code and databases, certainly knows where you are at all times, just like your cell phone provider does. But if any advertiser or human being at Google knew that, it would be a huge breach of privacy.


There is a very large industry focused on "re-associating" so-called anonymized information back to PII. Store visits are very easy to re-associate, and ad targeting can be used to de-anonymize people as well.

It is virtually impossible to certify that any given disclosure, no matter how grouped, anonymized, fuzzed, etc. is incapable of being re-associated given other data and time.

I also do not subscribe to the rather expedient definition that your privacy hasn't been violated as long as no human has seen the data. That's an unsupportable claim. As long as my privacy is only a millisecond away from anyone's view on a whim, mistake, or trivial disclosure, or some automated system has made some decision that affects me based on my private data, I have been violated.


The more I read things like this, the more convinced I am that the Apple premium is worth every penny, just for the privacy payoff compared to Android.


Except maybe if you live in China I guess.


Lucky for me I don't...


True if you mean "control" in the sense of "package and sell".


Google apparently* has good, accessible privacy controls, however by using them to increase your privacy, you effectively break many of their apps. They are designed to work with your personal information.

* I have no way of knowing if they actually delete any personal data when I ask them. I suspect that they just don't show it to me anymore.


This is not true. Deleted data is indeed deleted. Making sure deleted data is actually deleted is super important to Google and any other company because of GDPR compliance.

This is the kind of thing that you go over in laborious detail in privacy design docs at Google to make sure you have it covered.

Source: I'm a Xoogler.


That's cool, but my privacy isn't about compliance. The government can't keep up with the likes of Google. The top-level comment was about the alignment problem between my interests and Google's. I'm sure there's plenty of cases where the data meets the government's definition of "anonymous" but it really isn't. Also I'm sure that there are cases that data is kept for law enforcement even though I've been told it's gone.


Memories of a talk in grad school where we had a guest from Google (also an alum of the school) presenting. He had been with Google from early days and had a hand in their biggest products, including Gmail. It was a very good talk, and I don't mean to disparage it, but someone from the audience asked him a question about user privacy and he answered it by describing how good Google's security is. Like, he didn't quite see the difference between privacy and security. It was illuminating.


AWS has Nitro, Microsoft has Azure Sphere, HPE has iLO, etc.


Nitro is crazy cool. I don't know if Google has anything publicly known that's equivalent.


Google has Titan for security but for I/O they have tended to do everything in software, presumably for the flexibility. But the new GVE might be a Nitro-style NIC.


This is ... not correct. Not by a long shot.

Unfortunately, not a lot of it is public.


I see this kind of comment from Googlers a lot, but either document your architecture or suffer the misinformation from people trying to understand it via reverse engineering. You can't have it both ways (I mean you probably can because Google, but you don't deserve to).


Public, private, whatever, doesn't matter. I thought Google was going to beat everyone to the cloud after I/O 2008 but ... nothing but slow progress. I don't know how The Book Seller and The OS Maker could leave Google - master of scaling, data and servers - in the dust in Gartner's Cloud Magic Quadrant, but they have. They have the chops by far; where's the bottleneck, sales?

I'm really disappointed, but I have hopes Kurian will shake it all up and move things up and to the right.


Part of the problem is that the rest of the world moves too slowly. They were using containers and SDN at planet scale when nobody even knew what that was. App Engine has been a PAAS product for over a decade. But everyone wanted IAAS instead because of inertia. AWS has first mover advantage. MS profits because of their existing enterprise relationships. GCP is making big moves and is growing quickly.


I'm okay with occasional misinformation, so ¯\_(ツ)_/¯

I just correct it when i see it, and don't worry about it otherwise.


AWS also has AWS Graviton processors (Arm based)


Can you elaborate on this please, or link me? I am genuinely interested - do they build their own servers or something?

I know they lay down their own network cables.


They certainly build their own networking gear (1) and their own TPU silicon, as well as laying down their own cables and even transatlantic cables (2). Then there are the pixel phones and laptops etc.

I can't readily find a link, but their rack-mount servers are, I believe, "custom" designs just for them (presumably done in-house too), although from what I know they use off-the-shelf CPUs from Intel and AMD (there was an announcement very recently that they were using Epyc now, for example) & GPUs etc.

It would not surprise me if the x86 CPUs & GPUs were the only external things they use, and I bet they're looking at their own custom ARM chips for certain workloads (like they use their custom TPUs for certain workloads).

1 - https://www.wired.com/2015/06/google-reveals-secret-gear-con... 2 - https://www.theregister.co.uk/2018/07/18/google_dunant_cable...


The book "In The Plex" confirms your statements about them designing their servers in-house.

But they probably use standard chipsets.


You can see from recent news about speculation of backdoors in hardware that any sort of trust has to start from the bottom up.



Do you think they had a special team for black-box testing of critical processors? I'd be curious to read how they assessed security from vendors.


Or you don’t have the resources that Google has.

This statement holds water only to a handful of companies that can actually afford to build their own silicon.


You seem to be assuming that the message was “if you understand what is going on, you will choose to build your own silicon”—if so, then, yes, it overlooks threshold resource needs to do that.

OTOH, the intended message could be “you can't fully understand the issues without the insight gained by building your own silicon”, in which case it would be just as true of companies without the resources to build their own silicon.


Security is about managing risks in a world where resources are limited.

Building your own silicon is the last thing you do to ensure security even if you are Google, you only do that when it’s the last broad viable attack vector left for your adversaries to exploit.


Most people use those handful of companies, so it holds a lot of water.


I’m not saying it doesn’t make sense to Google, I’m saying it’s too broad of a statement to be generalized.


Poor amazon and microsoft, unable to afford to build their own silicon


The world isn’t Amazon or Microsoft, and even for them it’s borderline questionable if designing silicon at scale is the best use of their resources at this point.

As I’ve stated already it’s the last thing you do when you think about security not the first.

When the security posture of your code, configuration, facilities, supply chain, etc. is so good that the only attack vector left for adversaries to exploit to compromise your organization in a sufficiently broad manner is the hardware, that's when you start thinking about building your own silicon.

Or when ofc there are actual business needs for this e.g. you want to be independent of other SoC/ASIC designers or there aren’t any solutions that meet your needs.

Which also comes to the actual point: other than the Google key/HSM solutions, which look more like a rebadge than a home-grown design, Google is focusing on things like the TPU for business reasons, aka $$$, not security.

Google isn’t going away from Xeon/EPYC or x86 any time soon, and I find it very questionable if anyone would make a security argument against NVIDIA GPUs as far as Google goes since Google even wrote their own driver stack and CUDA compiler.


Intel can afford to do its own silicon, and they are not doing really well security-wise right now.

But that was not the message; it's about understanding risks.


Doesn't AWS make their own NICs to support the security features in their custom network stack?


Do they build their own NICs for security, or because it's the cheapest way of segregating network traffic?

As in, relying on logical separation of client traffic rather than physical separation, so they don't need as many physical ports, switches, and most importantly network cables (often the highest actual cost in many data centers as far as networking goes)?


You can get that with VLANs.

Buying gold-plated cables?


Not at AWS's scale. You can only have 4094 VLANs.


Per switch. Much, much more if you use VXLAN. My point is that they're not doing it for separation, as normal NICs are capable of that.


Per LAN, not necessarily per switch; and overall, getting past the 12-bit VLAN ID limit takes a lot of work.

I'm not sure if anyone actually uses VXLAN yet.

You didn't actually make a point, because you haven't provided proof that what Amazon did for AWS wasn't done because of operational requirements.


And if you’re building your own silicon, you don’t know business. Which explains the pathetic market share of GCloud, and the source of your statement.


Is Apple not counted because they're not favoring Nvidia at the moment?


I think Apple is not counted because this article was only about datacenter stuff, and Apple doesn't factor in there yet.

It's not to say that Apple doesn't matter to nvidia as a customer or potential customer, just that it has nothing to do with this article because Apple doesn't have as much datacenter footprint... yet.

But just as a point of scale, Apple spending $10B on datacenters globally over 5 years is less than Google spends in a single year in just the US.


And on the flip side, if this was an article about consumer gpus for computers, then Apple would probably feature in the article as a customer (or obvious non-customer), whereas nobody would even consider mentioning Google.


Apple has basically bailed out of building modern developer workstations for the time being because of their rejection of Nvidia. It's so frustrating. I miss the perfect tool that my 2013 macbook pro was when I bought it. eGPUs are not an option and neither side is supporting drivers anymore. Eventually something like CUDA might come to the platform and they'll get back in it but IMO Nvidia holds so many of the cards right now it's crazy. The next couple years are going to be an interesting shift after nearly a decade of amazing workstation standardization.


It depends on what you’re developing. I have zero need for anything except an on-chip GPU for my development, but that's only because I develop for the web. If I need horsepower on the go, I can VPN into my home workstation or server. If you’re doing ML or even video editing, a multi-GPU desktop will be so much faster than any portable laptop.

However, I agree that eGPUs aren’t a solution. I had a bad experience with Nvidia on my 2011 MBP, but my 2017 MBP with an AMD chip is as good as the 2011 was in its day.


I lucked out a few years ago and ended up at a startup that was focused on scaling GPU computation for a web service. I'd spent 15 years being a web/mobile developer before that: first Perl/PHP for about 5 years, and the last 10 mostly Python/C/C++/Obj-C/C#/Go.

What I can say about that, and about the current thing that's happening to computation, is that we will all be writing code for vector processors soon. It's a sea change, and the throughput is an edge for anyone who can use it now and in the near future. That's what this article is about. Google is the only other company making vector processors for compute at scale, and they have the BEST tools for people making use of them in their cloud service, even though generally they are provisioning Nvidia GPUs.

AWS doesn't ship computers and is lagging behind in their compute services. Nvidia with CUDA changes the landscape in a crazy way. You might not care about it now, but having an understanding of it is, IMO, critical to anyone that plans to be working on computers in the next 2-10 years unless you have a really untouchable position. Even if you think your position is untouchable, you might be in for a shock when someone blows the doors off your business logic with CUDA and you can't catch up.

Regardless of GPU choice, "write once" (or close to it) cross platform compute shaders[0][1][2] are coming in 2020 and there's no way anyone is going to bump CUDA out of being at the front of that.

[0] https://en.wikipedia.org/wiki/WebGPU

[1] https://www.khronos.org/vulkan/

[2] https://github.com/9ballsyndrome/WebGL_Compute_shader


That's exactly what I thought as well


For people that are interested in Nvidia and AMD's strategy, there is an interesting podcast about it: https://ark-invest.com/research/podcast/nvidia-podcast-crypt...


> Notably forecasted the global Crypto crash of 2k18... an opinion against the grain in his field.

Was his field /r/bitcoin? I'm pretty sure the entire world saw that one coming.


Hindsight is 20/20 and all, but - I'm sure most everyone knew it would crash (just like today's stock market); the speculation was on when.


Even for someone who correctly predicted the when, how can we know if it was luck or skill?


>how can we know if it was luck or skill?

Easy, it's luck.


Hopefully higher competitive pressure on Nvidia, including from Intel joining the high-end GPU market, will finally push Nvidia to upstream their Linux driver, or at the very least to unblock Nouveau so it can use GPU reclocking properly.

Google mentioned open source drivers as one of the reasons for picking AMD GPUs for Stadia.


There are many good reasons to embrace better integration with open source. I think benefits like helping users with custom/broad distro needs and increasing the velocity of collaboration with the ecosystem and partners outweigh factors like 'competition.'

The cogs do turn, albeit slower than people prefer.

https://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-O...


Stadia is going to be huge for open source. I just hope both AMD and NVIDIA will provide a virtual GPU driver like Intel offers for its embedded graphics. That might actually be the end of Windows' dominance.


The only customer of NVIDIA, contracting NVIDIA to fab chips for them.

Microsoft's also been developing and using ASICs in Azure. I guess they're just not contracting NVIDIA for any part of the process.


>The only customer of NVIDIA, contracting NVIDIA to fab chips for them.

Nvidia doesn't own fabs.

The article says nothing about Google contracting Nvidia to make chips for them, only that Nvidia is not particularly concerned about competition from Google.


It'd be interesting to see where Google fits in the list of the world's largest computer manufacturers. I suspect that if they made their numbers public, they'd be way higher up than anyone expected.



