How Google Wants to Rewire the Internet (nextplatform.com)
250 points by Katydid on July 17, 2017 | 101 comments



I really wanted to understand this article. I tried wikipediaing and googling some of the things I didn't really get (Jupiter is a thing... ok and Andromeda.. riiiight). Then I got to the chart "Conceptually, here is how Espresso plugs into the Google networking stack:", which was totally unparseable by me. All the green things look the same, but one of them is the thing this article is about (Espresso, right?), and Google somehow is represented by a vague dark-grey blob... I just don't get it.

Can anybody help? Am I simply not technically competent enough to consume this article yet?


Here's my attempt at an explanation, after reading the article.

Most companies' networks have edge routers (which sit at the points where they connect to other networks) and core routers (which manage the flow of traffic inside the network). All these routers basically use a standard protocol called BGP (Border Gateway Protocol), which is defined by RFC 4271.

However, BGP was designed from the viewpoint of individual machines making routing decisions and announcing routes to each other, announcements that collectively make up the whole Internet. This helps the Internet as a whole be quite resilient – if one network goes down, there are still ways to route traffic through to other networks. Also, since the protocol is standard, you can swap out one vendor's gear for another at will (in theory, anyway) as long as you know how to configure it correctly.

But this leads to some inefficiency – for instance, it is very hard to say that a path with fewer hops will lead to lower latency. What Google seems to have done is to make their edge routers into one single "intelligent" network, where the edge routers don't make routing decisions on their own, but feed their data into a central server. This central server can then say stuff like "My peering router in NYC seems to be under heavy load, let me redirect some of my traffic to NYC destinations through the NJ datacenter instead", or something to that effect; while still doing the correct BGP announcements from the point of view of Level3 or whoever is peering.

In short, they built their internal network from the ground up, since they are so big they can afford to build custom routing gear instead of using the standardized off-the-shelf setup that a small or medium-sized company uses. The network consisting of their custom edge routers (all the green blobs) is collectively called Espresso and is represented by a light grey circle.
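
To make the "central server" idea concrete, here is a minimal sketch in Python (purely illustrative – the names and the least-loaded policy are my invention, not Google's actual design):

  # Toy centralized egress selection -- illustrative only, not Google's design.
  class EgressController:
      def __init__(self):
          self.load = {}  # edge router name -> reported utilization (0.0 to 1.0)

      def report_load(self, router, utilization):
          # Edge routers feed their state to the controller instead of
          # making routing decisions on their own.
          self.load[router] = utilization

      def pick_egress(self, candidates):
          # Choose the least-loaded egress that can reach the destination.
          return min(candidates, key=lambda r: self.load.get(r, 0.0))

  controller = EgressController()
  controller.report_load("nyc-edge", 0.92)  # NYC peering router under heavy load
  controller.report_load("nj-edge", 0.35)
  print(controller.pick_egress(["nyc-edge", "nj-edge"]))  # -> nj-edge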


"Central server" for routing. Uh oh.

AT&T used to try to avoid centralization, but ended up with routing controlled from Bedminster, NJ.[1] An interesting comment from AT&T's NOC tour guide is that load doesn't vary much any more. AT&T used to have holiday calling surges and such, but now, in an always-on world, overall load is relatively steady.

[1] http://fortune.com/att-global-network-operations-center/


You can easily employ redundancy for core services. Core functionality doesn't need to be synonymous with a central point of failure. For example, you have three healthy copies of the core service running at all times, combined with failover. All software systems have a core function that must work or else the system fails, but that doesn't mean all software systems are centralized in any useful sense of the word.
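
A toy sketch of that pattern (all names hypothetical):

  # "Three healthy copies plus failover" in miniature -- hypothetical names.
  REPLICAS = ["controller-a", "controller-b", "controller-c"]

  def healthy(replica):
      # Stand-in for a real health check (heartbeats, leases, etc.).
      return replica != "controller-a"  # pretend one replica has failed

  def active_controller():
      # The service keeps working as long as any copy is alive, so a
      # "core" function need not be a single point of failure.
      for replica in REPLICAS:
          if healthy(replica):
              return replica
      raise RuntimeError("all replicas down")

  print(active_controller())  # -> controller-b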


The problem I would see with centralization of routing is not reliability, but rather susceptibility to censorship; as the old saying goes, "The Net interprets censorship as damage and routes around it". While one could argue that root DNS servers are central-ish, they are hosted all over the world, so a single actor can hardly impose worldwide censorship. It is a known fact that malicious actors can manipulate BGP (see Hacking Team), but it is still better than putting all of our collective eggs in Google's basket...

Distributed things though, mesh networks, IPFS - that's what's giving me hope.


It's interesting to know that a traditional ISP like AT&T is heading the same way.

I think Google gets more freedom to try out some of these techniques because people still fundamentally think of them as a website (apart from Google Fiber, they don't serve end-users directly); whereas AT&T, being an ISP, is treated more like a water / power service, in that people expect them to be working by default, and going down is absolutely unacceptable.


AT&T, pre-Internet, had 10 major regions in the US, and switches had a fixed list of primary, secondary, and tertiary routes. The first "centralization" was simply that the priorities in the routing tables were changed every few minutes based on load. But if the central routing planner went down or was unreachable, everything still worked, just not as optimally.

What you don't want is software-defined networking where every new flow goes to Master Control for validation and routing. Some SDN systems do that, and they have a central point of failure and censorship.


Google produces notoriously opaque and high-level articles with regard to its networking systems. You are not meant to get much meaning out of them. Instead you should stand back in awe at the amazing things they are doing.

Nearly every Google paper about Google networks in SIGCOMM is utter trash that would have been rejected in an instant due to lack of novel details if it had come from anyone else. They just brag about scale, utilization, and resiliency without actually giving anything back to the academic community.

Note: This does not apply to all Google papers (e.g. Spanner and the other database ones are pretty good). Their networking publications seem to be crippled by either IP concerns or poor academic writers.


It's a pretty terribly written article, and that infographic is also not very helpful. This article is like two people discussing "inside baseball."

The essence of the article is the evolution of Google's SDN (software-defined networking) architecture. It sounds like they have added SDN to their edge, whereas previously SDN existed mostly in their core. The edge SDN architecture is "Espresso" and the core SDN architecture is "Jupiter."

There is also some fishing about what exactly the Google network hardware is that runs all this.

The following article is far briefer but far more informative, in my opinion:

https://www.sdxcentral.com/articles/news/google-brings-sdn-p...

This is a white paper about Jupiter mentioned in the article and infographic:

https://static.googleusercontent.com/media/research.google.c...

And there are some good resources for just learning about SDN here:

https://www.opennetworking.org/sdn-resources/sdn-definition


I really hate the naming trend over the past 10-15 years of using common words for projects.

I can't really search for Espresso, or Jupiter, or Andromeda without additional qualifiers that I may not know yet.


Heh, for me that culminated when Microsoft released an Ubuntu subsystem for Windows.

The names, in order of popularity when it was originally announced (now they push WSL):

1. Bash for Ubuntu on Windows

2. Windows Bash Shell

3. Windows Subsystem for Linux (WSL)

Try googling WSL - it's the World Surf League. Windows Bash Shell is impossible, and so is Bash for Ubuntu on Windows. It's getting better as more and more articles are being written, but god almighty. I wouldn't be surprised if Google is lending a helping hand with searches for WSL stuff.


And the 3rd name is backwards. It ought to be called 'Linux Subsystem for Windows'.


As mentioned by others, the emphasis in the name is on the fact that it's a Windows Subsystem. The historical precedent for this name is "Windows Services for UNIX", which similarly provided a Unix environment on Windows NT.


THAT IS IT! I was wondering why the name always felt funky. Although the name they have now is technically correct if you ignore standard noun to adjective conventions.


I believe this, like many things in the Windows ecosystem, is deliberately confusing: it takes advantage of linguistic ambiguity to create confusion in the observer and a sort of implied sense or feeling of Microsoft superiority...

E.g. having Windows first sort of inflates the 'primacy' of Windows with respect to Linux; it is Linux that is being made compatible with Windows rather than the other way around.

The best examples that come to mind are from the Windows UI - like the network privacy zones, having the 'basic' and 'advanced' control panels, etc. Some things say 'Windows is updating your computer' rather than 'your computer is updating Windows' or some such... To me they have this sort of psychological undertone of 'always remember how Microsoft is helping you make this confusing thing much easier... you couldn't do it without us.'


...or it's missing an apostrophe.

  a. Windows' Subsystem for Linux

  b. Window's Subsystem for Linux
Each option has its own sort of context.


If you're making it a possessive using an apostrophe, then it should be "Windows's Subsystem for Linux" because Windows is singular in that context.


That's not the entire story, unfortunately. There are other rules. For example, Windows might be considered a family name, as there are several Windows Operating Systems. Also, this is a technical name, and technical writing often follows different rules, specifically in cases such as this.

For the possessive form of Windows I would look at the Microsoft Manual of Style, which for me is the fourth edition.

Page 184: Possessive Nouns

--------------

Do not use the possessive form of Microsoft. Do not use the possessive form of other company names unless you have no other choice. And do not use the possessive form of a product, service, or feature name. You can use these names as adjectives, or you can use an of construction instead.

Microsoft style:

  the Windows interface

  Microsoft products, services, and technologies

  Word templates
--------------


The correct possessive form for words ending in "s" is a dangling, trailing apostrophe, with omission of the extra "s."


> The correct possessive form for words ending in "s" is a dangling, trailing apostrophe, with omission of the extra "s."

That is correct (or at least nearly universally agreed) for possessives of plural nouns ending in "s". Style guides are mixed when it comes to other nouns ending in "s", though the most common rule seems to be to use "'s". While "windows" is plural, "Windows" as the name of the operating system is a proper noun that is not treated as plural.


I disagree. Microsoft is correct; mostly if not entirely.

Q: It's a subsystem? What kind of subsystem?

A: It's a Windows subsystem.

Q: A subsystem for what?

A: It's the Windows subsystem for Linux.

As another commenter mentioned, there's probably a missing possessive ("'s"), but even if we take the proper name to be more of an adjective modifying/qualifying "subsystem", it's a Windows subsystem whose purpose is running Linux... the Windows Subsystem for Linux.


But we don't have the Windows Subsystem for Win32, the Windows Manager for I/O, etc., so while the naming is technically correct, it is inconsistent.


Sure there's a subsystem for Win32.


"Windows Subsystem ___ Linux" could be fine, but "for" is not the correct word to put in that blank.


Why not "Linux Accompanied Windows"? Or "Still Windows but also some Linux"?


But Linux is the one part that is not present. It’s GNU software running in a Windows subsystem. GNU/Windows?


Not true in this instance. They implemented the Linux ABI, which means software can run on the Linux subsystem without recompilation.

Cygwin allows running GNU software on Windows, but it doesn't implement the Linux ABI. Therefore Cygwin requires software recompilation.

The big advantage of WSL is that you can run native Linux binaries.


#sigh# Clearly the joke was not funny if it had to be explained.

For decades, Richard Stallman has been calling it GNU/Linux, because the OS “aside from the kernel” was GNU, and Linux is the kernel. https://www.gnu.org/gnu/linux-and-gnu.en.html

Now, we have a Microsoft system that runs the Linux binaries, which are GNU according to Stallman, but not running them on the Linux kernel. You’re naming the entire system after the one component that is missing. By the same logic that normal Linux should be GNU/Linux, Linux containers on Windows should be GNU/Windows.


The whole point is having the Linux ABI. There are half a dozen ways to run GNU on Windows.


It's a filter bubble issue. When I first started Ruby, it had been popular for 10+ years already. Many articles.

Despite that, it was still impossible to search for. Look for Ruby, get gemstones. Look for gem, get even more gemstones. Eventually Google figured it out and now it's almost impossible to search for gemstones. If I search for gem almost all search results are about Ruby things. Same when searching for Ruby.

Even "rails" now returns almost only Ruby on Rails stuff.


Have you tried searching when logged out of google? I'd imagine a rail worker searching lots of train stuff would generally get the right results when they searched on their account.

But yea, worth bearing in mind when naming stuff.


Perhaps, but in incognito I still get Ruby on Rails if I search for "rails". I do get train info if I search for "rail".


Try the same searches from an incognito window. It could just be that Google learned what Ruby you are searching for.


I just did:

1. Download Ruby

2. Ruby entry on Wikipedia (about the gemstone)

All the rest on the first page are about the programming language in some way or the other (Rails, various books, various tutorial resources, Stackoverflow, trending on Github)


Indeed, it would've been far better if they called it WinBuntu Bash or some other similarly distinctive name.

...then again, WINE isn't exactly unique either.


The internal name is lxss. I'm sad they didn't make that the default, because it's very googlable.

That said, WSL tends to work as a search term if you pair it with whatever your issue is.

IMO the most ridiculous name in this whole story is their GitHub repo, Microsoft/BashOnWindows. I mean, bash has worked fine on Windows for decades, and it doesn't require an entire subsystem at all :-) WSL isn't half bad compared to that.


Also, when did ECMAScript conveniently become ES?

When I see someone mention ES6, I think it's Elasticsearch, but it's not.


To me it seems a return to an older tradition of names being names rather than descriptions (which inevitably fail to describe).

Using common words for names is also ancient and inevitable. In the modern age there is a practical argument against it - made-up nonsense is more googlable. But people making up names don't really care about that in any visceral way.


While your original point is valid, I did get decent results with "Google Espresso", "Google Jupiter Network", and "Google Andromeda Network". Though, in today's world, your Google results may look vastly different from mine.


I hate it too.

Especially if they take something with a somewhat awesome name. It waters the meaning down. My go-to example is Terraform, a glorified configuration manager that has absolutely nothing to do with the process of terraforming.


I can't really help because I'm not great at explaining things, but for some fundamentals, you need to understand BGP and label switching (MPLS).

The cleverness here is that they effectively turned their routers into label switchers, with the label that traffic is sent to determined by backend infra, not the router itself. They still have to run this over BGP of course, as that's how the internet works, but it is an interesting approach for optimizing return traffic routes.

Edit: Espresso is the name of the stack they use to make the label-switching decision; it's not a protocol or industry standard.
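
Roughly, the forwarding step reduces to a single table lookup, something like this toy illustration (labels and ports invented):

  # Toy label switching: one exact-match lookup per packet, no route
  # computation in the router. The table itself is installed by backend infra.
  LABEL_TABLE = {
      101: ("port-3", 205),  # incoming label -> (egress port, outgoing label)
      102: ("port-7", 310),
  }

  def forward(label, payload):
      port, out_label = LABEL_TABLE[label]  # the router's entire job
      return port, out_label, payload

  print(forward(101, b"packet"))  # -> ('port-3', 205, b'packet')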


Disclaimer: I worked on this technology.

I agree, the article could be improved.

In a nutshell, the Espresso system allows (logically) centralized, fine-grained control of how bits leave Google's network. Specifically, each server has a routing table tailored for that server, which allows it to select a specific port on a specific peering router for a given packet to exit through. The routing tables are updated very often and fast (e.g. to remove references to links that went down).

This makes it very easy to move bits around. You want to shift 42.2% of link X traffic to link Y? Just update some routing tables in the end hosts. It also allows different applications sending to the same destination to use different exit links (e.g. low latency - exit somewhere close; high bandwidth - exit where we have bandwidth). Bonus benefit: the router can be simpler, since it doesn't need to do any routing anymore; it just needs to follow the instructions on each packet about which port to forward it to.
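
As a toy illustration of the "shift 42.2%" idea (the prefix, router names, and weights below are invented, and the real tables are of course far more involved):

  import random

  # Per-host egress table, pushed from a central control plane.
  # Each entry: (peering router, port) -> fraction of flows to send there.
  EGRESS_TABLE = {
      "203.0.113.0/24": [
          (("router-x", "port-1"), 0.578),  # keep 57.8% on link X
          (("router-y", "port-4"), 0.422),  # shift 42.2% to link Y
      ],
  }

  def pick_exit(prefix):
      # The host picks the exit per flow; routers just obey the choice.
      exits = EGRESS_TABLE[prefix]
      return random.choices([e for e, _ in exits],
                            weights=[w for _, w in exits])[0]

  print(pick_exit("203.0.113.0/24"))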

Does that make it clearer? Ask me anything.


I understand as much as is to be expected after looking at someone else's (Google's) huge software stack in an area that is not my specialization (networking). My only tip is to not actually worry about grasping everything at first and just start to build up lists of facts and associations (e.g. "Espresso is a networking stack to do routing over the public Internet", "Espresso pulls the routing intelligence outside of individual routers into a server pool"). With time and after studying more of the conceptual background, everything starts to make sense typically.

There are definitely some Googlisms or something being thrown around though. Can anyone tell me what a "hypercaler" is?


> "hypercaler"

Probably a typo for https://www.hyperscalers.com/


It's extremely high level and vague presumably by intention. If you don't have a grounding in networking, BGP, traffic engineering (MPLS, SR, related concepts), you won't be able to make out much about it. If you do, you'll just infer that they're doing a form of "egress peer engineering". On top of that it sounds like they're potentially using whitebox switches at the edge and terminating BGP peering sessions on servers which are programming the forwarding state on these white boxes.


I have little to no experience with BGP and routing. It seems they cloudified the routing layer by making their machines part of some 'abstract router cloud' that can be reconfigured at will (centrally).


> But running a fast, efficient, hyperscale network for internal datacenters is not sufficient for a good user experience

It will never be sufficient. A good backbone infrastructure doesn't compensate for the fact that the majority of users don't have ISP choices, especially for high-speed fixed/mobile networks.


Hence, Google Fiber and Project Loon.


"one out of four bytes that are delivered to end users across the Internet originate from Google"

Such a mind blowing statement. Wonder when (if) they'll hit one-in-three bytes.


Facebook or Amazon may have a say in that; a lot of those 1-in-4 bytes are video from YouTube.

That said, those bytes still need to be delivered, and Google really doesn't have peers in this stuff, apart from maybe Facebook.


YouTube doubled in hours of video watched last year, going from 500 million to 1 billion hours daily. FB's total video is 100 million hours. So Google added 5x FB's total. Netflix was at about 128 million hours daily.

So Google should get to 1 out of 3 pretty quickly.


YouTube has some really good content creators, and apparently they are getting paid. I've finally subscribed to a few channels that I look forward to seeing new releases from (CGP Grey, YSAC).

My 7yo son is obsessed with the whole Let's Play phenomenon and other associated Minecraft content. I've looked up some of these people on Social Blade[1][2], and some of them are likely making millions, but I can't fault the system. While the content of some isn't exceptional, it is professional, and some of these people keep up that pace with multiple content releases each day.

1: https://socialblade.com/youtube/user/thediamondminecart

2: https://socialblade.com/youtube/user/matthiasiam


Content creators are mastering chopping up content into separate videos with enough continuity and hooks to keep people clicking next video, next video, next video...

It's quite scary actually what the current and future generations are being conditioned to do.


Have you heard of soap operas?


I agree the parent poster was being dramatic. Soap operas, however, make you wait until the following day to see what happened.


How is this different from a YouTuber? They (at least the ones I have seen) release their videos daily (or less often) as well, and sometimes with different content on different days. So you might even have to wait two days or even a week to get the new episode of the series you were watching.

It's true that you can binge on a series if you are new to it (YouTube or otherwise), but if you are up to date that is no longer the case.


That's a good point. I find that there is an almost limitless supply of entertainment online these days, as opposed to 5 channels and 6 Disney VHS tapes in my youth.

I don't think I could have resisted the compulsion to be on YouTube for all of my free time. I am lucky that there is a ton of social pressure on me to spend my time doing other activities.

It could be a personal problem, unique to me and a small number of other people.


Netflix last year was reported at 37% in NA, so I suspect it will continue to be a contender as it gets more traction globally.

http://appleinsider.com/articles/16/01/20/netflix-boasts-37-...

'peak download internet traffic' whatever that means.


The content may be Netflix, but it's delivered over CDNs, and much of that infrastructure isn't operated by Netflix. That's what the "originate" in the original statement is likely implying. Content ownership and access vs content delivery at a network level.


We (Netflix) operate our own CDN. See https://openconnect.netflix.com/en/


Yes, but my understanding is that quite a bit of it is still served through partner CDN services, is it not? I was aware that Netflix served at least some of the content themselves (and growing, I assume), so was attempting to imply that the 37% noted for content access for Netflix may not equate exactly to 37% network delivery for Netflix, as they measure different things. I didn't mean to imply none of that should be attributed to Netflix's own network.


No, we serve 100% of video traffic from our CDN.


Ah, that seems to be fairly new. I found info that it was over 90% a little over a year ago[1], but I wasn't aware it was that high at that time either. Congratulations on getting everything self served!

1: https://media.netflix.com/en/company-blog/how-netflix-works-...


We've been 100% for video since before I joined over 2 years ago.

I think you misread that article with the 90%. That is talking about the proximity of Open Connect CDN nodes to customers, and how much traffic is served from "directly connected" Open Connect CDN nodes, and how much traffic is ... less than directly connected OpenConnect CDN nodes.. (sorry, I'm a kernel guy, not really versed in WAN stuff).


Traffic during peak hours. Peak demand is what really matters for infrastructure.


Need to be able to store data files as YouTube videos... e.g. ISO images in YouTube vids... allowing a view of a YouTube vid to stream a download. If everything is prioritized for video, with YouTube/Google ensuring playback, then file transfers could be massive and fast even if net neutrality BS occurs.


At least for Google Photos, they don't usually give you the exact same bytes back. (At a very basic level, that protects against some attacks and also against your scheme.)

Of course, you could add enough coding theory to make it work.


Good point. I'd be more interested in connection stats than bandwidth, that would far more fun to read.


What's mind-blowing is that despite all that, they're still at the mercy of Comcast, Verizon, and AT&T.


That is a very US-centric view of the world. Google operates far beyond Comcast, Verizon, and AT&T.


People in the US have a disproportionate amount of money, so serving them ads provides Google disproportionate value.

Could they exist without the US market? Maybe, I don't know. But I can't imagine they want to go without it.


Especially when you think of services like Netflix, Spotify, and online games, which deliver lots of bytes that definitely don't originate from Google. Without these services, this must be closer to 1/2.


At least in the case of Spotify, I expect a large (and growing) chunk of their traffic to come from Google infrastructure: https://news.spotify.com/us/2016/02/23/announcing-spotify-in...


> Wonder when (if) they'll hit one-in-three bytes.

I guess if Netflix ever switched from AWS to Google Cloud Platform...


99% of Netflix bytes are served from Netflix's own CDN plus a cadre of third-party CDNs. The bytes to/from AWS aren't trivial by themselves, but they're relatively negligible compared to video traffic.


There was a guy on Australian radio today saying that international traffic from Australia and NZ has decreased by about 70% over the past few years, thanks to the ever-increasing usage of CDNs putting data, in most cases, within the ISP's network. It was something interesting I hadn't thought of.

He also mentioned that in NZ miserly data caps are really not implemented any more, since people just aren't using expensive transit as much (can any NZ people confirm?).

Of course, you wouldn't know about any of these savings by following the pricing of internet services in Australia. And the government seems intent on making anyone who cares about even basic privacy tunnel their entire connection outside Australia (not to mention content providers and their geoblocking rules).


Bandwidth is obviously not an ISP's only cost, and even with an internal CDN, the cache still has to get its data from somewhere. For example, in the case of GGC (Google Global Cache), for each 1 Gbps of output, about 100 Mbps on average is pulled from the internet. So someone serving 10 Gbps of GGC traffic (not that high, really) is pulling about 1 Gbps from the internet. A large stack of GGC servers also needs power and DC rack space, which have monthly costs. Per the NDA with Google, you can't actually sell Google CDN bandwidth separately from your internet bandwidth (though many do). Also, because of this large volume of traffic, depending on your setup you may have to upgrade your internal network capacity, all the way to the last mile, for a decent user experience. You also need to upgrade your fibre network - if you were previously renting capacity (100 Mbps) from your distribution core providers, as small, medium, and often large ISPs do, you now have to either increase that capacity (which can be very expensive depending on which part of the world you are in) or rent whole fibre core capacity.

People often mistakenly assume that the main cost for ISPs is bandwidth. But network equipment, constant upgrading/replacement, an army of support staff (both field and phone), and last-mile connectivity are the major costs. Bandwidth itself is a small part of the cost.

Getting the Akamai and Netflix CDNs has similar issues, but they are harder to qualify for than GGC cache hardware.

Yes, all these CDN providers give us hardware and mostly excellent support in setting it up - in most cases, it's set-and-forget. But for them it's a one-time cost, while for ISPs it's a monthly recurring cost that we can't dump on users without losing customers. We can't charge the CDN providers; we just have to silently bear the cost and hope that in 15 years, if we have enough customers, we might be profitable someday.

Everyone loves to hate ISPs, and I am sure a lot of that has some justification. But sometimes the cost and pain are real. That's why there is so little competition in this sector: it involves a huge amount of upfront and recurring investment and a really long wait before you can hope to finally make some money. Then you have brain-dead regulations from the government (not in all countries, of course) and aggressive competition from well-connected incumbents with deep pockets.

There is a reason Google Fiber hasn't rolled out all over the country: even with all their cash, network expertise, and connections throughout the government, what a daunting task it would be for them to ever be profitable. Ad sales can only subsidize your last-mile cost for so long.


Most NZ ISPs have an unlimited-data ADSL plan for around $95, and lower-cap plans for cheaper. VDSL is a bit more, and fibre a bit more again, topping out at around $130 for 1000/500.


Right, I expect that the Netflix traffic numbers are for the last mile only (CDN to customer). Google's numbers should be from a 1e100.net server to the customer across multiple networks (unless YouTube has some localized CDNs, which would complicate the math).


Google's CDN is huge and extensive. See https://peering.google.com/#/infrastructure


Both Netflix and YouTube have appliances that they provide to ISPs to put inside their networks, serving as extremely local CDNs.


Google must be broken up.


Somewhat related: Google's efforts to speed up TCP.

"BBR: Congestion-Based Congestion Control" http://queue.acm.org/detail.cfm?id=3022184


I guess this planetary naming convention is part of a tie-in with the notable Pluto Switch:

https://www.wired.com/2013/03/big-switch-indigo-switch_light...

https://www.wired.com/2012/09/pluto-switch/


Can someone ELI5 the difference between this and https://azure.microsoft.com/en-us/services/expressroute/? Is the technology principle the same?


Azure ExpressRoute = AWS Direct Connect = Google Cloud Interconnect

All 3 are ways to connect your private datacenter or on-premise location (maybe your corporate office) directly to the cloud provider's network over a fast private link, like an industrial-sized VPN. This way you can access your cloud servers/services over this private connection rather than the public internet. Lots of companies do this if they have sensitive data or have their own "cloud" system running VMWare or something and want to augment with public providers.

This article is discussing lower-level infrastructure describing Google's global network components, from load-balancing, routing, internal datacenter connections to servers, etc.


Very cool, thank you for explaining. Is VPN the primary purpose, or could it in theory be extended to work as software routers (as opposed to physical switches your traffic goes through) for all internet traffic once connected?

Let me know if I'm way off base in my understanding.


Well VPN = virtual private network, and that's exactly what it's creating. What you use it for is up to you.

You can access all Google Cloud services over this link, but you can also change your routes to point all internet traffic over it too (you'll need some cloud VMs with public IPs to be the exit nodes). Not sure what you'd gain from doing that, though.


Thanks for explaining.

Do you know if such a VPN is fundamentally different from something like OpenVPN, which can also be used site-to-site?


Functionally it's the same outcome; however, these are managed solutions that the cloud provider (usually along with a networking partner) sets up and maintains. This leads to much better reliability and speed, and it's also not running over the public internet like a typical VPN, so it's better for handling sensitive data access.


ExpressRoute is just Azure's version of AWS VPC. They're complicated VPNs for cloud services.

Espresso is a system for controlling BGP peering using application-fed metrics. It's basically complicated source routing.


Thanks for the response.

Couldn't ExpressRoute (I'm not familiar with AWS VPC) in theory be configured at multiple global locations to handle internet traffic?


>Amin Vahdat: Yup, pretty much traffic directors. Absolutely.

This quote stands out for me and makes me uneasy.


I don't know, it feels like a massive waste of resources, as if Google is doing it simply because it can. It's probably much cheaper for everyone else to handle latency/throughput problems on the client side and at the application level, sticking to traditional networking but not relying on it for quality. Even in the web browser we can already send all kinds of asynchronous requests to multiple servers in multiple datacenters, choosing the fastest response and making all kinds of decisions about where to send requests dynamically in real time.
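
For example, a rough sketch of that "fastest response wins" pattern on the client side (the URLs are placeholders, and aiohttp is just one choice of async HTTP client):

  import asyncio
  import aiohttp  # third-party async HTTP client; any equivalent works

  REPLICAS = [  # hypothetical mirrors of the same service
      "https://us-east.example.com/data",
      "https://eu-west.example.com/data",
  ]

  async def fetch(session, url):
      async with session.get(url) as resp:
          return await resp.text()

  async def fastest_response():
      async with aiohttp.ClientSession() as session:
          tasks = [asyncio.create_task(fetch(session, u)) for u in REPLICAS]
          done, pending = await asyncio.wait(
              tasks, return_when=asyncio.FIRST_COMPLETED)
          for task in pending:
              task.cancel()  # discard the slower replicas
          return next(iter(done)).result()

  print(asyncio.run(fastest_response()))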

And while I agree about overcomplicated routers and box-centric thinking in computer networks, it's pretty much impossible to change things because of the monopolistic nature of the ISP industry. They are very far from competing on the levels of quality where SDN could matter.


From the description, it sounds like the routers have been reduced to simpler muxers, where a small tag added to the packet determines which port it goes over, instead of trying to make a complicated decision all in one focal point.

This has pushed the decision back from the routers to the servers, where it scales with the number of hosts handling connections and can additionally be determined per /session/ instead of per packet. That state is held where sessions already need to be stored anyway, instead of being duplicated in a router, where it's entirely a burden.
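
In code-shaped terms, the division of labor might look like this (entirely illustrative; the tags, ports, and session ids are made up):

  # Illustrative split: the server picks an exit tag once per session;
  # the "router" then does a bare tag -> port lookup per packet.
  TAG_TO_PORT = {7: "port-2", 9: "port-5"}  # installed by the control plane
  SESSION_TAG = {}  # session id -> tag, chosen at connection setup

  def open_session(session_id, tag):
      # Decision made server-side, next to state the server keeps anyway.
      SESSION_TAG[session_id] = tag

  def router_forward(packet):
      # No policy in the router -- it is just a mux.
      return TAG_TO_PORT[packet["tag"]]

  open_session("sess-42", tag=7)
  print(router_forward({"tag": SESSION_TAG["sess-42"]}))  # -> port-2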


The speedup from traditional MPLS already comes from the simple next-hop-only lookup required of the router. There's no complicated decision-making going on. Something central still manages the labels - that's the complexity - but that's not new either.

Google still buys and uses a ton of traditional mega- (tera-) high-bandwidth networking gear, all of which supports regular label switching. But that doesn't sound cool and exciting, so they always play it down. They can spin it however they like; network hardware vendors don't care, because they are still selling them their boxes.


You really do not understand what Google is doing or how it works. The article does a poor job explaining. It is more like a virtual circuit-switched network. It enables far lower equipment cost, as there is no need to over-provision. But a key feature is that it makes latency far more deterministic. This was necessary to make Spanner work.


Label switching is a pretty old concept. We had it at Inmos around 1990 and it wasn't new then.


Every vendor is moving to SDN, and hardware is getting more and more generalized, with more functions crammed into smaller die space. Google is just using its massive cash reserves and R&D capability to beat the vendors to the punch. They probably expect network vendors to go this way, and want to eventually sell these new routers as a service to customers the way they did with their search boxes. There's also the little problem of routers facing an ever-bigger internet and not getting any more efficient, so planning for the future means having better-scaling routers.


From the keyboard of a Chromebook Pixel I think I can feel where this is going. I can see a time when ChromeOS works on this internal Google network for most of what it does, running on protocols that are orders of magnitude quicker than regular 'https://' style TCP/IP networking.

There is this danger that these pieces Google works on get joined up, e.g. Android apps on your Chromebook. With Google owning the network from the server, 'encapsulated' over public networks, then onto Google's own home router and on to Google gadgets like Chromebooks, there is a high likelihood that some glorified 'SPDY' that hooks in at the OS level is going to happen. So individual pieces of the puzzle may not have any obvious benefit on their own; to take a train analogy, who would build a railway through Utah? On its own it might not make sense, but as part of a transcontinental railroad it was an important piece of the puzzle to get done. With Google there is some of this shrewd aspect, with added 'moonshot' thrown in for good measure.


It's already happening with QUIC. Google is backporting as many improvements as they can into the traditional network stack though, like HTTP/2, BBR, and TLS improvements.



