Hacker News
Comcast and T-Mobile upgrade everyone to unlimited data for next 60 days (arstechnica.com)
185 points by Elof on March 13, 2020 | 71 comments



It's too bad that Comcast chose to offer inferior upload speeds: about 6Mbps on standard plans in my region, no matter whether the plan is 100Mbps down or 1,000Mbps down. When we're talking about work from home, online learning, and video conferencing, upload speed matters. Comcast tries to bury the upload number; I'm kind of surprised to see it printed in this article, and when I realized the reality I thought there must be some mistake. There is no option for a higher upload speed at less than $300/mo, which pays for full fiber. It's great to have an option, but too bad Comcast chose to make their service inflexibly inferior in this way.


Just got fiber deployed to my small city here in France, and I now pay 40 euros a month for 900 down / 500 up, with a free phone line and IPTV access. We're barely 8,000 people, surrounded by farmers' fields for kilometers around.

And I'm still jealous of Eastern European countries like Romania, where they get better service for cheaper.

I don't know how such a highly developed country as the US can tolerate those ridiculous prices, bandwidth limits, and data caps.


This is a bit of an old trope: "size matters". I'm not defending the American ISP abominations, like interfering with municipal fiber buildouts and denying pole access so new competitors can't use public infrastructure to build a competing service without unreasonable barriers to entry. All this anti-competitive crap... Right on, I'm with you.

However... America is ENORMOUS compared to any European state, and mostly unoccupied. And our government only recognizes the decaying telco system as a public utility that we all need access to...

Nevermind. I don't understand either. It's just fully stupid. The size will make it cost more... Not recognizing it's worth it... For that there are no excuses.


Europe is over 3x as population dense as the USA. Europe also doesn't have American-style suburbs, so even small European towns are quite dense within their city limits. America is too spread out for the capex of fiber to be worth it outside of major cities.


That's kind of standard with DOCSIS. You can't have a lot of upload speed because of the limited RF bandwidth available for upload channels.


The balance of upload and download is configurable.


Yes, by lowering the download speed, at least on DOCSIS 3.0.

DOCSIS 3.0 supports up to 8 upload channels for a total speed of 216 Mbps (on 6.4 MHz per channel). That's shared for all users on the CMTS, which could be a lot, depending on the setup. For comparison, on EuroDOCSIS 3.0, with 32 channels the total download speed is 1600 Mbps for all users, and on DOCSIS 3.0 it's 1216 Mbps.

DOCSIS 3.1 allows for multiple OFDMA upload channels, which can provide up to 2 Gbps of shared upload capacity, and DOCSIS 4.0 allows for symmetric 10/10 Gbps of shared capacity with a flexible DS:US split (you can have the same number of channels in each direction). It uses extended RF spectrum on the cable and also requires new amplifiers and splitters.
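For anyone who wants to sanity-check those totals, here's a quick back-of-the-envelope in Python. The per-channel rates are the commonly quoted usable figures, so treat them as approximations, not spec-exact values:

```python
# Rough DOCSIS 3.0 capacity math. Per-channel rates below are the
# commonly quoted usable figures; real throughput varies with
# modulation and protocol overhead.
US_CHANNELS = 8        # max bonded upstream channels, 6.4 MHz each
US_RATE = 27           # ~27 Mbps usable per upstream channel
DS_CHANNELS = 32       # max bonded downstream channels
DS_RATE_US = 38        # ~38 Mbps per 6 MHz channel (US DOCSIS)
DS_RATE_EU = 50        # ~50 Mbps per 8 MHz channel (EuroDOCSIS)

print(US_CHANNELS * US_RATE)       # 216 Mbps shared upstream
print(DS_CHANNELS * DS_RATE_US)    # 1216 Mbps shared downstream (US)
print(DS_CHANNELS * DS_RATE_EU)    # 1600 Mbps shared downstream (EU)
```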


> Yes, by lowering the download speed, at least on DOCSIS 3.0.

Which would imply that they should be able to offer a 100/100 connection for the same price as a 200/6 connection and have 6Mbps to spare.


No, the maximum upload is fixed and independent of the download speed. I meant they can offer 6/6 Mbps, or 12/6 Mbps, for a 1:1 or 2:1 ratio. Basically as a joke.


Even DOCSIS 3.0 supports up to 124 Mbps upload speeds using 4 channels:

https://www.multicominc.com/training/technical-resources/doc...

(Other sources quote it as 200Mbps? But either is a whole lot more than 6.)


As I have said here: https://news.ycombinator.com/item?id=22572536 , DOCSIS 3.0 supports 216 Mbps usable upload capacity with 8 upstream channels, halved to 108 Mbps with 4 upstream channels. That speed is, however, shared with ALL users on a CMTS, which could be dozens or a hundred, since the coaxial infrastructure is based on a shared medium where all devices connect to one physical cable that leads to the CMTS and share the available RF bandwidth.

You could sell 1 user 100 Mbps upload with 4 channels, and have one user on a CMTS. You can also sell 6 Mbps upload to 36 users, guaranteeing them 100% of the capacity, or to 72 users, guaranteeing them 50% of the capacity.
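That guaranteed-share arithmetic generalizes. A tiny helper (hypothetical, just to illustrate the numbers above):

```python
def guaranteed_share(shared_mbps, users, plan_mbps):
    """Fraction of the sold speed each user is guaranteed if every
    user on the shared upstream transmits at once."""
    return shared_mbps / (users * plan_mbps)

print(guaranteed_share(216, 36, 6))   # 1.0 -> 100% of 6 Mbps
print(guaranteed_share(216, 72, 6))   # 0.5 -> 50% of 6 Mbps
```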

It is not feasible to sell packages with more than 50 Mbps upload on most coaxial networks (even HFC, if many users share a node, e.g. more than one apartment building), because users will not be able to use the full capacity: speeds will inevitably drop to sub-10 Mbps levels, and users will be unsatisfied, since they are paying for and expecting higher speeds.


The maximum download speed for DOCSIS 3.0 is 1216 Mbps, and that's only if you use 32 channels. If you were selling 200/6 connections to 72 users then you would be only guaranteeing them each ~8% of the download capacity (and yet 50% of the upload capacity, using 8 upstream channels).

To have the same level of oversubscription for upload as download, you would give them all 35Mbps upload with the 200Mbps download. But you don't even have to do that, because who says they all need the same plan? Sell a quarter of them 100Mbps up connections and the others still get 17Mbps up.

There's no way to justify only 6 up when it can be 200 down. (And if they're offering 200Mbps down without that much oversubscription by using DOCSIS 3.1 or 4.0, there's even less excuse for 6Mbps up.)
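To make the arithmetic explicit (numbers from the thread, assuming 1216 Mbps down / 216 Mbps up shared on DOCSIS 3.0):

```python
ds_capacity, us_capacity = 1216, 216   # Mbps shared (DOCSIS 3.0)
users, ds_plan = 72, 200               # 72 users on 200 Mbps plans

ds_oversub = users * ds_plan / ds_capacity        # download oversubscription
us_plan_matched = us_capacity * ds_oversub / users
print(round(ds_oversub, 1))        # 11.8 -> ~11.8x oversubscribed
print(round(us_plan_matched, 1))   # 35.5 -> Mbps up at the same ratio
```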


There is only 37 MHz available for upstream, from 5 MHz to 42 MHz (the basic upstream frequency allocation). There is 890 MHz available for downstream, from 112 MHz to 1002 MHz.

https://www.excentis.com/blog/differences-between-us-docsis-...

Therefore, given that there is 24x more bandwidth in the downstream direction compared to upstream, it would make sense to offer 20:1 or higher DS:US speed ratios to customers. Of course, if your HFC network is "good", i.e. has fewer users, you can sell higher speeds, or just grossly oversell the service and "hope" not everyone uses the upload at the same time.

Also, a correction: any given cable modem can support up to 32 bonded downstream channels for ~1200 Mbps per CPE, while the cable itself has capacity for ~130 channels, which is theoretically ~4900 Mbps of usable capacity, if there are no TV channels on the cable. However, there is still about ~200 Mbps of upload bandwidth on the cable for all customers, which is limited by the number of channels in the return path RF range. You can assign multiple customers to a single channel, but you can also segment the channels so that all customers get some channels out of the available range, but not exactly the same ones, which would increase the overall capacity of the network.


> Therefore, given that there is 24x more bandwidth in the downstream direction compared to upstream, it would make sense to offer 20:1 or higher ratios of DS:US speed to customers.

And yet 200/6 is >33:1, much less 1000/6 at 167:1.

Moreover, to even get to 24x you're using the full theoretical allocation for downstream but not for upstream, which would be 40MHz from 5-85MHz (extended frequency allocation) according to your link. Which would then be ~11x, implying (on average) 200/18 and 1000/90.
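The spectrum ratios work out like this (frequency band edges taken from the linked article):

```python
ds_mhz = 1002 - 112   # 890 MHz of downstream spectrum
us_basic = 42 - 5     # 37 MHz upstream, basic allocation
us_ext = 85 - 5       # 80 MHz upstream, extended allocation

print(ds_mhz / us_basic)          # ~24x with the basic split
print(ds_mhz / us_ext)            # ~11.1x with the extended split
print(200 / (ds_mhz / us_ext))    # ~18 Mbps up for a 200 Mbps plan
print(1000 / (ds_mhz / us_ext))   # ~90 Mbps up for a 1000 Mbps plan
```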

Also, who decided it should be impossible to allocate more channels to upload? I realize that's more a question of "who designed this crap" than "why aren't they doing this now" but when the parties responsible are still Comcast et al, that makes it hard to want to give them a pass for it. (Though credit where credit is due, DOCSIS 4.0 supports symmetrical uploads.)


Not if you already have a legacy cable television network it isn’t.


Easy for them to do because it costs them nothing. VPNs use so little bandwidth. What a joke to get some good PR, and it's sad that they're exploiting this pandemic.


Would you prefer that the companies involved did nothing instead of taking this costless action?


Yes, as maybe it'd draw attention to the absurdity of the limits vs advertised speeds.

It’s similar to the security theater at airports. I personally dislike the entire TSA precheck concept specifically because it works so well. The lack of pain for frequent travelers has kept the entire system from being replaced with what’s now the TSA precheck process.


“Yes” is a purity test answer.


They did it for self-preservation: to avoid a repeat of the situation where firefighters ran into Verizon's limits, which triggered bad PR.


We should all be thankful for these scraps we have been generously given.


No, I don't think you understand how VPNs work. They're as fast as the slowest link, and the slowest processor of the VPN client and the VPN server.


I think the point they were trying to make (and it's one I share, based on talking to operators at a few of the largest broadband networks in the US about their usage) is that the WFH crowd is not moving the needle in terms of driving traffic. Except for the 10th, when Call of Duty updates did congest networks starting in the morning, the peak is still post-5PM. Now, with schools being closed, that can certainly change and lead to what many broadband networks see during spring break (the peak coming sooner).


Seems like there might be increases in total traffic [1] from more streaming. Not sure if Netflix has released numbers, though that wouldn't be as much of a burden to ISPs, given their colocated 'Open Connect' caches.

https://blog.cloudflare.com/covid-19-impacts-on-internet-tra...


What large B2C business isn't exploiting this pandemic?


Any word on people already paying the $50 for no data-cap on Comcast? Should I remove it for 2 months or did they say they'd waive it?


I'm sure they'll gladly take your money. I got a notification that I was approaching my limit just yesterday.


but... but... don't they enforce this data cap to protect their infrastructure? I don't get it! What's gonna happen to their infrastructure if they do that??!!


A lot of this going on in Spain right now. Higher data caps for mobile connections (landlines have no data caps here), and some ISPs that have a TV service have opened it up for free.


Imagine what the world would be like if all hotspots were open all the time.



Less safe is less true than it was back when that answer was written: a LOT of traffic has gone TLS-only and the operating systems most people use are secure by default. Yes, there’s always a chance of an exploit but these days I’d be more worried about what links people are clicking on rather than where they’re sitting.

Congestion is also an interesting challenge: in some cases that's a problem (imagine having an AP next to a school), but since the hardware limits have risen so substantially, we're probably at a threshold where geographic separation is enough to avoid that problem in a large fraction of places. The public library has open WiFi, but there are only so many people who are going to camp out there.


> in some cases that’s a problem (imagine having an AP next to a school)

Really it's the opposite. You put an AP next to a school and it gets the whole school's traffic off the cell tower and onto a local high bandwidth coax/fiber network. You save a ton of wireless spectrum by having low power wifi APs everywhere instead of needing high power cell towers. (And obviously in that case the school itself would be the best candidate to be operating the open APs instead of or in addition to whoever lives next door.)

You don't really get a tragedy of the commons either, because the range is so short. You could go to any given place and find open wireless, but the best way to get a good signal in your own home is to have your own AP. The exception would be high density housing where you're actually close enough to share, but then you do just that -- have all the neighbors chip in to get a really fast connection and share it. You can keep it open to the public as long as the other neighbors pay their share, which is cheaper for each of them than paying for a whole connection themselves as would happen if they defect and cause you to stop offering it. Or, more realistically, in those situations HOAs or landlords could install the AP and pass on the cost as fees/rent.


> Really it's the opposite. You put an AP next to a school and it gets the whole school's traffic off the cell tower and onto a local high bandwidth coax/fiber network. You save a ton of wireless spectrum by having low power wifi APs everywhere instead of needing high power cell towers.

Note that I was talking about a single AP - not a planned large rollout - and just the point that there are a few high-density applications where you actually have to worry about the number of simultaneous users.


Hotspot range is small, and competing networks slow things down. You do not get a tragedy of the commons by making the networks open.

And "open" and "unencrypted" being the same thing is a historical artifact. You can have each user encrypted with a separate key, and WPA3 in fact does this.


WPA3 adoption is still limited, and transition mode will be a necessity on any public access network for many years. “Historical artefact”, eh, like COBOL.

The commons in question is not the last mile.


Many things that are in current use and without replacements are historical artifacts, too.

The prompt was not about using the exact software we already have, it was about a world where things were mildly different. In that world, with open hotspots given priority, I would expect the security problem to have been solved long ago.


The hypothetical did not specify that we also live in a fantasy wonderland where any problem one cares to raise is magically already solved.


It takes a fantasy wonderland for increased usage to mean a basic feature might get coded sooner? Or to have a few more good crypto people working on the standard?

This is a feature that already exists and will be in most devices relatively soon. Using it in a hypothetical is not magic. Especially because we could sidestep your whining just by setting the hypothetical in the year 2023, because the exact timing wasn't the point of it!


Oh now it's a future fantasy wonderland where everyone's a genius!

This isn't even a hypothetical anymore, it's just a load of techbro fantasising.



Is xfinitywifi open yet? I'm still getting a payment prompt.


And I'm still seeing a big fat XXX/1024GB usage meter when I log into my AT&T account.


Welcome, fellow Americans, to the club of normal citizens who don't even know the words "data cap". I wish your stay in this club lasts longer than 2 months. Personally, I've been a member for over two decades (as has my entire country, btw).


Good for you. We know the government-sponsored ISP monopolies/duopolies here suck. It's incredibly naive to think that this fact is lost on us.


Obligatory 'we've had unlimited 4g data for $8/month in India since 2016' comment.

Feels good to be first world at something


Those comments are gold:

>AT&T was willing to tap into their Strategic Packet Reserve yesterday and now Comcast has to follow suit. Let those extremely rare and finite in number packets flow!


[flagged]


Chill and maybe take it slower so you understand things better. The previous post was simply quoting a (rather amusing) joke that appeared in the comments.


I hope I don't sound stupid but could you explain the joke?

Edit: a word


The joke is a reference to the Strategic Petroleum Reserve in the US:

"The Strategic Petroleum Reserve (SPR) is an emergency fuel storage of petroleum maintained underground in Louisiana and Texas by the United States Department of Energy (DOE). It is the largest emergency supply in the world, with the capacity to hold up to 727 million barrels (115,600,000 m3)."[1]

The comment is satirically suggesting the telecom industry has similar reserves for such emergencies. These carriers are notoriously stingy and known for their data caps.

[1] https://en.wikipedia.org/wiki/Strategic_Petroleum_Reserve_(U...


While the US's Strategic Petroleum Reserve is the most well known, Canada's strategic maple syrup reserve is way more amusing and fascinating, especially after it suffered a massive theft.

It was a fluid situation but they caught the sticky fingered thieves and delivered sweet sweet justice.

Ref: https://business.time.com/2012/12/24/why-does-canada-have-a-...


That's not a national reserve though as it's maintained by and only serves the interests of a cartel of private producers in Quebec.



The joke is implying that the reason ISPs employ data caps is because packets are a finite resource.

They're not, and the practice makes about as much sense as buying your internet by the minute.


Or the opposite, that available capacity is a limited resource, which "necessitates" data caps to maintain performance during peak periods.

It's still illogical, just not as illogical as claiming a communication medium offers unlimited bandwidth.


AT&T's network is mostly saturated, particularly in urban areas, making 'unlimited for all' a worse proposition.


That wasn’t the joke... this was about data caps on home internet.


[flagged]


So sanctimonious. The most. More than any other.


They had me at 'systems'.


A lot of people moaning about how the data caps are unnecessary don't seem to understand how bandwidth works. If you get fiber or high-speed cable to your home, yes, you may have a buttload of potential bandwidth. But if everyone in a large metro area tried to use all 1Gbps of their bandwidth at once, connections for many of them would crawl to dial-up modem speeds.

There simply isn't enough backbone capacity for everyone to use all their potential bandwidth all the time. And beyond simply not having enough raw capacity, the closer you get to capacity, the more knock-on effects from buffer overruns, collisions, retries, latency, and all sorts of other shit start to slow connections further. To keep speeds up near capacity, you are forced to use traffic shaping to artificially squeeze capacity, to keep things from seeming dog-slow and falling into an unusable tailspin.

The caps are there to keep people within practical, usable limits, to prevent knock-on effects on edges of the network more vulnerable to bandwidth problems. To reduce that possibility they'd have to invest more money in unprofitable sections of infrastructure. Charging you more for bandwidth is effectively a way for them to not invest, because if they did invest, they would subsequently charge you more money to cover it. The caps basically artificially lower your own bill by getting you to choose to use less data. It's the choice of "do we piss them off with higher prices or worse service?"

Do they want to charge you more? Of course. Do they know most people won't use more than 1TB of data per month? Of course; they have trending usage metrics, they do a calculation, and this is what works to balance what people want with what they need, and how the provider can afford to pay for maintaining it all.

If they weren't absolutely enormous multi-headed-hydra conglomerates, service would be cheaper, better quality, and faster. But they're enormous, and as such they are inefficient, and as such, very expensive for what you get. If you want better service, lobby your local government to make municipal internet legal, because local private providers will never be able to compete.


I don't think you understand what oversubscription means, that tiered network topologies' links increase in capacity the nearer you get to the IX, or that someone, usually the large ISP itself, often owns the network all the way to the IX. Cable internet providers in the US get away with extreme oversubscription in certain low-average-bandwidth areas and don't publish these details, as is required in other countries.


>I don't think you understand what oversubscription means, tiered network topologies' links increase in performance nearer to the IX, and that someone, usually the large ISP themselves, often owns the network all the way to the IX.

Getting to the IX doesn't mean you're home free, though. Your transit providers are also oversubscribed and links further down the chain could get congested.


In this case, Comcast is a nearly-Tier-1 transit provider that spends almost $0 on transit outside of interconnect/switchport costs. This argument doesn't really apply. In fact, companies pay them for transit to connect/peer because they can't afford to not.

>https://arstechnica.com/tech-policy/2014/05/comcast-is-the-o...

>Comcast argued that it could be considered a Tier 1 itself, as less than one percent of its traffic requires transit.

>As Comcast's market power continued to increase and consumers had less choice, they actually started demanding payments for connectivity. A larger Comcast will be able to demand even greater payments.

As for T-Mobile/ATT, they are Tier 1 providers who again get transit for free. Spectrum is the only resource that is scarce in that scenario.

The same rules don't apply to the larger ISPs. This market is not fair.


Parent comment is talking about link utilization and you’re talking about transit fees. You’re on a different wavelength (pun intended)

Bandwidth is finite. Links can become congested.


A data cap has almost nothing to do with congestion.

Network congestion is a lot like road traffic, in that it requires the participation of others. A data cap is like trying to reduce traffic by saying that each driver can go only so many miles. If all of a person's data usage is at off-peak times, the data cap is completely useless.


You're right, it doesn't. However, that doesn't mean it doesn't achieve its intended goal. If users use less of their internet to avoid hitting the data cap, it stands to reason that congestion would decrease because overall network usage is down. Of course, it's not perfect, as it wrongly penalizes some edge cases: for instance, someone who downloads/uploads terabytes of data, but only during off-peak hours (overnight).

That said, I don't see how the competing solutions are better. You could do QoS, but that gets flak because of net neutrality. You could divide the available bandwidth evenly across all users, but that would penalize infrequent users. A gamer who doesn't use much bandwidth most days, but has to download huge patches every month would get the same speed as someone who watches 4K streams every night.
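For what it's worth, "divide the available bandwidth evenly" is usually implemented as max-min fairness, which doesn't actually penalize infrequent users: anyone asking for less than an equal share gets everything they asked for, and only the heavy users split the remainder. A toy sketch with hypothetical numbers:

```python
def max_min_fair(capacity, demands):
    """Max-min fair split of `capacity` among users with given demands:
    users wanting less than an equal share keep their full demand, and
    the leftover is split evenly among the rest."""
    alloc = {}
    remaining = sorted(demands.items(), key=lambda kv: kv[1])
    while remaining:
        share = capacity / len(remaining)
        user, demand = remaining[0]
        if demand <= share:
            alloc[user] = demand        # satisfied in full
            capacity -= demand
            remaining.pop(0)
        else:
            for user, _ in remaining:   # split what's left evenly
                alloc[user] = capacity / len(remaining)
            break
    return alloc

# A light user, plus a gamer and a streamer who both want 80 Mbps,
# sharing a 100 Mbps link: the light user keeps their full 10,
# and the two heavy users split the remaining 90.
print(max_min_fair(100, {"light": 10, "gamer": 80, "streamer": 80}))
```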


Throttle speeds after you've reached a very high threshold, maybe only during peak hours to avoid unnecessary underutilization. Why is that not one of your options?

This is what T-Mobile does AFAIK (I'm a subscriber but a light user) https://www.t-mobile.com/offers/mydatausage


I like the idea of throttling speeds during peak times only, and only after a certain amount of daily data use. That way neither low-data users nor off-peak users are penalised.


We already have that through congestion control!


This applies for the shared medium of legacy coax, but not the "backbone" in general. Essentially Comcast/Time Warner are camping on the aging asset of the cable TV network, extracting as much as they can until it's obsoleted by fiber (municipal, or competitive, depending on population density).

And yes, if you want to fix this, support municipal fiber. 1Gbps symmetric, no caps, no throttling, low ping. Likely for less than you're paying with the incumbents' ridiculous fees and price-jacking/contract games.


Comcast is fiber to the CMTS, then it's coax the rest of the way (the last mile). In my neighborhood they terminate about 4-6 customers per CMTS. BTW, Comcast offers me the lowest ping; both Verizon and AT&T backhaul to a city 250 mi away.



