But we already have such a slow lane. And it already made the internet less free and less useful.
It's the upload bandwidth.
Weak upload effectively killed peer to peer. File sharing is slower than it could be, and e-mail, chat, blogs… are all in the "Cloud". Very convenient, but also dangerous (insert random EFF or FSF argument here —they all apply).
With a worthy upload bandwidth, all these things could use a server at home, with many advantages for choice, control, privacy… You could argue it's impractical for the average user (and it is), but that's not the problem. If someone tried to sell a simple server with a fantastic UX that hosts e-mail, blog, vlog, social network, and distributed encrypted backup, all out of the box, it would still suck because of the damn bandwidth —and firewalls in some cases. So, this business model is dead in the water, which is why it is still so damned difficult to install one's own mail server.
You want net neutrality? Start with a neutral bandwidth. Stop treating users like consumers, and they may stop acting like consumers. With any luck, it should kill YouTube, Blogger, Facebook, Twitter, Gmail, Skype… except the users will still do what these "services" offer.
In France, the cheapest fibre is gigabit, symmetric. Yet they want to sell us 100 Mb/s download, 10 Mb/s upload. Also, many ISPs are reluctant to give us a decent IPv6 allocation: some hand out a /128 instead of the recommended /48, or just the bare minimum /64. The technical reasons are long gone by now.
The whole debacle may have started for technical reasons, but I suspect now they're just addicted to centralization. Maybe tighter control of the end user leads to more money down the line?
I wholeheartedly disagree - more upload bandwidth would be nice, but it would only lessen my inconveniences, not actually solve them. The real problem is bad UX/software, fueled by underlying protocols that assume relatively stable and authoritative server hosts.
I used to run email/dns/web/etc off of Speakeasy (and college before that, and dialup before that), but I switched to Linode (like 8 years ago) and haven't looked back. It really sucks when your home server has a hardware issue/power outage/changing things around/etc, and you feel like you need to fix it ASAFP lest your emails/etc start getting bounced/etc (yes, smtp is supposed to queue. no, that doesn't alleviate the concern).
What I really want is my home server to act as the primary contact, but when that is down, Linode to serve on my behalf seamlessly, oblivious to message contents. And of course I could set something like that up per-protocol (modulo incoming ports being firewalled, etc), but the more complex a setup one makes for themselves, the more likely things are to just decay over time.
We're not even at the point where having a distributed file store is straightforward. The best I've found is running Unison across multiple hosts/disks, and I still find myself spending way too much time dealing with administration and overcoming its limitations (cycle-intolerant topology, lack of access levels, etc). Anything else I've seen assumes a reliable central host, constant network connection, would need to be babied in different ways, or is just not robust enough to trust.
Meanwhile with these centralized solutions, they just work out of the box. There are occasional or hidden issues like service outages, vendor dependence, lack of flexibility due to arbitrary restrictions, planned obsolescence, anti-features like ads, abdicating your computing to opaque code you don't control, supporting the destruction of the Internet (what prompted this slew of articles? Netflix setting a terrible precedent..), etc. But the effort required to initially get them working is basically nonexistent. I personally refuse to give in and support (hopefully) dead-end centralized technology. But you can't deny that their user experience is quite compelling, especially for people without preexisting sysadmin skills.
I know the UX is terrible right now (as in so unusable I don't even dare touch the server I set up myself). I know centralized solutions work out of the box, while distributed solutions barely work at all.
Of course bandwidth wouldn't solve our problems overnight. But if our ISPs suddenly gave us the bare minimum, meaning symmetric bandwidth and a fixed public /64 IPv6 prefix with no firewall, then at long last we'd have a business model for distributed stuff. It would get more people working on that UX problem, and that would solve the problem.
Eventually.
That said, I don't see how we can demand decent internet connectivity in the first place, since there is no usage to justify it. Looks like a chicken & egg situation.
Sorry for my lead in being needlessly confrontational. I do agree that asymmetric connections aren't a good thing, but disagree on the "chicken" versus "egg" as you put it. I don't think there will be any demand for greater upload bandwidth until using that bandwidth isn't an event that totally swamps your connection and requires manual coordination with other users.
Upload traffic is theoretically able to be precisely prioritized, where a long-term file upload uses any extra bandwidth but adds no latency to other traffic (besides unavoidable pktsize/linerate). But easy/standard configs only provide best-effort shaping, even if you go out of your way to make them discern interactive ssh from uploading ssh. (And never mind download shaping, which really requires application-level involvement to get predictability).
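To put a number on that "unavoidable pktsize/linerate" caveat, here is a rough back-of-the-envelope sketch. The uplink speeds below are just examples: with strict priority on the uplink, the worst extra delay an interactive packet sees is the serialization time of one full-size packet already on the wire.

```typescript
// Worst-case extra latency for prioritized traffic under strict priority:
// one maximum-size packet (~1500 bytes) already being transmitted.
// The uplink speeds are illustrative, not anyone's actual plan.
const MTU_BITS = 1500 * 8;

const uplinks: Array<[string, number]> = [
  ["1 Mbit/s DSL uplink", 1],
  ["10 Mbit/s uplink", 10],
  ["1 Gbit/s fibre uplink", 1000],
];

for (const [name, mbps] of uplinks) {
  const worstCaseMs = MTU_BITS / (mbps * 1000); // bits divided by bits-per-millisecond
  console.log(`${name}: at most ~${worstCaseMs.toFixed(2)} ms added to interactive traffic`);
}
```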
Static IPs could help as a nice crutch, in that they would enable us to start with current server-authoritative protocols and patch/configure them to deal with one-nine uptime. But conditions here are set, and static IPs are not a panacea - I have the option for $6/mo, but don't take it. And I deal with the same fundamental undistributed service problems between my own machines on tinc.
So I don't see much point trying to lobby ISPs for larger upload (esp when it goes against their technical constraints. I do have to wonder how the world would be different if digital computers had been invented before radio, causing p2p to be dominant over broadcast), without the software to make it useful. And I think that software has to be developed with the current conditions and the understanding that most people don't want to have caring for a home server a prominent part of their lives.
(Now, maybe my tune would change if instead of 12M/1M DSL, I were on GigE where my upload throttling would be done by "the cloud" ;))
The way I see it, it's a package. The internet is supposed to be a network of peers. Keeping this property is important for a number of reasons, both technical and political.
To be a peer on the internet, you need to be able to act as a server as well as a client. With current protocols, this means a public static IP (or range thereof), symmetric bandwidth, and no spurious restrictions. The network must also be "clean". No deep packet inspection, no transformation of payload, no NAT, no nothing. Just a dumb pipe that treats packets equally —for some definition of "equal". And if you have difficulty handling a particular kind of data, then just install bigger pipes. Congestion should be rare.
Some people would go even further, arguing that you also need an Autonomous System number. I believe this is less important, though I tend to agree. People should be their own ISP if possible (hint: it's difficult). One way to do it is to set up a non-profit and be a member.
---
Now let's get more technical. The main reason why consumer upload should match consumer download is this:
Worldwide upload = Worldwide Download
(Modulo lost packets and multicast.)
> I don't think there will be any demand for greater upload bandwidth until using that bandwidth isn't an event that totally swamps your connection and requires manual coordination with other users.
Actually there's already a huge demand for upload right now: YouTube. Every time someone watches a video, Google must spend that much upload —without requiring the attention of the account owner. This makes the network unbalanced: lots and lots of data are pouring out of Google's network to the consumer networks, and little ever goes back.
This has economic repercussions. Basically, ISPs connect with each other in two ways. The first is Alice and Bob having a peering agreement: Alice can send data to Bob, Bob can send data to Alice, and no one charges anyone anything. Then you have transit ISPs, which charge for the data you send through them. That would be Eve charging Bob for transporting a packet he wants to send to Alice (when Alice and Bob aren't directly connected).
Normally, an ISP wouldn't charge for any data for which it is the recipient, because that's data it wants, after all. But that's not exactly the case for consumer ISPs, whose purpose is to transport the data to the consumers. So they charge the consumers themselves. Now here's the problem. If bandwidth were evenly distributed, the ISP would send roughly as much data as it receives. Perfect for peering agreements. Instead, they keep receiving more and more data. So they're tempted to act like a transit ISP, and begin to charge for the data they receive. Except that won't fly, since their network is the destination of that data! Which is probably why they try the "middle ground" of merely slowing down data that isn't paid for, destroying any illusion of Net Neutrality in the process.
There is also a technical reason: a truly peer-to-peer network would have data travel shorter distances, instead of, say, going back and forth through the capital even when two neighbours are talking to each other. That means fewer Tbm (terabytes multiplied by meters), which is ultimately cheaper.
---
> I have a static IP option for $6/mo
Your ISP is a crook. Static IPs cost them nothing: most people are connected at all times anyway.
> So I don't see much point trying to lobby ISPs for larger upload (esp when it goes against their technical constraints. […])
It doesn't go against their technical constraints. If it ever does, it is because of constraints they put in place years ago with the advent of this infamously asymmetrical DSL. It's their fault, and the onus is on them to update.
---
Overall, try to step back from your own situation, and watch the big picture. This often yields different answers. (Example: if Joe Random wants more money, a way to get it is to become "more competitive". But if everyone gets more competitive, Joe Random will rank just as before.)
So, regarding the Chicken and egg problem… Sure, if you suddenly had a static IP and smoking fast upload, you may not behave any differently. But if everyone gets that, you may have a market, and it may change things. If anything, it would make overlay networks more practical, and goodness knows what innovation could come out of that.
I'm coming from a similar place and think it very important to preserve the peer-oriented nature of the Internet, so you don't have to convince me. But I disagree on the prescription, because in the long term, economics wears down regulations (and here in the good ole USSA it's short-circuited right from the start!)
I don't see the way forward being based on IP addressing (+dns) as identity, which is ultimately what you're talking about. First, the end to end principle arose out of engineering concerns, and IP does nothing to preserve data opaqueness against a network that wishes to categorize traffic. And given that there is little money in transporting commodity bits, yet some of those bits are quite valuable (work VPN session..), there is an ever-present economic incentive for discrimination.
Referencing UL=DL doesn't really make sense. Even with an ideal buildout of multi-homed homes mesh-connected through each other, there's still going to be a network "core" that has more long-haul bandwidth than the outskirts. If I wish to publish a file to many people, it makes more sense to send that data once to the core, and fan out through there (whether by a server, multicast, or some new method).
My ISP is Sonic.net - I wouldn't call them crooks, and given the competition wouldn't begrudge them an administration fee on a static IP. I said that to point out that it is not even worth $6/mo to me, and combined with their deletion of logs after two weeks, having a fixed address is actually a net-negative from my perspective.
So back to the real topic.. I'm definitely trying to analyze the big picture, and I've come to the conclusion that IP-as-identity is a complete red herring. I don't particularly see how it would encourage overlay networks, when the whole idea of an overlay network is to deprecate the underlying network protocol to layer something better on top. Overlay networks work just fine over dynamic IPs, and only need a few underlying long-lived identities for rendezvous.
The way I see it, the real root of the problem is protocols based on authoritative servers which place undue importance on the reliability of individual hosts, and therefore their network links and administrators. As long as we're reliant on these, then the benefits of locating them closer to the core and having them cared for by a third party is going to outweigh the downsides.
>>> But we already have such a slow lane. And it already made the internet less free and less useful.
I'm pretty sure you can still get ISDN lines from your local telecom provider. I know a long time ago there was talk of bonding ISDN lines together to increase upload and download speeds. This was right around the time DSL was blowing up, so I'm sure it got lost in the wave of DSL hype.
Of course, compared to DSL, GigaFiber, or any of the current technologies, it's downright pathetic.
Bonded ISDN lines are called a PRI (also known as a T1). A normal ISDN BRI consists of one D-channel and two B-channels. A T1 was one D-channel and 23 B-channels. You could also bond multiple T1s together; it was effectively a long-distance serial link that operated off the telephone network.
Technically you could also bond multiple BRIs together, but pretty quickly it made more sense to just provision a PRI with some of the B-channels disabled (also known as a fractional T1).
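For a sense of scale, here's the standard channel arithmetic (assuming the usual North American 23B+D PRI):

```typescript
// Rough ISDN arithmetic: every B-channel is 64 kbit/s.
const B_CHANNEL_KBPS = 64;
const bri = 2 * B_CHANNEL_KBPS;   // basic rate interface: 128 kbit/s
const pri = 23 * B_CHANNEL_KBPS;  // North American PRI/T1: ~1.47 Mbit/s of B-channels
console.log(`BRI: ${bri} kbit/s, PRI: ${pri} kbit/s`);
// Symmetric, but indeed pathetic next to even early DSL download speeds.
```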
I don't think slowing the site down really helps convey the speed difference anyways, because they aren't seeing it apples-to-apples.
Instead, I think you want to load two versions of the site into iframes (or something) and throttle one.
This would also be a good way to show a user how crappy their connection is compared to a _real_ high speed connection: The problem isn't just that they are creating a "fast" lane and a "slow" lane, but the fact that US-based "fast" lanes are actually fairly slow to begin with.
The iframes idea is a really good one because it wouldn't require changes to the web server configuration. The throttle could simply be simulated with delayed loading using JavaScript.
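A minimal sketch of that simulated throttle (the element ids, URL, and delay below are made up for illustration): point both iframes at the same page, but only give the "slow lane" one its src after an artificial wait.

```typescript
// Side-by-side demo: same page in two iframes, one held back by a fake "slow lane" delay.
// Ids, the URL, and the 8-second figure are placeholders, not anything a real site uses.
const fastFrame = document.getElementById("fast-lane") as HTMLIFrameElement;
const slowFrame = document.getElementById("slow-lane") as HTMLIFrameElement;
const pageUrl = "/index.html";

fastFrame.src = pageUrl;                               // loads immediately
setTimeout(() => { slowFrame.src = pageUrl; }, 8000);  // the "throttled" copy
```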
How about something simpler -
Make images load slowly using JS.
A ton of popular consumer sites (facebook, instagram, reddit/imgur) are hugely image-based. A few lines of JS to make them load slowly (perhaps PNG artifacts for bonus points) and you could very effectively get the point across, all while quickly serving a nice large banner.
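Something along these lines would do it (a sketch only; the 1.5 s step is an arbitrary number picked to make the point felt):

```typescript
// Sketch: blank out every image, then let them trickle back in one at a time.
document.querySelectorAll<HTMLImageElement>("img").forEach((img, i) => {
  const realSrc = img.src;
  img.removeAttribute("src");                                // image goes blank for now
  setTimeout(() => { img.src = realSrc; }, 1500 * (i + 1));  // roughly serial reloading
});
```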
Offtopic: As a Google Fiber customer myself, his mention of not remembering the upload speed made me think of how we need bandwidth to get to the point where it doesn't matter. CPU speed used to be something you quoted. For the last few computers, I don't care anymore, because it's simply "enough". I fear the current telco/ISPs wouldn't agree with that idea...
> How about something simpler - Make images load slowly using JS.
Or how about just using a webfont? I already get a reminder of what the 'slow lane' was like when I hit a site that uses one - it does layout so I can see where the text will eventually go and then I get to wait 5 seconds for it to load and do whatever else it needs to do with its fonts.
How about preloading (or rendering) placeholder images with a short message of what is up and a timer. The timer runs down from 4 to 0 seconds and all the images get loaded.
Remember the days before PNG, before JPEG, when GIFs loaded several times over? That sure was fun, watching the image get slightly clearer and clearer and then just stop, leaving you wondering about that final level of detail.
Back then of course you'd be opening an image that big & detailed in a separate window, but now we've got some image-heavy websites with dozens of these things. Retina screens don't make things easier, either. Loading 12 pseudo-GIFs in serial would be the mind-killer. My imgur? Facebook?
Serious question: would it be evil for Google to push out an 'educational patch' to Chrome? Firefox probably wouldn't.
They didn't load several times over, they loaded progressively. It was all the same file, but as soon as one level of detail could be displayed, it would - a very sensible optimisation in dial-up days.
> How about something simpler - Make images load slowly using JS.
I do like this idea. However, as one of the minority of people who have disabled js by default, I'd hope this could be done in a way that doesn't break websites.
I don't know how to say it without being blunt, but I mean no offense, I'm just curious. I've heard of this mythical type of user (no-JS) but never heard their side of the coin before.
I dislike ad tracking, information culling, and I think that JavaScript underpins and makes possible many of the things I dislike about the modern web. We agree on that I'm sure. But so does a web browser. So does the operating system it runs on. Turning off JavaScript only protects you from the parts of the web you don't like by breaking all of the web so much that nothing runs as it's supposed to - but what's the point in browsing a shattered web?
I am a front-end dev, and I've hit the extent that CSS will allow me to style and lay out pages. I need per-element @media queries, and the ability for elements to peek inside and count their contents so I can make them fit into the space better. JavaScript lets me do that, but I'm afraid that if you disable JavaScript from today forward you're going to run into this a lot more.
It doesn't take a whole ton of JavaScript to supercharge a CSS layout, but if you block all JS you're going to be looking at a broken page.
I use text-mode browsers on the command line, and check the source of pages from the command line, so I understand there's a time and place for restricted browsing; it's just less useful to me than a full browser because of those restrictions.
Having said all that, would you mind explaining why you choose to break your experience of the web by disabling JavaScript, and what gains you perceive from the exercise? I'm genuinely curious but can't find any of you no-JSers in person anywhere.
Allow me to list a few reasons I run with JS disabled by default:
- Poorly written JavaScript that causes my system to stall for a time, to the point where I have to kill all my browser's processes manually; more common than you'd think.
- Websites that attempt to mine cryptocurrencies using my browser with JavaScript; without my knowledge or permission.
- Then there are XSS and CSRF vulnerabilities that can leverage JavaScript to hijack active sessions.
And I am probably forgetting a few more reasons..
I do enable JavaScript for specific trusted websites, and I make temporary exceptions every now and then if necessary. But, I usually try to keep the list of exceptions short.
In regards to needing JavaScript for styling / design purposes.. Are you sure you need it? It might take you a bit longer to tweak and restructure the HTML and CSS to accomplish your design without JS, but it's well worth it.
Mining bitcoins in javascript on someone else's computer. Surely it would be quicker to get the 386 computers out of the landfill and use them than to mine bitcoins in javascript remotely in some botnet of sorts.
I am intrigued. Can you send me a URL of somewhere that will mine bitcoins on my computer?
You should probably not embed such scripts into your website(s). If it's not illegal, it's certainly super shady, and an extremely inefficient method of mining bitcoins.
I only have some second-hand, anecdotal explanations. In my freelancing days (several years ago), I had clients who insisted that their websites work flawlessly without JavaScript -- some would even insist that there was no JavaScript at all. When I asked these clients why they wanted this, the answer would always be because they have disabled JavaScript in their browser, so why would they want any JavaScript on their website?
When I asked why they disabled JavaScript, it was rare to get any sort of coherent answer. Usually they would just mumble about viruses, or occasionally about privacy, but every single one of them cited "a friend" as the original source of whatever reason they had.
So my theory is that most people disable JavaScript "just because". Reminds me a bit of the five monkeys and a ladder experiment.
I generally browse the web to read things. I want a simple document reading environment with limited power over my computer and minimisation of who is informed of my activities particularly as they cross between different sites. Javascript is not needed and is in fact detrimental to these aims in the majority of cases.
--
I use NoScript so in many cases I enable scripts but in others I don't, it depends on the value of what I expect to see and to some extent on the trust in those the scripts are being loaded from.
Sometimes the page is broken although not fatally, most Washington Post links look broken but work fine if you scroll down to where the actual article is.
Some domains are whitelisted permanently, but more often I will temporarily allow them.
Reasons - Privacy and control
Some sites want me to load Javascript from dozens of different domains, many of which are advertisers and trackers. I don't want to be tracked and each script loaded could be reporting almost anything about the page I am viewing back to the associated domains.
Disabling Javascript disables most of the intrusive moving/audible adverts on the web and hopefully also much of the tracking and privacy invasion that goes along with them.
Once in a while Firefox has a bug, and I have to reset its data in one environment. Almost without exception, I forget to reinstall NoScript after that, and experience the "fully functional" web. That experience is so disastrous that I always put NoScript back, and make notes not to forget it again (which I obviously don't follow the next time).
The web with Javascript enabled is simply broken. Sites take ages to load, refuse to show you content, throw overlays all around the screen, or just stop working.
And then, there are security considerations. But as I said, I never remember them.
"[..]throw overlays all around the screen"
They've adapted. I've seen sites in the wild that do the opposite. Use CSS to hide all the content, and then use JS to enable proper viewing.
Those sites are usually also the ones that behave worst when you enable Javascript. As a general rule, the best option is to simply not bother with them, and spend your time on some site that cares about you.
All of the web is broken by default, because it is filled with black-hats trying to collect your personally-identifiable information and steal your passwords.
If your web page doesn't work without javascript, you are no longer publishing to the entire world. You are publishing to the subset of everybody who will trust a complete stranger to execute unknown code locally on their own computer--otherwise known as the gullible and the naive.
While it is true that most website authors will not abuse that trust, they may not be entirely responsible for all elements served to visitors, as there may be a compromised ad server, or the website itself might have been subverted without the author's knowledge.
I will often at least temporarily allow scripts to run from the domain I am currently visiting, but if a page is serving scripts from 30 different domains, I may spend some time researching those domains, or simply forget about the page and go elsewhere.
I want my computer to serve my desires, not those of strangers. And I go to the web for information, not fancy layouts.
Also, I sometimes browse over an SSH shell with links, a text-mode browser whose javascript support has been suspended indefinitely. Some browsers are simply incapable of rendering images or executing scripts. You shouldn't ever tie the core functionality of your website to a user interface, any more than you should make the business logic of software dependent on the GUI.
I have to pick just one? Windows, Android, Linux, iOS, Wii, Xbox 360, etcetera.
I don't consider Microsoft, Google, Debian, Apple, Nintendo et al to be complete strangers. I don't trust them unconditionally (hence all the rooted, modded, and jailbroken devices in my home), but I do trust that if I do discover malfeasance, that I have some well-established path to seek redress, and that they have the bank accounts, insurance policies, and reputations necessary to make me whole again.
I'm not nearly as trustless as RMS, but I am at least aware enough of the problem to be skeptical even of the software I have actually paid for, and downright paranoid towards everything else. Even the stuff I write myself could be subverted by a compromised compiler or OS. But like the two friends fleeing from the tiger, you don't have to run faster than the tiger to escape; you just have to run faster than your friend.
If someone is likely to be more damaged by breaching public trust and getting caught at it than you are likely to be damaged by trusting them when you should not, you're probably safe to trust them. But then again, even Sony can install a rootkit. You can trust, but remember to verify.
Would you run JS in a sandbox, like Chrome's? Would you run JS from a "trusted" source, like Google's web font loader? Would you run code from a "stranger" that had been vouched for by Google, like angular.js served from Google's mirror?
I turn off Javascript because it tends to be poorly written and doesn't provide any value to me. I'm primarily interested in the information part of the internet, so those limits of CSS layout just don't matter to me.
Of course there is a difference between web applications and information sharing. I don't mind Javascript for an application that needs it but I tend to prefer non-web apps anyway.
It seems odd to me how much work goes into making a web page do things that don't need to be done.
> why you choose to break your experience of the web
Because this "experience" is not designed with me (the user) in mind. Most of the times it is created to extract value from me, not to provide it for me. Aside from webapps, which use JavaScript to drive the program doing things useful for the users, JS serves two main purposes:
- so that advertisers can better target ads for me (their ads are mostly worthless, I don't want them anyway)
- so that the website author's sales team can "convert" me easier (I want to be "converted" by value of what they offer, not by cheap "experience" tricks)
JavaScript is mostly detrimental to the main goal I browse the web for: to learn, read stuff I care about, and participate in meaningful discussions.
Unfortunately, the Internet today is one big marketing event. The real value is being drowned out by all the startups and companies trying to sell me some half-baked non-solution to my non-problems.
It looks like a bunch of the other replies to your query have touched on most of the points I would make, but I thought it might still be worth my response.
The majority of the content I consume online doesn't _need_ js. In the sense that it's easily possible to design websites to display text and writings without needing js. So from a practical standpoint, I think js is mostly unnecessary. There are obviously services which wouldn't be possible without js (e.g., openstreetmaps), and I do un-block js for those sites.
You are of course correct that js is not the only way users can be tracked; however, it is one of the major components in tracking.
Looking at the EFF's panopticlick [0], with js blocked:
> Within our dataset of several million visitors, only one in 57,814 browsers have the same fingerprint as yours. Currently, we estimate that your browser has a fingerprint that conveys 15.82 bits of identifying information.
With js enabled:
> Your browser fingerprint appears to be unique among the 4,104,774 tested so far. Currently, we estimate that your browser has a fingerprint that conveys at least 21.97 bits of identifying information.
So even something as simple as disabling javascript turns my online fingerprint from being unique to one shared by other visitors (about one in 57,814 browsers in their dataset). So while disabling js doesn't make one immune to tracking, it does make it more difficult.
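(For anyone wondering where the "bits" figure comes from: it's just the base-2 log of how rare the fingerprint is, which you can check against the numbers quoted above.)

```typescript
// Bits of identifying information = log2(1 / fraction of browsers with this fingerprint).
const withJsBlocked = Math.log2(57_814);    // one in 57,814 browsers  -> ~15.82 bits
const withJsEnabled = Math.log2(4_104_774); // unique among 4,104,774  -> at least ~21.97 bits
console.log(withJsBlocked.toFixed(2), withJsEnabled.toFixed(2));
```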
In summary, for most of my use of the web, js is unnecessary and does more harm (tracking) than good (it's not needed for consuming text). I enable it on a case-by-case basis for some websites, if it's needed. But unless the js is truly critical to the presentation of the information, I won't enable it and I'll likely leave the site. Because, yes, much of the web is broken without js; but that's frequently due to (in my opinion) poor site design which doesn't fail gracefully in the absence of (usually unnecessary) js.
The reason I do it is because of shortfalls within CSS. I USE JavaScript for styling, really simple stuff, but my sites WILL be broken if you can't run 4-10 lines of JavaScript. I don't think that's outrageous to ask for from a resource standpoint, and to rule out JavaScript because some bad things can be written in it to me sounds like ignoring English because some bad or hurtful things can be spoken in it.
I'm not sure what to do about the people who turn JavaScript off; the reason I reach for it is when HTML and CSS can't do something.
I turn off JS on a blacklist basis, but I've ended up disabling it for pretty much most publishers. For me it was 100% about speed: I have a relatively high-end laptop, but with the amount of tabs I keep open and the way Chrome uses CPU in the main browser process, I would literally be waiting for 5-6 seconds before I could scroll down and start reading an article (Washington Post, New York Times, New Yorker, wired; pretty much any publication I've visited has this issue, and a handful of other kinds of sites too). Turning off JS for any site that misbehaves is an immense difference in speed, and the layout is usually not too broken (the exception being WP, where you have to scroll down past a page-length of whitespace before reaching the text).
Nothing too odd, I think: I get to feel that little bit more insulated against the wacky stuff that malicious JS can do, and if I break a site I want to see, I can make an exception for it.
It comes down to this equation: security & privacy versus fancy look and feel. The non-JS crowd picks the former.
There are many blogs that don't work at all without JS. I don't bother reading them. Those blogs might be very informative; but my security is worth more than that.
Can anyone think of any advantages to a non-neutral internet?
I can think of a few - Netflix and Skype would work better.
Most likely, we'd be able to pay for priority traffic, just like we pay for a large AWS instance. Non-priority traffic might be cheaper than current bandwidth.
It would be crazy to suggest all sites be forced to use the same size AWS instance...
Some types of traffic are bandwidth sensitive, like video. Others are cost sensitive, like Linux DVD images.
If you think that an end to neutrality will 'ruin the internet', don't you expect consumers to choose ISPs and services that don't do it?
> Can anyone think of any advantages to a non-neutral internet?
Yes. In the early days there was a competing networking technology called ATM. It provided quality of service (QoS) aspects in the protocol. So, you could prioritize packets, e.g. protocols affected by latency could be prioritized (e.g. VOIP, gaming), while those that weren't could be given a low priority (FTP, email).
The beauty of ATM was that on a relatively low bandwidth connection you could utilize all of your bandwidth and services like VOIP would still work beautifully. TCP/IP still struggles with this today.
So, as a consumer, I'd be happy to pay for QoS so that my VOIP packets had an expressway on my 2Mbps connection. However, that ship (for many) has sailed. With my largely under utilized 50Mbps connection there is no reason to pay for QoS because we've largely solved latency by throwing bandwidth at the problem.
However, with the approach that the FCC/comcast/et al are taking, I see no benefit.
But QoS at the protocol level is different than non-neutral. Non-neutral is a way for someone, not the user or application developer, to reprioritize content. You're not using Comcast VOIP? Fine, we'll slow down your Facetime chat. Don't want to use Mediacom Streaming Movies? Fine, we'll slow down Netflix.
That's what non-neutral means. QoS is different, that's applications behaving themselves. BitTorrent, for instance, led to the development of uTP (micro transport protocol). One of its nicest features for torrent users is that it will slow itself down in response to congestion and play nice with other network-using processes on the client side.
Putting this into the underlying connection like ATM did just means that you at least have to pick a default QoS and hopefully applications/systems pick a sensible assignment for the traffic. Rather than the default being to treat every connection as equal.
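(For the curious, the uTP/LEDBAT idea is roughly: watch the one-way queuing delay and back off before any packet is ever dropped. A toy version of the rate update, with invented constants; the real protocol targets around 100 ms of queuing delay and adjusts a congestion window rather than a rate:)

```typescript
// Toy LEDBAT-style controller in the spirit of uTP (constants are invented).
const TARGET_DELAY_MS = 100;
let rateKbps = 500;

function onDelaySample(queuingDelayMs: number): void {
  // Positive when the queue is short (speed up a little), negative when it
  // grows (back off), long before the router would start dropping packets.
  const offTarget = (TARGET_DELAY_MS - queuingDelayMs) / TARGET_DELAY_MS;
  rateKbps = Math.max(50, rateKbps + 25 * offTarget);
}

[10, 20, 80, 150, 300, 120, 60].forEach(onDelaySample);
console.log(`settled around ${rateKbps.toFixed(0)} kbit/s`);
```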
I understand the difference; my point was simply acknowledging that not all traffic is equal, so I have no problem paying extra to ensure that the traffic that is important to me is given priority. I do that today by paying for more bandwidth than I need. Alternatively, if bandwidth were a limited resource, then I'd consider paying to shape my traffic (either by paying for a better router to apply QoS rules in my own network, or paying my ISP to take care of that on my behalf).
Comcast, however, wants to flip this around, so that even if I have bandwidth to spare they seem to be purposefully slowing down traffic (or under-provisioning their own bandwidth) to force the YouTubes and Netflixes of the world to pay more.
TL;DR: all data is not equal; traffic shaping is 'ok' in theory; Comcast is evil, so please god don't let them artificially create slow lanes to force those willing to pay into the fast lanes.
The problem is that most places don't have choices. I have only one option (ATT) for ISP and when I move in a couple months, I will have 2 choices (Comcast and ATT).
You are propping up an artificial technical straw man. There is no technical capacity limit at play here. Networking technology has leapfrogged the bandwidth available to consumers many times over, and to boot, it is pretty much infinitely scalable.
So when there's no problem transferring all of Netflix, Skype and BitTorrent simultaneously, why slow any of it? Sure, at times hardware fails, fiber is damaged, and ISPs can feel free to prioritize traffic at that time; we certainly have the technology to do that.
But what is certainly not OK is slowing traffic because you are not willing to invest in your connectivity, not even enough to actually deliver all of the meagre bandwidth (100 Mbit is 1995-vintage technology; where in the US can you even get that?) you have sold to consumers.
> There is no technical capacity limit at play here.
This. The US market is ripe for disruption or regulation. Sure, if you live remotely in Arizona, it may be hard to get a fat pipe, but it seems that even most cities have outrageous prices.
I currently live in Germany. We have 150 MBit downstream, 5 MBit upstream, plus phone and television for 40 Euro per month. When we lived in The Netherlands we were on 130 MBit downstream internet. Even in the stone age (2004) we had 20 MBit downstream DSL for 20 Euro per month. Since downloading music and movies was legal in the Netherlands until recently, many families were saturating their connections. Netflix is not that demanding in comparison.
The current situation in the EU shows that it is possible to get high-speed low-cost internet with net neutrality.
Strikes me folks here are really complaining about a broken cable market, made up of local monopolies.
I googled the American cable market a bit, and some of what was described - the cable majors carving up the country so as not to compete, or aggressively blocking new competitors - that stuff sounds like anti-competitive practice.
Writing from a country with a functioning cable market, when I hear 'ISP will charge for X', I think 'Well I will change ISP then'. If you can't do that, I think net neutrality is the least of your problems.
There's no technical capacity limit, that's true -- but there are operational and financial constraints. Replacing a half million routers across the country to remove the previous technical constraints isn't an easy or fast thing to do: the hardware will be obsolete before the implementation is finished.
Think about everything that goes into this: they can't just pick a piece of hardware off of Amazon; they have to review proposals from companies to manufacture and support the hardware over its 10-year lifespan. They then have to train their employees on it, develop migration plans, rollback plans, schedule maintenance windows, etc. These migration plans often involve significant changes to CPE configurations, which also need to be planned, tested, implemented and trained for.
It's a huge infrastructure. It costs a lot of money to make any significant change. And you seem to be confusing Ethernet with a last-mile technology; it's more of a last hundred feet technology. A lot of the effort over the last 10 years has been spent moving the telecom-owned equipment closer to consumer homes so that faster speeds can be obtained over shorter cable runs. As the length of a cable run increases, so does interference and you have to dial down speeds as a result (this is even true of fiber, albeit to a lesser extent).
I'm painfully aware of just what lengths telcos will go to to press the last bit of (downstream) bandwidth out of the taxpayer-funded land line infrastructure they have been gifted. That's what I'm saying: they have no interest in investing. Who will wake them up and tell them that no, you can't bridge another 20 years of progress on fucking bell wire? Do we, like, send them postcards explaining the Shannon-Hartley theorem?
But the terrible state of last-mile technology in the field isn't even what this is about. Netflix servers and the intermediaries they peer with are not in a shed in Nebraska with data coming in over microwave. Telcos don't invest in the last mile where they would have to create actual infrastructure, they don't invest where high technology rules in the heart of data exchanges all over the world.
(Of course ethernet isn't the relevant benchmark here, but at that time it wasn't just about what you could do over a hundred feet of copper, but also at what speed systems could actually communicate.)
Traffic priority should be set by the users and the application developers, not by the ISP. Streaming video/audio should have a way to signal that they're high-priority traffic (because latency is noticeable and quality of service absolutely essential). BitTorrent should be able to signal that it's low priority (unless being used for streaming, like Popcorn Time), since it's often used in the background as a way to get large files or file sets. HTTP traffic should be able to sit somewhere between the two, with some services announcing themselves as high priority (services that stream media over HTTP, or games, or whatever) and the rest announcing themselves as "Guarantee me decent latency, I need to be there in a few seconds, but under a second is overkill".
> If you think that an end to neutrality will 'ruin the internet', don't you expect consumers to choose ISPs and services that don't do it?
I have 2 wired ISPs to choose from and several wireless (well, primarily via tethering with a cellphone) ISPs. My apartment has shitty wireless reception so Verizon, Sprint and AT&T are out. That leaves Cox Cable (who repeatedly disconnected me when they meant to disconnect a neighbor, costing me two days of leave to fix their fuckup) and Windstream (a DSL provider). So if both of my providers decide to play the non-neutral game, I, the consumer, am screwed. There are no options for me in your scenario. That's the reality of the ISP situation in the majority of the US.
They'll have a monopoly on "working better", so take "better" with a huge grain of salt.
> Most likely, we'd be able to pay for priority traffic
You (the customer of the ISP) can do that today. The difference is that they want to sell priority to the customer, and then only deliver it if the data provider pays for it too.
You forget that the customer would be forced to pay per site for what sites they could access. The ISPs (generally a future Comcast monopoly) want to charge both ends to connect. You want Facebook at all - pay $5 per month. Facebook, pay us $1 per customer. You want some random blog? Sorry, they didn't pay us. Or maybe they will connect you at such a low rate that the site is unusable. The end result is that only huge wealthy companies can be accessed. Everyone else will be so backwater they may as well use smoke signals. The effect of monopoly (at least in the US) will make it impossible for there to be an alternative. I have two choices, AT&T and TWC. Both will do this sort of thing if they can. Then what do I do?
I wonder if this will have an effect on web design? Less JS, fewer images, no web fonts. I can see a website going back to plain text. I realize that even plain text can be throttled to the point of uselessness, but would it help at all?
How about an augmented form of plain text, where a SMALL MINORITY of special undisplayed characters invoke formatting options which can be utilized by consoles not much more powerful than dumb terminals.
We could call it Hypertext . . .
Could even give rise to a "markup language", we could call it HTML . . .
Once you start going down that road it could be a slippery slope . . .
> Can anyone think of any advantages to a non-neutral internet?
Yes! People who are always crowing about how the free market will save us all and how regulation is the enemy will start bitching about their slow Netflix speeds. Some of them may even realize what a terrible idea an unregulated free market is.
> don't you expect consumers to choose ISPs and services that don't do it?
Most ISPs have a monopoly in their local area. Perhaps a state-run option would be a good alternative but it would probably eventually get completely weighed down with bureaucracy and red tape.
In other words, the free market can't save us and neither can our government. A healthy mix of the two seems to be a working solution.
Keep in mind though, the ISPs got those monopolies from the governments in those areas. Not exactly free-market.
I don't have a problem with regulating them though - As far as I'm concerned, if you've been exploiting a state-granted monopoly for the last 40 years you don't get to demand a free market when someone might pass regulations less favorable to you.
However, the real fix here is going to be removing the monopolies and getting a competitive marketplace for bandwidth.
QoS is perfectly fine if it's set by the user, not the ISP. This is a red herring anyway, because the Netflix problem has nothing whatsoever to do with prioritization.
> Can anyone think of any advantages to a non-neutral internet?
It would give ISPs an incentive to spend the large sums of money required to upgrade their networks/infrastructure to offer significantly faster Internet speeds. I very much doubt that any amount of legislation/regulation is going to force ISPs into it otherwise.
How is it going to give them such an incentive? If anything, I think it will give them more incentive NOT to do so.
Why would they upgrade the infrastructure if they can just raise prices until the demand for higher speeds either dies down or is enough at that higher price point to be cost-effective?
Wouldn't prioritizing packets at routers create slower queues for low priority packets?
Murphy's Law tells me there's a good chance these low priority queues will slow down exponentially at random times. Routing nightmare if you consider DDoS and what not.
Latency isn't really the problem. Latency has three causes in practice: The speed of light, router packet buffers, and (generally congestion-related) packet loss. The first isn't usually a problem and there is nothing to be done about it anyway. The second and third aren't actually latency issues at all, they're latency as a side effect of a bandwidth shortfall. The only way to actually fix that is to add more bandwidth. The best you can do otherwise is to choose who gets screwed over by the lack of bandwidth, which is exactly the thing you don't want the likes of Comcast doing.
Latency is the issue in routers when they categorize traffic, that was the idea. They don't simply prioritize packets; they age them and sort them to try to ensure latency constraints. So it's more complicated than priority.
> Latency is the issue in routers when they categorize traffic, that was the idea.
That's what I'm contesting.
Suppose you have a router which is receiving data to be sent over a 1 Gbps link. Over a one-hour period there is an average of 600 Mbps of streaming video, 400 Mbps of HTTP traffic and 300 Mbps of FTP/BitTorrent/etc. traffic. There is more data than there is link to put it in; you are screwed. The streaming video is going to stutter or degrade to lower quality, web pages will be slow to load, people waiting for downloads to finish will have to wait longer, because there is not enough bandwidth. The latency will also be poor, if router buffers are set too large and create a large queue in front of new packets (bufferbloat), but that's the least of your worries in that situation. Fixing the latency wouldn't clear things up because you still have users trying to play video with a bitrate of 6 Mbps through a connection with a 4 Mbps throughput. And categorizing the traffic only changes who gets screwed -- if you put downloads at the bottom of the heap then you might make the streaming video and HTTP customers happier, but the customer who paid for a 6 Mbps connection and would have at least been downloading at 4 Mbps is now waiting for a download that's getting only 0.2 Mbps. That's not a solution, it's triage.
Now suppose you have the same amount of traffic on average but the link is upgraded to 2 Gbps. Now there is no packet loss. You might have short bursts where the traffic level exceeds the capacity of the link, but they get smoothed out by the router's buffers, which never get full, so the queue length is never more than a few milliseconds. Solving the bandwidth problem solves the latency problem.
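The same point as a toy model (numbers mirror the made-up example above): when offered load exceeds link capacity, the backlog, and with it the queuing delay and eventually packet loss, grows without bound; give the link headroom and it never builds up.

```typescript
// Each second, `offered` Mbit arrives and at most `capacity` Mbit leaves;
// whatever doesn't fit piles up in the buffer (or is dropped once it's full).
function backlogAfter(offeredMbps: number, capacityMbps: number, seconds: number): number {
  let backlogMbit = 0;
  for (let t = 0; t < seconds; t++) {
    backlogMbit = Math.max(0, backlogMbit + offeredMbps - capacityMbps);
  }
  return backlogMbit;
}

console.log(backlogAfter(1300, 1000, 10)); // 3000 Mbit queued after 10 s, and still growing
console.log(backlogAfter(1300, 2000, 10)); // 0 -- the queue never forms, so neither does the delay
```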
The point is, if you're trying to fix the latency that occurs as a result of router buffers getting full or dropping packets in the core of the internet, you're rearranging the deck chairs on the Titanic. The underlying problem is that router buffers are getting full or dropping packets in the core of the internet.
This would be great and informative and effective, except that the most visible sites* have no incentive to play along with this little song and dance, as they are the ones proliferating anti-net-neutrality for their own private gain.
So it will basically just look to people like I'm doing a shitty technical job serving my site; most people will think I'm stupid, they won't learn a damn thing about net neutrality or why it's important, and they'll stop visiting my site in the process. =/
* that many people use exclusively with their internet-time, like Youtube, facebook, Netflix, etc
Are you arguing that no one saw and thus there was no impact when Wikipedia ran the anti-SOPA blackout?
Further your examples don't make sense. Youtube is owned by Google, which doesn't have a 100% support track record for net-neutrality, but is mostly supportive. Netflix is on the record as completely for net-neutrality -- they are one of the major services cited by ISPs as causing the need for an internet fast lane, which directly impacts Netflix and Netflix consumers (negatively, if that wasn't clear). I don't know about facebook off-hand, but frankly, who cares about facebook's leadership on the web? It would be great for them to join in, the exposure would be great, but I think more people distrust facebook and their support is the internet equivalent of being on the same side of an argument as the KKK.
How about instead of just slowing your sites down arbitrarily you do exactly what wikipedia did -- explain to the user what is going on, and at least force them to click through to the actual, full speed version of your site -- even better, let them see what your site would be like speed-capped and what your site is like now.
> Are you arguing that no one saw and thus there was no impact when Wikipedia ran the anti-SOPA blackout?
Wikipedia was a notable exception, sure, but OP's got a point: does this have a hope on the Alexa topsites?
Comcast's monopoly seems to be the problem. If a Comcast user in NY can't watch netflix past 5PM, wouldn't they, I don't know, look for a better ISP? After all, their ISP just ain't getting it done. The only missing piece is the market competitor that is supposed to balance out Comcast's dickheadedness. Why, after decades of 'free market', do we have customers that are stuck? I think this is worth mentioning to users.
> explain to the user what is going on, and at least force them to click through to the actual, full speed version of your site
That sounds like the plan. I just can't think who's all on board.
They want to maximize profits. So keeping costs down is important, but it's even more important to keep competitors out of the game.
Currently they have no serious competitors, so they see the Comcast-tax as a pure added cost. But if this practice goes big and becomes an added cost to the market, barriers to enter it become higher, which keeps their position entrenched.
Of course this would require a lot of formalisation (and quantification: estimating numbers, times, and potential competition and market evolution) but still, most of the same questions are going through Netflix executives' heads.
>most of the same questions are going through Netflix executives' heads.
Conjecture. Contrary to populist belief, not every corporation is a soulless, amoral entity. Netflix has yet to demonstrate in even the smallest way that they are for anything but complete neutrality.
Indeed, it's hugely to their benefit to be for net neutrality these days. A non-neutral internet would drive up costs for them and/or their customers for the data connection. If it drives up their costs it eats into their profit margin and they either soak it or raise prices. If it raises customer costs as a general increase on broadband cost, then customers have less money to spend on Netflix and other services. If it drives up customer costs because they now have to pay a Netflix prioritization fee, then they might not choose Netflix at all when Mediacom/TWC/Cox/whoever is offering a cheaper streaming media service.
I am working with Fight for the Future on a JavaScript code snippet (called 'Slow Lane') to simulate a slow loading process + rip the FCC a new one. This project needs to launch next week to make the maximum impact and we need help to make it superb! If anyone with web skills is interested in helping out, please email team@fightforthefuture.org or me directly at jeff@rubbingalcoholic.com ...
I don't know that this is the solution (or even that there's a problem). I know my opinion isn't very popular on HN on this issue; but I continue to share it because I feel it's important that people understand the view from the other side of the fence. I expect to get downvoted because people disagree with me, but then magical internet points never really mattered much to me.
I think all this talk of the "slow lane" is a bit tinfoil-hat. Companies like Comcast have no interest in slowing down web site traffic; in fact they do a lot of QoS to make web browsing faster and more responsive. This type of traffic (DNS, HTTP requests, online gaming, etc.) tends to get put in a high-priority QoS class: the data transmitted is often small and it greatly improves the user's experience. ISPs have a huge incentive to make this type of traffic as responsive as possible; and given the low bandwidth requirements, this should definitely be possible. It makes their service "feel" faster to the customer and it's the right thing to do for the customer.
Video streaming services are another story. They don't need to be responsive because they pre-cache a lot of data; in fact the right thing to do from a technical perspective is QoS them into the basement. Video can handle this; it's high-bandwidth but latency-tolerant. The thing is, streaming video accounts for about 80% of peak Internet traffic. A small percentage of users (~30%) are starting to overload the ISP's last-mile networks with video traffic.
The types of high-bandwidth scenarios that the ISPs will be pushing the "fast lane" on are going to be almost exclusively video streaming services. Video streamers have had to pay CDNs for years anyway if they wanted their videos to stream quickly. The idea is that because these services have such a disproportionate effect on bandwidth usage, they need to contribute economically to avoid a tragedy of the commons [1] situation. Your average website or app that's not pulling 1.5+ mbit/s over an extended period of time is likely going to be fast regardless because it's in the ISPs best interests to make it that way.
>Companies like Comcast have no interest in slowing down web site traffic
Yeah? How about money?
As a comms engineer, I too think about the technical implications of this.
But you have to remember that the people who push for the abolition of net neutrality are mainly the finance guys, i.e. the ones responsible for bringing as much money as possible to the company. And when you put yourself in their shoes, all of a sudden you get dreams of Comcast turning the Internet into the same kind of market it has in cable television. And you very quickly forget about QoS, bandwidth, latency and the job of "delivering the bits" and just think in terms of profits. You wouldn't know what most of those terms mean anyway...
Well, in short, any web site with any money to pay the ISPs is likely already running on a CDN that's been paying the ISPs for a decade. The portion of that market that's not yet monetized is likely very small. Besides, slowing down general-purpose web traffic makes it look like your service just totally sucks ass.
But more generally, slow-loading websites make the ISP's service look shitty. There's almost no cost for them to just QoS the low-bandwidth stuff up and it makes people think their service is better.
> But more generally, slow-loading websites make the ISP's service look shitty
Exactly, so people will just leave that ISP and sign up with one that doesn't throttle traffic. Except most people have only one ISP in their area. Yay free market!
Even in a truly free market, most areas would only have one ISP. See how Verizon and AT&T have dramatically scaled back or stopped their FiOS/U-Verse rollout plans. If a company with easements and existing infrastructure (and thus lower costs) has decided it can't justify the investment, how would anyone else?
I was saying "Yay free market" as in "a truly free market is a bad idea as seen by local ISP monopolies," not as in "our free market has failed us and needs to be freer."
In a truly free market, there would be one company that owns everything, including infrastructure. Not a world I'd like to live in.
Exactly! Netflix is already paying for the upload bandwidth they're using, and Comcast customers are already paying for their download bandwidth. That should be the end of it. The problem is, because of services like Netflix, more people are actually using the bandwidth they're paying for. Comcast takes issue with this because they can no longer massively oversell their infrastructure.
>Companies like Comcast have no interest in slowing down web site traffic
You can't think of a few sites and services that TV cable providers would like to slow down? The issue isn't that the general web will get slower, but rather that cable companies will become the arbiters of which services survive and prosper.
The other part that virtually no one talks about is how this would be regulated. Are we going to have "internet inspectors" from the FCC doing periodic surprise checks of every ISP? Will ISPs be required to have a neutrality license with full historical audit logs?
This reminds me of when the TSA took over airport screening after 9/11, over the belief that only a government agency could screen passengers securely. The net effect was longer lines, more expensive travel, fewer rights for flyers (i.e. either you get a body scan or a pat-down, and no more liquids on flights), and in general less happy travelers.
Same way the NSA would be regulated against spying on the general population: remove money from the equation.
Think about it, the only reason Comcast is doing all this is so they can charge premium fees on certain services. If you don't allow them to do so (which is very easy to enforce, as customers can report such pricing policies), they have no incentive to be against net neutrality.
This is why the current status quo has worked for so long: ISPs have no way to legally make profits out of "premium" traffic, so they (generally) don't apply outrageous QoS measures. Money is the only incentive, and removing that incentive solves the problem without the need for active policing.
"This is why the current status quo has worked for so long: ISPs have no way to legally make profits out of "premium" traffic, so they (generally) don't apply outrageous QoS measures. Money is the only incentive, and removing that incentive solves the problem without the need for active policing."
And because they weren't allowed to charge extra for "premium" traffic, they essentially switched over to the over-subscription model in order to make profit. The tighter you squeeze, the stronger it oozes out the sides. And never in the place or way you wish it would!!
Packets get dropped when links are saturated as they often are during peak hours. I know a network admin would think "well the links should never be saturated" but the economics of that just don't work in the consumer ISP space. You aim for a target level of service that means certain portions of the network will be saturated, and use QoS to ensure things like DNS and HTTP remain responsive.
Regardless, the point is that QoS exists to keep non-video traffic from getting trampled out by video traffic. Video traffic is such a disproportionately large amount of total Internet traffic that virtually the only services that would be significantly impacted by being in the "slow lane" are video services. The links wouldn't even be saturated in the first place without video traffic.
EDIT: Also wanted to add that congestion is almost guaranteed with adaptive bit rate streaming. Netflix will use as much bandwidth as is available up to like 9 mbit/s.
Because they didn't have 50% of their users trying to view 6 mbit/s video streams.
EDIT: Just wanted to clarify that 50% is a number I pulled out of my ass. The general point is that the percentage of users who stream video online on a regular basis is increasing more quickly than the economics/logistics allow the ISPs to perform network upgrades.
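To make the back-of-envelope concrete (every number below is illustrative, in the same spirit as that 50% guess), here's roughly how a shared last-mile segment gets eaten by video:

    // Back-of-envelope oversubscription math; every number here is hypothetical.
    const subscribers = 500;        // households sharing one last-mile segment
    const streamingShare = 0.5;     // fraction streaming at peak (the guess above)
    const streamMbps = 6;           // one HD adaptive-bitrate stream
    const backhaulMbps = 2500;      // shared uplink capacity for that segment

    const videoDemandMbps = subscribers * streamingShare * streamMbps; // 1500
    console.log(videoDemandMbps + ' Mbit/s of video on ' + backhaulMbps + ' Mbit/s of backhaul');
    console.log('Video alone uses ' + Math.round(100 * videoDemandMbps / backhaulMbps) + '% of the link');

Bump the streaming share or the bitrate a little and the segment saturates, which is exactly when the QoS choices discussed above start deciding who gets trampled.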
> The general point is that the percentage of users who stream video online on a regular basis is increasing more quickly than the economics/logistics allow the ISPs to perform network upgrades.
This is another citation needed point. Bandwidth consumption has always increased over time, as has the performance per dollar of routing equipment.
Well, the goal of a properly designed transport protocol is to transfer X bytes in 0 seconds. TCP doesn't quite achieve that, but that is its goal. This pretty much dictates that for any reasonably sized download, somewhere on the path is congested. If that congestion isn't at the server (it usually isn't), and isn't in the backbone (it usually isn't), then it's either your consumer ISP or your computer (and it's usually not your computer). ISPs tend to oversubscribe backhaul to make the economics work, so if you're not getting your full link speed, then it's likely the backhaul that's congested, and everyone else is seeing congestion too.
There are exceptions of course. Older OSes have ridiculously small TCP receive windows, and older TCP congestion control algorithms have trouble filling the pipe. But these shouldn't really be the main problem these days. It doesn't apply so much for streaming either, because modern streaming protocols such as MPEG-DASH will select a lower bitrate stream if they sense congestion.
I think simulated slowness is probably better than actual slowness. Something like a javascript file you could include that would hide all the content on the page and show each element at the same speed you'd see with a throttled network. That way you could still display a message about why it's happening, etc. and it's much easier for the average developer to implement.
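Something like the following is all it would take; a minimal sketch, where the simulated link speed and the message wording are just placeholders:

    // slow-lane-demo.js: hide the page, then reveal it after roughly the time it
    // would take to load over a throttled link. All numbers are illustrative.
    (function () {
      var SIMULATED_KBPS = 500; // pretend 0.5 Mbit/s "slow lane"
      var pageBytes = document.documentElement.innerHTML.length; // crude proxy for page weight
      var seconds = Math.max((pageBytes * 8) / (SIMULATED_KBPS * 1000), 3);

      var overlay = document.createElement('div');
      overlay.textContent = 'Simulating an internet slow lane: this page is "loading" at ' +
        SIMULATED_KBPS + ' kbit/s. Here is why this matters...';
      overlay.style.cssText = 'position:fixed;top:0;left:0;right:0;bottom:0;' +
        'background:#fff;z-index:99999;padding:2em;font:16px sans-serif;';
      document.body.appendChild(overlay);

      setTimeout(function () { overlay.remove(); }, seconds * 1000);
    })();

Drop it in at the end of the body (or gate it on the protest date) and a participating site needs nothing more than one script tag.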
Disclosure: I'm a Feld fanboy. I think it's a brilliant idea. However, to actually coordinate it would take some doing. Blackouts are relatively easy technically. Slowdowns, not so much, given that most companies that give a crap work diligently to improve the user experience and speed things up.
The financial impact could potentially be huge (as I'm sure the blackout was as well). But what's even tougher about this idea is getting companies like Google and Netflix in on the "simulation". They're most likely not going to be on the slow lane. Now let's say we get a whole bunch of startups who would get penalized by the slow lane to do this for a day. Would it make as much of a difference to the masses? The initial effects will be subtle, but the long term impact on economic activity and startups in general could be devastating.
That being said... if this movement is to take hold we'll need some web server plugins and some JS magic to help it happen at a massive scale. Maybe some of my fellow geeks at the bigCo's can convince them to join in for a day too, for the sake of all of us.
Seems to me that this protest would have a far higher impact than the blackout - or at least a far higher annoyance factor.
The services it would impact most are high-bandwidth such as content streaming, but of course those may be reluctant to participate as it could realistically ruin their goodwill and quickly decimate their user base.
I really like this idea except for one thing: it would show users a comparison of slow (what they get now) and really slow. But not what they would get if they lived in, say, Seoul.
The problem is, most people don't realize that, all things considered, their internet really isn't as fast as it could be. They think it is because they have no basis for comparison. And the ISP's bank on this... literally.
That's what burns me. We already pay ISPs to deliver the data that we request. They should take our money and improve their infrastructure. Instead, they want to get paid again, by e.g. Netflix, and the result would still be basically the same shitty third-world service we get in the US today.
Many of us already get a Slow Lane Demo every day during "prime time" evening hours.
I'm thinking this is a false advertising issue. They say they provide X speed and throttle it down. We need to get the attorneys general in on this issue. They should only be able to advertise the slowest speed they throttle down to.
"Algorithmically, all sites could slow themselves down dramatically, demonstrating what performance might look like over a 1/1 pipe. Or even a 0.5/0.5 pipe."
If the former, it would make precisely no difference to me. Openreach (UK) provides ADSL over copper via an older phone exchange in my immediate area. This is one mile from the centre of a major city. The local authority actually took Openreach to court and lost.
To put this into perspective, the government is about to spend around £60 billion over 20 years to provide a high speed train link to London so the journey time drops from 1h30 to 45 min...
I can't wait till I have to buy packages based on the type of content I want to consume: $10/month for email, $20 for web, $30 for music, $60 for video. Bonus package: add $5 for gaming, $20 for large file downloads.
Late to the discussion about the general issue; could someone please help me understand what's the difference between the "Fast Lane"/"Slow Lane" and good old QoS?
One quick, effective and probably not entirely illegal way to raise the issue would be simply putting a sign on top of any Comcast user's pageview saying something like, "Since your internet provider has forced us to pay X amount of money per GB to provide our service to your specific account, and we currently haven't figured out a cost scheme to account for that, your movie viewing experience is being adjusted to fit our current costs. (hyperlink)Tell Comcast how you feel about this."
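A hedged sketch of how that might be wired up, assuming the site already exposes some endpoint that maps the visitor's IP to an ASN (the endpoint name below is made up; AS7922 is Comcast's main network):

    // Show the "your ISP is the problem" banner only to visitors on a given network.
    // '/api/visitor-asn' is a hypothetical endpoint you'd have to provide yourself.
    async function maybeShowIspBanner() {
      const res = await fetch('/api/visitor-asn');
      const { asn } = await res.json();
      if (asn !== 7922) return; // AS7922 = Comcast Cable

      const banner = document.createElement('div');
      banner.innerHTML = 'Your internet provider wants us to pay extra to reach you, so your ' +
        'viewing experience has been adjusted to fit our costs. ' +
        '<a href="/tell-comcast">Tell Comcast how you feel about this.</a>';
      banner.style.cssText = 'position:sticky;top:0;background:#fee;padding:1em;z-index:9999;';
      document.body.prepend(banner);
    }
    maybeShowIspBanner();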
A class action lawsuit is overdue against these internet providers for overselling. I feel like I'm on a 128k ISDN line most days, it's literally that slow. When you call in, they say, well, "Up to 3 Mbps" doesn't mean you will get it. ISPs are charging us by possible speeds; they need to deliver or discount when they can't. If you promise me 10-15 widgets an hour for $50 and all you can consistently deliver for a month is 2-3 widgets, you should not demand $50.
And, really, users who have bought based on price per Mb download speed without any other metric are partly to blame for this. If users had been honestly buying based on bandwidth used we would not have ISPs offering "unlimited (until you hit the limits)" internet connectivity. (This is not to excuse the ISPs for their sleazy misleading advertising).
Give people a price per GB and then tell them how many GB they download each month. Price that GB sensibly and route traffic fairly.
I think a better demo would be a game where you're playing the ISP trying to maximize profits from captive neighborhoods of customers. There could be activities such as offering toll-gate speed-up packages for popular websites, placing your former lobbyist as a regulatory officer, and buying up more ISPs to get more captive neighborhoods.
One main problem with any attempt by a site to slow down its service is that all similar/competitor sites must slow down equally. Otherwise we get the game of one site provider not slowing down (or slowing down as much) because of the incentive: if they are faster, they generate more visits than competitors.
Two can play this game. Netflix could serve B&W versions of movies to the clients of non-neutral providers. The official line could be something along the lines of "this movie has been optimized for low-bandwidth connections by removing color and stereo sound; contact your congressmen."
Best idea I've heard so far is to intentionally slow traffic only for known government IP blocks. If we can convince Google, Wikipedia, and other big sites to do this, Washington would definitely notice. I'm thinking of writing a script that site owners could use to do just this.
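For a Node/Express site, that script could be little more than a middleware that sleeps when the client IP falls in one of the targeted blocks. The CIDRs below are documentation ranges standing in for a real (and separately maintained) list:

    // Hypothetical Express middleware: delay responses for requests from listed IP blocks.
    // The CIDRs are placeholder documentation ranges, not actual government networks.
    const express = require('express');
    const app = express();

    const SLOW_BLOCKS = ['192.0.2.0/24', '198.51.100.0/24'];
    const DELAY_MS = 15000; // the "slow lane" experience

    function ipToInt(ip) {
      return ip.split('.').reduce((acc, octet) => ((acc << 8) + Number(octet)) >>> 0, 0);
    }

    function inBlock(ip, cidr) {
      const [net, bits] = cidr.split('/');
      const mask = bits === '0' ? 0 : (~0 << (32 - Number(bits))) >>> 0;
      return (ipToInt(ip) & mask) === (ipToInt(net) & mask);
    }

    app.use((req, res, next) => {
      const ip = req.ip.replace('::ffff:', ''); // strip IPv4-mapped prefix if present
      if (SLOW_BLOCKS.some((cidr) => inBlock(ip, cidr))) {
        return setTimeout(next, DELAY_MS); // hold the request, then serve as usual
      }
      next();
    });

    app.get('/', (req, res) => res.send('Enjoy the slow lane.'));
    app.listen(3000);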
Why stop at the slow lane? There is nothing preventing ISP gatekeepers from wholeheartedly denying access.
Slowness is just the beginning -- the end goal is turning the internet into multiple competing walled gardens, where users become siloed audiences that can only be reached with permission. The global reach of the internet is at risk here.
Imagine if Netflix wasn't accessible to Comcast users from the beginning because they wouldn't pay the toll. Would it have thrived and grown as quickly as it did, or would it have died in its infancy?
What about Skype? Skype stepped on the toes of incumbent ISPs' long-distance revenue streams, I wouldn't put it past AT&T to purposefully degrade the quality of Skype calls or even outright deny them from happening. Prior to being bought out by Microsoft, would they have had the revenue to pay for access to users? Would Microsoft have even bought Skype at all?
We see this already with blackouts in cable TV for various sports games, etc.
In fact, it took 4 months to resolve a dispute between DirecTV and the Weather Channel [0], which affected 20 million people. There was some outrage, but not enough for DirecTV to relent. Instead, the content provider was forced to change [1].
There are lessons to be learned here for other content providers, like Netflix. My worry is that the gatekeepers still hold too much power and are strong arming content providers, using consumers as pawns.
And not just sports. In the mid-90s I lived in south Georgia and was a Star Trek fan, at the time that meant watching UPN. The local UPN station was blocked during prime time for our area (literally local, it was broadcast from that town) in order for the FL (Tallahassee?) station to retain its viewership. Only one problem, our cable offered only the local UPN station, not the FL one, so the blackout meant a 100% blackout of UPN during primetime for our town (short of rabbit ears, I wasn't that big a fan). Later, the two ABC affiliates ended up at odds with each other when the closer one ended up blocked by the one out of Atlanta, again during prime time.
Blackouts occur as a negotiation between local channels and the regional broadcasters (gate keepers), in general, such that local channels get priority over regional broadcasters.
Here is an example of a gatekeeper forcing blackouts the other way around [0]. The content producer was required to enforce a blackout on local channels as part of the deal with the gate keeper. Net result? Consumers lose.
That would be politically trickier to pull off. It's much easier to say, "We're not censoring anything - we're just not giving them 'premium' speed.", and then redefine the status quo as the new 'premium'[0].
For Netflix, it doesn't matter - having the equivalent of dial-up speed is essentially the same as being blocked entirely, since their service depends on reasonably fast download speed.
To illustrate another scenario, imagine two political candidates, one whose platform includes treating ISPs as a public utility, and one who doesn't. It would be very easy to slow the former's campaign website to a crawl (e.g., 30s or more load time per page). While this wouldn't prevent people from accessing the website, or him from getting his message out, it would seriously hamper it in very noticeable ways.
This way, the ISP isn't censoring any political candidates (that would be bad!). They're "just" not giving him the "premium" speed.
[0] This kind of redefinition happens all the time - notice that five years ago, mobile data plans were unlimited and texting was expensive for the carriers to offer. Suddenly, texting "became cheap" for them to offer, and data became limited. It's not that the costs dramatically changed (SMS always had literally zero marginal cost, since it piggybacks off of the packets already being sent), but fee structures and cost structures are oftentimes very different.
Texting never cost anything at all for the carriers to offer. It uses a sideband data stream that's always present whether it carries a payload or not.
Right. Paying per-packet when there are only connection costs is a way for carriers to create a cash cow.
That's why I fear pay-per-view models on the internet. Once I'm connected it isn't about the packets, yet lobbyists trained by cable TV are trying to inject that model where it makes no sense. Except of course to the scalping bastards trying to get rich selling packets.
People say that a lot, because they've heard it somewhere, but do you actually have the facts to back it up? I'd have to check my books to be sure, but I think you're talking about SMS delivery over the SACCH. That's an associated channel, meaning it goes along with a traffic channel, so that would apply if you were in a call. If you were not in a call, it would have to go over a standalone control channel SDCCH -- in other words, that channel would be being specifically used for the SMS transfer.
And, of course, there's the cost of running the SMSC.
Not really. I and a lot of Netflix customers would be happy to wait a few minutes for it to buffer enough of the video to begin showing it in HD. I'm not given that option though. "Luckily" I choose to live in an area with an ethical service provider so I get HD Netflix at no extra charge to me or Netflix.
> Imagine if Netflix wasn't accessible to Comcast users from the beginning because they wouldn't pay the toll. Would it have thrived and grown as quickly as it did, or would it have died in its infancy?
If I was building a bandwidth-intensive startup in the US at the moment, I'd be seriously looking at using port-hopping p2p as my distribution mechanism with some randomisation of the payload to make profiling/identification as difficult as possible.
p2p is a terrible distribution mechanism; it's pretty flaky and it destroys battery life on mobile devices which is where most of the growth in the tech sector is right now. Also there are issues with DRM in p2p. Yeah I know, DRM isn't popular, but video content owners insist on it and distributors can't do anything about that.
I never actually understood this. DRM is already largely ineffective, but how would P2P distribution make it any more ineffective? It would be trivial to distribute encrypted content using P2P and then DRM the decryption.
Modern DRM schemes use one-time keys. E.g. you can pull down a video stream, but you have to respond with your session ID and a login token which they then verify against subscriber information.
And the whole point of DRM is to allow content providers to enforce the restrictions they place on content distribution contracts. For example, if there is a flag that says "this content can only be played on cell-phones and tablets" they want to ensure that any player that can play the content honors that restriction and doesn't allow output via HDMI.
Naturally some people will be able to defeat them, but they're not meant to be ironclad. They are effective for the majority of people out there who aren't hackers/nerds.
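To make the split concrete, here's a rough Node sketch of the flow described above: the bulk content is encrypted once and can travel over anything (CDN, P2P, whatever), while only the small key request has to hit the license server with the session ID and login token. The endpoint, field names, and cipher choice are illustrative, not any real vendor's API:

    // The heavy, encrypted segments can be fetched from peers; only this exchange
    // needs the licensing server. Everything named here is hypothetical. (Node 18+.)
    const crypto = require('crypto');

    async function fetchContentKey(sessionId, loginToken, contentId) {
      const res = await fetch('https://license.example.com/key', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ sessionId, loginToken, contentId }),
      });
      if (!res.ok) throw new Error('license denied'); // bad token, wrong device class, etc.
      const { keyHex, ivHex } = await res.json();     // one-time key for this session
      return { key: Buffer.from(keyHex, 'hex'), iv: Buffer.from(ivHex, 'hex') };
    }

    function decryptSegment(encryptedSegment, key, iv) {
      // AES-128-CBC purely as an illustration; real deployments use DRM-specific schemes.
      const decipher = crypto.createDecipheriv('aes-128-cbc', key, iv);
      return Buffer.concat([decipher.update(encryptedSegment), decipher.final()]);
    }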
P2P can work well, World of Warcraft being a prime example. DRM is also not a problem; an easy solution is to keep a few important bits in another file that is downloaded separately. The real issue is that rock-solid, multithreaded, low-level networking code is way beyond the skill level of the average Ruby on Rails code monkey.
They effectively already do this by selling highly asymmetric connections. They can't do much more than they already are because you have to have enough upload bandwidth to acknowledge all the packets you receive when downloading. And too many customers (and large companies) would complain to Congress if uploading a picture to Instagram or a clip to YouTube took several minutes or cost a lot of money.
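Rough numbers behind the ACK point (the segment and ACK sizes are approximations, and delayed ACKs are assumed):

    // Upload needed just to ACK a download, very roughly.
    const downloadMbps = 100;
    const segmentBytes = 1500;  // typical TCP segment on the wire
    const ackBytes = 60;        // ACK plus framing, approximately
    const segmentsPerAck = 2;   // delayed ACKs

    const ackMbps = downloadMbps * ackBytes / (segmentBytes * segmentsPerAck);
    console.log('~' + ackMbps.toFixed(1) + ' Mbit/s of upload to sustain a ' +
      downloadMbps + ' Mbit/s download');
    // roughly 2 Mbit/s, so a 100/1 connection would throttle its own downloads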
That would impact the ability of users to access corporate VPNs and work from home. My hope is that that's a large enough user base that they can't limit it too much.
I agree with this 100%. When my internet gets slow I waste so much time, because I keep thinking it will finally work for me, whereas if it's not working at all, I can go and do something else that is productive and doesn't require internet access.
All of that has already happened. They ban BitTorrent where and when they can get away with it, and they block Skype on particular mobile data contracts.
> The global reach of the internet is at risk here.
The Internet has, until now, had a global reach, and that may be at risk with recent developments.
Individual services such as Netflix, Youtube or Spotify have not had a global reach and may never have as long as the current structure for licensing and distribution of content persists.
Internet based distribution of content has always been under attack or limited by the traditional content distributors such as record labels, movie and tv studios and publishers. What is new is the attack from the ISPs from the other side.
There's a lot not to like about the FCC's proposal, but one of its virtues is that it does include, as a guiding principle, "no blocking." http://www.fcc.gov/guides/open-internet So, under whatever rules it comes up with, it seems quite unlikely that your assertion, "There is nothing preventing ISP gatekeepers from wholeheartedly denying access," will be true.
It is possible, of course, that this rule could corrode over time, or that the slow-lane might be so slow that it serves as a de-facto block. But I think we'll need to see the actual proposed rule before we can assess that risk realistically.
I think blocking might already be possible. Comcast is known for intermittent service problems, it would probably be pretty easy for them to give "intermittent service problems" to anyone trying to access a blocked site. They couldn't stop them from accessing it forever, but they could make it really really annoying.
That's true. But actually, on that score, it seems like the new rules are likely to be an improvement. (!!!) Now, if you think that Comcast is blocking content in the guise of "service problems," the most you can do, realistically, is call customer service. But if they do that with an explicit "no blocking" rule in place, they're just asking to get sued--either by a consumer, a state communications commission, or the FCC itself.
It doesn't matter if it's no-blocking if they allow their current service to degrade to the point that it is unusable. That's exactly the same as blocking everything except a few specific services that paid for the 'fast lane'. And it's already happening.
Well, it will be like anything else - the first hit is free, then you have to pay. Netflix wouldn't have been killed in its infancy; it would have been killed as it became a toddler and started to walk and to run.
So your startup will be going like gangbusters for the first couple of months, then you'll get a letter noting that your "abnormal usage patterns" require "an agreement" or else you "will be subject to rate-limiting".
And the fee will be set as close to your estimated profits as the ISP can come...
Hell, why stop at Netflix? It would be a ballsy move, but I'd applaud Google if they took one day and directed all searches from, say, Comcast to a static page explaining the dangers of "slow lanes" and "fast lanes" in the internet.
Much more effective would be for operators to ban ISPs. Imagine the internet telling specific users that they have to get off comcast or whatever in order to use their service for a particular day out of every month. Do it maddox style with a nice middle finger. Make users want to switch instead of just pretend like there's no alternatives.
>Much more effective would be for operators to ban ISPs.
I heartily disagree with this approach. Not only does it harken back to the days of Prodigy, AOL, CompuServe and the like where each service had its own content, but it's very anti-internet.
>Make users want to switch
That's great if there are alternatives to switch to.
Yes, I could technically switch to dialup. It is technically internet. It's not going to allow me to do any work or any of my hobbies, however. So I don't consider it a valid option.
Yes, I could also try out some kind of dish provider. If I wanted to chop down some trees. Besides, those also don't work for me (almost no upload bandwidth, and horrible latency issues).
>instead of just pretend like there's no alternatives.
I live in an area where Comcast literally is my only option for high-speed internet. I'm not pretending.
> I live in an area where Comcast literally is my only option for high-speed internet. I'm not pretending.
IIRC most of the country lives in areas where there are only 1 or 2 broadband options. The non-cable operator is likely to be a Baby Bell that's also spent heavily to lobby the FCC to kill net neutrality and so wouldn't be an effective protest switch.
Yes, this is true. The worst part is that most of the baby bell solutions are slow-ass DSL.
For instance, in most of Metro-Atlanta, you can get Comcast or AT&T UVerse (AT&T being the name of what was a Baby Bell, Cingular). UVerse is simply slow-as-shit DSL and is awful. (Fun fact: AT&T tried to sell and promote UVerse as Fiber in the mid-2000s. My parents have fiber in their home but no way to get a fiber provider -- AT&T was over-selling them on a "fiber" solution that was literally just copper DSL wires. The whole telecom industry is full of crooks).
In New York City, you basically have one provider. If you're very lucky and live in an area Verizon (another former baby bell) also services, you can get Fiber -- and that's awesome -- but the zoning for this stuff is often street by street. The building across the street from my apartment can get Verizon. My building can't -- for whatever reason. Verizon did tell me they could probably rig me access if I could get them access to the basement -- but I'm not the building owner and I don't have time to deal with what happens when the line gets cut accidentally.
I'm very fortunate that my Internet/TV provider is relatively sane (Cablevision) -- if I lived three blocks further away, I'd be stuck in TWC hell.
What most consumers also don't know is that in many areas, the cable provider can be chosen by the property management company (if you're in an apartment complex) or the condo association. So what happens is that operators will "bid" on that area - and the lowest bid wins. The problem is, even if you live in a Comcast or Verizon or whatever area, you still can't get that service. You're required to be with whoever your property management or co-op board chose. I actually didn't buy a condo that was in a great location and had a great floorplan because of their choice of ISP/cable provider.
The whole system is fucked. It really is. And the net neutrality and fast-lane aspects are only a small part of a much more broken and corrupt system.
That said, just because we can't fix the whole system -- and we can't -- doesn't mean we can't put pressure on companies to not fuck customers over even more, by making access-agreements for content. The system is already not in our favor -- no reason to make it even worse.
> I heartily disagree with this approach. Not only does it harken back to the days of Prodigy, AOL, CompuServe and the like where each service had its own content, but it's very anti-internet.
Anti-internet to ban an ISP that is trying to decimate the internet? How does that logically make any sense?
Let's say each website/server admin blacklisted various ISPs from their servers because they did something the admin didn't like. Maybe they noticed throttling to/from their site, or spam, or just had such a bad experience once as a customer that they now consider that ISP to be "bad for the internet".
What would we end up with? Most of the internet not working for anybody. Friends sending you links that you can't get to because you're on different ISPs. Needing to buy service at more than one ISP to "provide coverage". If it happened on a large enough scale, it would unravel what we know as the internet.
> Make users want to switch instead of just pretend like there's no alternatives.
I absolutely want to switch and will as soon as an alternative is available. In my entire life, I've never lived in a place that had anything other than Comcast. And I've always lived in relatively dense areas.
The alternatives are to go without and enjoy the fresh air, pay up for the data cap and live with the low bandwidth (it sucks but it's not as sucky as Comcast), setup a mesh network, lobby your local government, move.
Let me guess, if you all were naturalists, you would all live in the heart of New York and complain to your neighbors (but not your city) all day about the lack of green.
I take this to mean that you're setting up your own ISP which is going to service my area and provide better rates, faster connections, and better service than Comcast, so that I have an alternative for internet?
That's great, I'm really excited to hear that! Let me know what number I can call to sign up!
Why should a competitor charge better rates and have faster connections in order for it to be an alternative? Are you looking for the cheapest provider or the most ethical?
So you see little or no difference between a cheaper unethical ISP and a more expensive ethical one?
I'm not sure what you think this debate is about.
I'd love to support a more ethical ISP, but as a practical matter, internet is already expensive and spending more for the same or less isn't really a workable option. Downgrading my connection is also not a great option; if I'm forced to choose between a fast expensive connection and a cheap slow connection, then yeah; it's great that Net Neutrality is a thing, but now I'm stuck in the shitty situation I was trying to prevent anyways.
When money's the deciding factor, decisions are easy. There's more to life than money sometimes, though.
I could shop at Walmart and have an extra 20 bucks for my internet bill. Instead, I choose the slower internet with the ethical provider so that I can shop not at Walmart.
Not to say I'm perfect by any means, but this is just an example of the decisions I try to make every day. These decisions are hard sometimes.
There's plenty more to life than money: there's a wedding I'm planning, a wife I'm going to support while she goes to school, pets that I take care of, artistic organizations I support, etc.
Giving up on all of my hobbies that require internet, and having my fiancee do the same, is not practical. Paying more for internet is not practical. This is not an unreasonable position to take, nor does it represent some tragic moral failure on my part.
In any case, it's a thought experiment that's not worth defending myself further, because there are no alternatives to Comcast where I live.
I'm also seeing something strange on my iPhone. It looks like there's something off with the antialiasing, and the text isn't selectable. It looks like a scaled up image, though I don't think it is.