Actually, UDP was "un-designed" by me and others. By this I mean that
UDP was the final expression of a process that today we would call
"factoring" an overcomplex design. Originally, the ARPANET end-to-end
protocol NCP was a "kitchen sink" oriented toward providing remote
teletype-centric access using the "telnet" protocol and the "FTP"
protocol to remote machines over a packet network.
A group of us, interested in a mix of real-time telephony, local area
networks, distributed operating systems, and communications security,
argued for several years for a datagram based network, rather than a
virtual circuit based network. The group involved me, John Schoch and
Yogen Dalal of Xerox PARC, Danny Cohen of ISI (now at Caltech, I think),
and Steve Crocker, with Jon Postel as a supporter, and Vint Cerf and Bob
Kahn as neutral referees.
UDP was actually "designed" in 30 minutes on a blackboard when we decided to
pull the original TCP protocol apart into TCP and IP, and created UDP on
top of IP as an alternative for multiplexing and demultiplexing IP
datagrams inside a host among the various host processes or tasks. But it
was a placeholder that enabled all the non-virtual-circuit protocols since
then to be invented, including encapsulation, RTP, DNS, ..., without having
to negotiate for permission either to define a new protocol or to extend
TCP by adding "features".
Exactly. For those of us who like to play around with protocols over the real Internet, the availability of UDP as basically a proxy IP protocol is a godsend. Well, what really matters is that a few uses of UDP over the internet are well established (DNS, VoIP, etc.), and that makes it hard for ISPs and middle-boxes to block arbitrary UDP ports. And that makes UDP the perfect conduit for inventing custom protocols.
Yes, this is a joke, but it's a ha-ha-only-serious kind of joke.
A rhetorical question: if other protocols are better at this or that, what forces have made TCP the only transport protocol of note for so long?
Well, in TCP, window scaling needs to be OS-global to avoid flapping. This means TCP lives in the kernel. And this means, if you want a protocol with any property similar to window scaling, it also needs to live in the kernel.
Which is to say, if you want your protocol to do anything that requires machine-level resource pooling, it has to be machine-global, not app-local.
This is true trivially if your protocol wants to live in layer-4; anything peer to TCP or UDP has to have its own kernel driver in every major OS, and in every major firewall/load-balancer/NAT appliance.
But even when you build on top of UDP on "layer 4.5", you're still under the same constraint. Even if you're not in the kernel, you still need a single-instance-per-machine daemon or somesuch. You still need to install something as root. The 1993-era user story of "my ISP just told me to enable something called NetBEUI!?" still applies.
The reason much of this will be decreasingly relevant?
Unikernels.
If your app has guaranteed 1:1 ownership of a "machine", then your app can speak whatever wire protocols it likes; an app-level resource pool is a global resource pool.
Unikernels talking to other unikernels can easily make use of new layer-4 protocols. That's exciting!
Combined with the coming wave of "cloudlets" (small VMs paired to an individual customer of a device manufacturer or ISP, where the customer's mobile devices and home network use the cloudlet as a featureful VPN proxy), these new layer-4 protocols can be tunneled out to customers, too, avoiding any and all intervening muck.
The next 10 years of networking will be interesting!
"what forces have made TCP the only transport protocol of note for so long?"
Well, another one is that rather a lot of things block anything that isn't TCP, DNS, or whatever else is the bare minimum for "the web" to work. They'll even go so far as to pick out which types of ICMP messages to pass through.
The internet at large has been sitting on SCTP for a very long time, because nobody expects SCTP to be able to get through firewalls. (On that note, perhaps the solution is to reformulate SCTP in terms of UDP, as that's what everybody else seems to be doing.)
UDP is quickly becoming the base protocol for "I don't want TCP but I do need something that gets through firewalls". Sort of like how so many things float on port 80 or HTTP that don't really need to, but it's what can get through firewalls and such.
UDP is quickly becoming the base protocol for "I don't want TCP but I do need something that gets through firewalls".
Exactly right. No need to invent a new protocol number in the IP headers. UDP is the perfect vehicle for custom protocols. And once you start experimenting with a somewhat intelligent custom protocol, it's amazing how easy it is to beat even well-tuned TCP stacks on average over a large number of users. This is especially true, by large margins, over mobile networks.
Why does window scaling need to be OS-global? That would imply that window scaling will also fail when you have several machines on the same network, since they won't be doing anything special to collaborate on window sizing.
Conversely, any hints and clues to window sizes that are provided by network packets can be read from a user process and so don't need to reside in the kernel either.
If multiple processes running on a single physical machine can't avoid this "window flapping" problem without cooperating, then multiple unikernels running in VMs on a single physical machine also can't. VMs are not magic.
Rather, the need for cooperation actually extends to groups of physical machines sharing a network connection too. Indeed, that's the key insight underlying objectivist mouse-based congestion control:
I might not be reading this correctly, but in the absurd extreme, you can keep extending this argument farther and farther out beyond a single machine to a LAN and onto the wider internet. In that version of the argument, all the nodes on the internet need to have the global view of network state on every other node.
The controlling principle behind almost all congestion control algorithms is that a single node can derive the properties of the path between itself and the other end by using the state of the transfer visible only on that one node. Once you reject that principle, any algorithm you come up with will quickly become unscalable.
My guess is that the OS kernel has a better idea of the total bandwidth available than each individual process. If two processes decide to increase their bandwidth at the same time, both of them may drop a packet and fall back. I'm guessing the flapping in this case is processes increasing and decreasing their bandwidth because there is no central control of the data rates.
The TCP stack is controlled entirely by the kernel, so it handles things like requesting dropped packets and rate limiting (which relates directly to "packets in flight").
With UDP, that layer of ACKs, ordering, and rate limiting is gone.
If you want to send a million packets a second to a 56k modem, sure, the kernel isn't going to stop you.
Whilst it might be possible to have per-TCP-connection windowing parameters, I doubt it'd be very efficient.
While, again, this is a fun spin on TCP - QUIC rides on UDP. Interestingly enough I pulled the stats from my home network for the past 30 days and I had roughly 300MB and 9k QUIC sessions (Chrome defaults to QUIC for the majority of Google services) via my firewall. If you want to see QUIC in action pop this into your Chrome bar with a Google endpoint open: chrome://net-internals/#quic
Compared to HTTP things like QUIC make perfect sense - especially when you consider all the RTT wasted on pipelined connections to the same site. You're only opening all those sockets because they're not multiplexed in any manner - your browser wants a JPG, grab the JPG. Your browser wants another, do it again...
So while this is light hearted, there's some validity to next gen web protocols that are more adapted to how the data is consumed.
You should read the design document and specification rationale; it has more details than a blog post. Considering that on the first page, in bold font, it describes QUIC as "Multiplexed Stream Transport over UDP", it should only be compared to TCP+TLS in operation - but it helps to solve the multiplexing problem for anything that decides to take advantage of it directly. The majority of applications using TCP+TLS as a transport today have the same problem and open many sockets during operation.
Agreed SPDY and HTTP/2 have multiplexing built in - but so does QUIC.
"This is a working document, for group discussion and editing, which we expect to evolve into a somewhat fleshed out design document. The expectation is that we will flesh out a design for a tunneling protocol, running atop UDP, which can multiplex a large number of streams between two endpoints (a client, which initiates the overall connection, and a server). Each stream may, for example, be nearly equivalent to an independent TCP connection."
Edit:
And this...
"Pairs of IP addresses and sockets are finite resources. Today, too many distinct connections are routinely made between clients and servers, utilizing a multitude of sockets, and often carrying redundant information. A multiplexed transport has the potential for unifying the traffic and reducing the port utilization count. A multiplexed transport can unify reporting and responses to channel characteristics (packet loss, etc.), and also allow higher level application protocols (such as SPDY) to reduce or compress redundant data transmissions (such as headers). In the presence of layer-3 load-balancers, a multiplexed transport has the potential to allow the different data flows, coming and going to a client, to be served on a single server."
"Well, in TCP, window scaling needs to be OS-global to avoid flapping. This means TCP lives in the kernel. And this means, if you want a protocol with any property similar to window scaling, it also needs to live in the kernel."
Sorry but this is a huge assertion with absolutely no supporting evidence or argument. What exactly do you mean by "flapping"? Why do we need to avoid it? And why does a protocol need to live in the kernel to avoid flapping?
I suspect those assertions are based on an assumption that congestion control gets done in a cooperative manner across connections. (I'm assuming the "window" here is congestion window.) I've never seen or heard of a TCP stack that somehow takes into account all TCP flows in a system. As a matter of fact, even if that was the case, it would be too limited a view of the responsibilities of TCP congestion control.
The congestion control algorithms are working on a global picture of congestion... as in across the entire network path between the two communicating nodes. Various cc algorithms (Reno, Vegas, BIC, CUBIC, etc.) use their own techniques to probe the network path between the two endpoints (either using packet drops or round-trip delay increases as a signal of congestion). As such, the concern of congestion control extends beyond just the one node where the algorithm happens to be running. But the important point here is that each node on the network uses only the signals available locally at that node. And for the purposes of this argument, you can consider each of the TCP endpoints on a single machine as an independent node.
I don't see any convincing case why two separate instances of these state machines can't be running independently in kernel and user space. Since ultimately, if they're competing for bandwidth over a common, narrow part of the pipe, they'd each use their own congestion signals to arrive at the same conclusion (assuming the protocols are designed for fairness).
> I've never seen or heard of a TCP stack that somehow takes into account all TCP flows in a system.
Taking into account all flows is indeed not useful. But there's some useful middle ground between "all flows" and "each flow completely separately".
At work we've got a transparent proxy with an integrated userspace TCP stack. When it's installed in a mobile operator's network, all of that operator's subscriber traffic will be going through it. Now, for each subscriber, all the flows between the terminal and the proxy will be in the same "congestion domain"; they're guaranteed to be competing over the same network resources (whether it's backhaul or radio capacity). And in fact, for the scarcest resource, radio capacity, each subscriber is essentially in their own congestion domain due to the way the radio base stations' scheduling algorithms work. Having the custom TCP stack treat all of a single user's flows as a single unit rather than individually is a big win.
(The original claim that TCP must be in the kernel for "window scaling" doesn't make any sense to me; I have no idea what it could be referring to.)
Hmm... interesting insight from the core network. I guess another reason to use a UDP (encrypted) protocol would be to prevent such middle-box shenanigans ;-)
Aeron from Martin Thompson of mechanical sympathy fame uses a custom, reliable protocol over UDP to achieve much lower latency than is possible with TCP solutions. There seems to be a growing awareness that one-size fits all protocols like TCP don't necessarily offer the best tradeoffs, even when you need reliability.
https://github.com/real-logic/Aeron
TCP has an advantage over UDP which hasn't been mentioned yet in this thread: it effectively prevents source IP spoofing. This is a huge issue for some UDP-based protocols, since it allows an attacker to send a packet with a spoofed source IP address. The response is then sent to that IP address. Protocols like DNS and NTP, which send large answers for comparatively small requests, can be misused for DDoS attacks this way (amplification).
QUIC, Google's experimental TCP replacement (at least for web traffic), does in fact prevent IP spoofing. It uses token authentication instead of a handshake like TCP does.
This is only an advantage of TCP over raw UDP per se. Of course, the idea is that you don't use raw UDP, but you build some session layer over it. The UDP server won't reply to a spoofed client because the packet doesn't validate. For instance, the session could be encrypted (with proper integrity checks) and the spoofer doesn't have the session key.
In fact, UDP servers typically handle a large number of clients with a single socket. So each `recv` system call on the UDP socket pulls a message from a different client. The source IP and port number could be used to do the basic matching of the message to the client, but then subject to more checks.
This can end up being more robust than TCP. TCP can be subject to IP spoofing, and because it doesn't have the crypto, this spoofing can interfere with the session (a form of DDoS): e.g. inject bytes into the TCP stream, which then screw up the stream so that it has to be scrapped and reconnected. A UDP based protocol can robustly drop garbage, so that the session is not affected (except for the network and CPU usage of these spoofed packets).
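A minimal sketch of that pattern (the session-key setup is assumed to happen elsewhere, e.g. during a prior handshake): one UDP socket serves many clients, datagrams are matched to a client by source address, and anything that fails the integrity check is silently dropped so the session is unaffected.

    import hashlib
    import hmac
    import socket

    # Hypothetical per-client keys, established out of band or via a prior handshake.
    session_keys = {}  # (ip, port) -> bytes

    def verify(addr, datagram):
        """Return the payload if the datagram carries a valid MAC for this client, else None."""
        key = session_keys.get(addr)
        if key is None or len(datagram) < 32:
            return None
        mac, payload = datagram[:32], datagram[32:]
        expected = hmac.new(key, payload, hashlib.sha256).digest()
        return payload if hmac.compare_digest(mac, expected) else None

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9000))

    while True:
        datagram, addr = sock.recvfrom(2048)  # one socket, many clients
        payload = verify(addr, datagram)
        if payload is None:
            continue  # spoofed or garbage packet: drop it and carry on
        # ... handle payload for the client identified by addr ...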
This is a solved problem with BCP 38. By filtering ingress traffic that does not originate from the networks it claims to be from, handshakeless protocols are possible and abuse is prevented.
A quick read of Wiki tells me that BCP 38 was introduced by RFC 2827 in May 2000. But it's still not a "solved problem" since many ISPs and companies don't implement it.
How do you get more widespread adoption? It's been 15 years.
That, and TCP can send more than 64 KB at a time.
UDP is really fast, supports multicast, and when you get a message it's all there (not a stream). For large messages, breaking them into chunks and reassembling them is a pain. By the time you do all the processing and handshaking you end up back at TCP.
Interestingly, the size limit is around 64 KB and varies slightly between Solaris, HP-UX, and Linux.
The "it's all there" part is important, and sometimes not being a stream is a big advantage.
When you write a protocol using TCP you inevitably have to send the "content length" as part of the protocol, and then you have to account for spoofing of that number and other ways an attacker can create a denial of service, and also account for breaks in transmission. It's a pain in the ass for anything where you'd want to send small messages.
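To make the contrast concrete, here is a hedged sketch (generic helper names, not from any particular codebase): over TCP you end up writing length-prefixed framing with sanity checks and short-read handling, while a single UDP recvfrom hands you the whole message or nothing.

    import struct

    MAX_MESSAGE = 64 * 1024  # guard against an attacker-supplied absurd length

    def read_exact(conn, n):
        """Read exactly n bytes from a TCP socket, handling short reads."""
        buf = b""
        while len(buf) < n:
            chunk = conn.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed mid-message")
            buf += chunk
        return buf

    def read_message(conn):
        """Read one length-prefixed message from a TCP stream."""
        (length,) = struct.unpack("!I", read_exact(conn, 4))
        if length > MAX_MESSAGE:
            raise ValueError("unreasonable length, possible DoS")
        return read_exact(conn, length)

    # With UDP, the framing problem disappears for small messages:
    #   message, addr = udp_sock.recvfrom(65535)  # the whole message, or nothing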
I really do like that when you get a UDP message, it's the whole thing. UDP is awesomely easy, and I honestly like it better as long as the messages are small.
(Although I'm old enough to remember whole computers with one UDP packet's worth (64 KB) of memory; that isn't a lot today.)
The loop-and-keep-reading of TCP does get tiresome, and if the sender goes down you get a broken pipe... plus memory allocation for the read...
We used UDP and UDP multicast a lot, and it was great to be able to fire up a diagnostic tool to join the multicast group and listen in. With multicast, the OS networking stack handles keeping your subscription alive, with no intervention from the software.
This is not new. This is an attempt to make something that has existed for decades seem new, hip, and edgy. Prefixing "no" to existing traditional technologies does not make something new or cool.
The only people who would buy into this are people who have no experience with networking, and thus this would only affect their probable usage of libraries down the road with the wrong intentions.
Hell, I don't even know who this is really supposed to address. My best guess is people with very little knowledge of network protocols and their usages.
I don't think they are. Other than the first paragraph, with its self-deprecation and references to Node.js and NoSQL, which seems intended to head off the derision such a hipsterish manifesto would rightly incur, the rest seems entirely serious.
The very last link in their "are you serious?" section is the sentence "Or maybe you have some clue about how the internet actually works, and you don’t need a manifesto to tell you what you already know" with a link to an article titled "TCP doesn't suck, and all the proposed bufferbloat fixes are identical." I read the whole thing as a joke, and I don't see anything that suggests otherwise anywhere in it.
Seriously, how can you read "just as the Reactive Manifesto reminded us that new branding can give a youthful glow to decades-old ideas" as anything but funny?
I'm a NoSQL developer and I've been watching protocols like WebRTC and CoAP mature over the years. It's fun for me to see someone shine a spotlight on the area, even if it isn't exactly new.
UDP can do things TCP can't do, I think it's worth exploring.
Holy shit this trolled me hard. I was raging reading about how people were too dumb to understand how to handle TCP failures but were apparently smart enough to re-implement re-ordering and error recovery in their own manner.
I couldn't agree more. TCP is wonderful when you need it, but you very often don't need it (and pay a heavy price for features you don't need).
I wouldn't want to write HTTP over UDP, but neither would I want to send telemetry data ('{"timestamp":343821902, "value": 43}') over TCP. UDP is brilliant for systems where you're more concerned about tracking the current state of things than accurately replaying the last n set of transactions.
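As a sketch of that telemetry case (the collector address and field names are made up): a fire-and-forget UDP send, where a lost datagram just means one missing sample rather than a stalled connection.

    import json
    import socket
    import time

    COLLECTOR = ("127.0.0.1", 8125)  # stand-in for a hypothetical metrics collector

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def report(value):
        sample = {"timestamp": int(time.time()), "value": value}
        sock.sendto(json.dumps(sample).encode(), COLLECTOR)  # no connection, no retry

    report(43)  # if this sample is lost, the next one supersedes it anyway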
However, QUIC provides a lot of the features TCP has but very fine-tuned for HTTP.
This is fine; HTTP is probably the largest layer-7 protocol on the internet. But if you were to design a layer-7 protocol that needed a lot of the features that TCP provides, it would be quite dumb to reimplement them yourself on top of UDP.
Why you want TCP:
You care about the data, you want it to arrive in order, and you need it to arrive. You want to be notified if there is a problem at the other end. You want the data sent to match exactly the data delivered.
File transfer, simple protocols, advanced critical protocols. Anything where the dev doesn't want to think about how to account for packet loss. It's already done for you in a standard interface.
Why you want UDP:
You want the latest packet of data, and you want it now. It doesn't matter if you lose a few packets. Telephony is the classic example: if you lose a frame it's not the end of the world. Streaming video is another; use FEC to cope with loss.
Tedious real world analogy:
When you send a letter or a parcel, you have two options: send it recorded, paying slightly more per go but never having to phone up the recipient to see if they received it, or just post it normally.
However, if the letter is of little worth (like a flyer or something similar) then the notification and/or guarantee that it has arrived or will arrive is a pointless cost.
I'm intimately familiar with cases where you actually would prefer to use UDP. My point is most of the time these cases aren't what you're dealing with.
In addition to reliability, TCP provides congestion control which provides approximate fairness of bandwidth utilization per connection. Fairness here meaning given a limited pipe of size W each connection gets a W/N slice of the pipe's available bandwidth. I don't know that I'd like to live on an internet where every hipster made their own decision about how to handle congestion control, see also congestive collapse.
>every hipster made their own decision about how to handle congestion control
Unless they had access to a tier 1/2 network provider it'd be mostly harmless I suspect. They'd spam their local gateway. "Look I can send 300k packets a second over standard DSL"
That's completely dependent on the application. SNMP over TCP would be a net loss because 1) it doesn't buy you anything as SNMP doesn't have any requirements that TCP meets above what UDP provides, 2) TCP would be exceedingly expensive, requiring connection buildup and teardown, ACKs, etc., and 3) it's way easier to implement UDP than TCP, so dumb devices basically need to only construct a single Ethernet frame to make it work.
Many, many applications deal much better with loss than they do with delayed retries. You don't want a VoIP app to perfectly recreate what the other party said 30 seconds ago, games usually only care about current state, and so on. In many of those cases, TCP is the wrong choice, but a lot of people use it out of familiarity.
Remember, UDP came after TCP. One isn't "better" than the other and they serve different requirements.
Interesting. I wouldn't want to shove my base monitoring data over UDP. Without my base monitoring, our product devs are blind and things go to shit. I can't risk that.
And, with a bunch of per-site monitoring frontends with batching, compression and deduplication, thrift/protobuf and long-running connections, the TCP overhead is meh.
If your network is so close to falling apart under load, or so broken due to network issues, or your servers so overloaded, that you start seeing appreciable packet loss with UDP internally at a site, then chances are the last thing you want is to load your system even more and slow things down by retransmitting samples that will be obsolete in a couple of seconds anyway...
Serious exceptions, sure, retry. But serious exceptions are rare. What's not rare is sampled memory usage, cpu usage etc..
Sure, you can't cope without monitoring. But you can absolutely cope if 1 out of 10 of each of those status updates doesn't get to you.
This is the completely standard connection-oriented vs connectionless link tradeoff.
"yet another hipster technology manifesto" so good content. I guess it is a good summary what is wrong with the tech industry (lots of "solutions" are abused, built for different era, etc.) and also bashes a bit on technology like node.js but not too much. Entertaining read.
I honestly can't make out if the author is trying to be serious or sarcastic. For instance:
"We follow in their Chukka-booted footsteps by challenging the comforts of the ubiquitous Transmission Control Protocol and recognizing the well-deserved renaissance of artisanal protocols built on UDP. We didn’t start this fire, we’re just calling it what it is: the NoTCP movement"
I'd admit to being too dumb to parse the exact sentiment behind that paragraph. Though it does sound somewhat like sarcasm to me.
Then in the rest of the paragraphs, he actually goes on to make a pretty good case for why one might want to get rid of TCP (especially from under the complex beast that HTTP/2.0 is, with its own flow control, multiplexing, etc.).
For what it's worth, I wrote my own manifesto against using TCP in all cases a few weeks ago. But this one is geared very specifically towards the needs of mobile native apps.
One serious argument for using a connectionless protocol is mobile. The reason is that mobile networks have a MAC/RLC layer (talking 3G/4G here, e.g. LTE) and those layers have ARQ (in other words, retransmission). At times those retransmissions can spike (a sudden signal drop, for example). That will blow the RTO on TCP and you get needless TCP retransmissions. This is where you have two competing retransmission mechanisms flapping. The obvious fix would be for the mobile operators to terminate the TCP flow at the base station and use the Hybrid ARQ mechanism; that could happen eventually, but it would mean a lot of re-design for them, i.e. cost. So if your app is going to be on mobile, it might perform better using a connectionless protocol, depending on the situation.
> Sign the manifesto: http://notcp.io . If you can figure out a way to sign it using UDP, let us know.
It's pretty clear they know they are being ridiculous.
That said, it is nice to have reminders that there are alternatives to the overwhelmingly popular default. Kids these days grow up using TCP so much they become devs who can't conceive of whole categories of software that don't happen to line up with the tradeoffs designed into TCP. Heck, seems like most people have a hard time remembering that there's more to the Internet than HTTP.
UDP, while very simple from a design point of view, is actually very complicated to use if you plan to do anything more than send a few packets.
The issue is primarily that you don't have a feedback loop that tells you the rate at which you should send the data. Also, badly written applications that use UDP can create congestion collapse in a network (no congestion control/avoidance).
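A minimal sketch of the kind of machinery a UDP application ends up building for itself (here just a crude token-bucket pacer with an assumed fixed rate; real congestion control would also have to back off in response to loss or delay, which is exactly the missing feedback loop):

    import socket
    import time

    class Pacer:
        """Crude token bucket: caps the send rate, but does NOT react to congestion."""
        def __init__(self, bytes_per_second, burst=16 * 1024):
            self.rate = bytes_per_second
            self.burst = burst
            self.tokens = burst
            self.last = time.monotonic()

        def wait_for(self, nbytes):
            while True:
                now = time.monotonic()
                self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= nbytes:
                    self.tokens -= nbytes
                    return
                time.sleep((nbytes - self.tokens) / self.rate)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    pacer = Pacer(bytes_per_second=1_000_000)  # assumed fixed 1 MB/s budget
    for chunk in (b"x" * 1400 for _ in range(100)):
        pacer.wait_for(len(chunk))
        sock.sendto(chunk, ("127.0.0.1", 9999))  # hypothetical receiver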
This article has a ring of truth. With HTTP/2 gaining traction, I cannot help but wonder if a lot of problems could be solved by going down a level of complexity instead of stacking more layers on top of TCP.
For example, why not allow UDP from JavaScript in the browser with CORS-like protections to limit DoS attacks? I.e., only allow UDP connections back to the origin unless the server explicitly allows non-origin access from the requesting domain.
This approach seems a lot simpler and opens the doors to all kinds of improvements not just those designed into HTTP/2.
Tons of discussion and nobody mentioned this:
https://en.wikipedia.org/wiki/Reliable_User_Datagram_Protoco...
The reason for using TCP is that using UDP requires an application-specific implementation, and making it good and efficient would require more work than making the rest of the software. That's why applications only use UDP where it really does make a difference. Others just won't bother.
It was designed by telcos for telcos; I don't know if they ever got around to using it. It can't be used in the normal Internet because it doesn't pass through NATs.
You interact with it every day if you use a cell phone. The control plane of the NodeB/eNodeB (the "cell towers"), and further into the core network in UMTS/LTE, uses SCTP. Control messages for voice calls and SMS are carried atop SCTP. Control-plane interconnect between carriers for roaming is done over SCTP.
(Though there's still plenty of gear using TDM or ATM.)
The only problem is: good luck getting corporate firewalls to support connection state tracking for your custom UDP-based protocol, and getting IT managers to turn this on.
I don't see the point of the joke/sarcasm. UDP has its uses if you want real-time stuff and more granular, fine-controlled data. TCP is just for larger batches.
Of course UDP is useless in many cases, but it does have its uses.
It looks like this is just mocking people who can't follow the "don't reinvent the wheel" rule, but why assume that all types of wheel are already invented? P2P networking needs fine control. Bare metal is not here for nothing.
Once the non-TCP protocol has been written, we'll need to make sure it's portable through corporate and ISP firewalls, so we should write an HTTP extension that tunnels the non-TCP protocol over HTTP proxies without using CONNECT, and on port 80. We'll need to patch the proxies and the browsers and the web servers, of course, but it's all in the name of a better, newer protocol that doesn't have the clunky overhead of TCP.
And at some point someone will notice that you can't do ECN with existing UDP implementations, so it'll start all over again but this time assembling the raw IP datagrams in userspace, and then that will have to be moved into the kernel to perform well...
I thought Google's experiments have shown that this is less of an issue than people generally think. Most networks pass UDP traffic just fine, and most failures occur in the early phase, enabling a reasonable fallback policy to TCP.
For example, most modern-day commercial firewalls more closely resemble a NAT machine than anything else. All your packets may be changed as they pass through in order to verify their authenticity and integrity once they return. And if the protocol isn't supported, it may not be passed through at all.
If you want to support existing networks, it has to be either TCP or UDP. In order to support next-generation firewalls, you also have to use the OS's tcp/ip stack. In order to support typical commercial network/proxy configuration, you have to use a higher level protocol like HTTP. And this isn't even talking about the "features" of the different protocols and how they're handled.
A replacement for TCP has to be done by a standards body and accepted by vendors, or [at least] half of the world will never be able to use it.
Oh, absolutely. There are plenty of successful, purpose-built protocols that used UDP before it was cool: DNS, NTP and RTP, to name a few.
DNS and NTP are built over UDP because they are lightweight applications. RTP is built over UDP because you don't care about loss.
Hell, even DNS still typically uses TCP when doing zone transfers between DNS servers, because of the amount of data involved.
What point is the author trying to make with this statement?
It seems to me they don't understand TCP. Indeed, TCP has drawbacks, but without studying TCP they will probably repeat old mistakes and add new ones (not present in TCP).
Why the hell is wnevets' comment flagged? Does this just happen automatically when enough downvotes pile up? If "flagged" is a separate thing for really objectionable stuff then it doesn't make sense in this case.
Oh this is nice. Simple points of order concerning HN operation are downvoted now? (Hint for drive-by downvoters: if you don't know what we're talking about, you probably have "showdead" set to "no".)
Open the page in an incognito tab, so HN won't know who you are. (or log out if you prefer) I expect that your original comment will no longer be visible. This appears to be similar to the hellban functionality, in that it's hiding from you the fact of flagging this one comment. Although I have no problem with the use of hellbanning, I don't see how the behavior we see here, with respect to a mildly sarcastic response to more strident sarcasm in TFA, could be considered a good thing. Hmm, maybe I'll write a Chrome extension to remove this misfeature.
I think they are actually serious. And it makes sense. TCP is holding back a lot of stuff. It's a legacy that's grandfathered in, and we can do better both in terms of general purpose and specific needs.
Not to mention, on mobile browsers, as soon as you zoom in on the text to save the space wasted by the logo on the left, the logo also gets bigger and moves to the middle of the screen, covering the text.
I recall one of the full professors back in grad school commenting about how the (print) layouts in Wired magazine were so hard to read, and lamenting about his old eyes. He was immensely gratified to learn that we young grad students couldn't read the small reflective silver block text on neon orange backgrounds either.
But for the NoTCP site, I have a sneaking suspicion that this may have resulted, in-part, from only previewing the site on HiDPI (aka Retina) displays. It's an increasingly easy trap to fall into, especially for "casual" designers. My eyesight kinda sucks, but I've noticed that really good modern HiDPI displays make much lighter fonts comfortable than I'd otherwise be able to use.
I have a better-than-retina 3200x1800 13.3" Lenovo Yoga 2 Pro display and this is hard to read. I see this argument all the time, but I really don't think HiDPI displays are to blame, OS X maybe.
Good point. FWIW, I've had a horrid time getting decent readability with Windows {7 thru 8.1} on a Retina Macbook Pro (via either Boot Camp or Parallels). I've attributed most of that to legacy Windows apps, which seem to be a horror show in HiDPI mode. I eventually just had to set that VM to a much lower resolution to make the main app (CAD suite) usable there. It's almost shocking how much better OS X handled the low to high DPI transition than Windows has been able to manage.
I'm not one to complain much, but it is unreadable. My tired eyes were not happy with their decision. May I suggest something more readable to the people behind the website? Please and thank you.
I have found that high contrast is actually bad for my eyes. I regularly spend 8+ hour stretches programming. If my screen is black on white my eyes get tired after about 6 hours. If it's white on black I get similar but less pronounced results. However, if I use a light grey on a dark grey, with moderate contrast, I can work indefinitely, or at least my eyes are no longer the limitation.
You can lower contrast somewhat without making text unreadable; this site suggests #333 on #fff:
https://ia.net/know-how/100e2r
I think the lowest contrast that is still somewhat acceptable for me is the Solarized theme, but even that goes a bit too close to the low-contrast end:
http://ethanschoonover.com/solarized
Rather than have web sites alter their color schemes to suit you, and cause trouble for others in the process, why not turn down the contrast on your screen?
I have my monitor calibrated for photo editing. I'm not going to mess with the brightness and contrast because some site is using a hard-to-read colour combination.
If it's a site that's worth reading I'll override the CSS with Stylish; otherwise I'll just not bother reading the site. It's funny how many sites fall into the latter category.
Good answer! It didn't occur to me that there might be useful calibrations where, say, pure black on white is hard to read, but you still want pure black and white to be at those levels for other purposes.
Could you? What fix can I reasonably (which I will arbitrarily define as excluding manual DOM/CSS twiddling) apply to this site to make it more suited to my high contrast desires?
I have Stylish installed on Firefox. Then I click the "Find styles for this site" menu item, download the "Hacker News Solarized (Dark)" theme, and enjoy high contrast YC on all my devices. Takes a few minutes at most, and one Firefox restart if you didn't have Stylish installed already.
Why not use f.lux? At work I turn it up to max at all times of day and it's really much easier on the eyes. I hardly noticed the redness after a day or two.
I agree; I find myself frequently having to disable all CSS to even read a site[0]. If you'd like an easy way to toggle all styling on a page on or off, you can use the Read Easily[1] FireFox extension.
As someone who has no problem reading this (or many of the things linked to by some of the child comments), can you explain what part of it you struggle with? I'm not on a particularly fancy resolution or anything, just a standard, non-retina Macbook. I'm also only 24, so perhaps my eyes are just "still young."
"UDP" would have been a better name than "NoTCP". (Because non-TCP would also include ICMP, for example). Also, why not just go to IP, rather than UDP? Much more flexible than UDP and even more efficient.
...so you'd rather implement your own congestion control, your own windowing, your own reordering and reassembly, rather than just using TCP and letting the kernel do all that for you? To send a single file? And hopefully this self-made congestion control cooperates with the rest of the traffic on your connection.
It doesn't sound to me like your use case would buy you much over TCP with selective ACK. Either way you have to resend dropped packets and ultimately reorder things.
For a use case like HTTP/2 with multiple "channels" I could see a reason to avoid head-of-line blocking (though I'd rather just use SCTP), but for a single file?
Actually, it's not the ordering that's the problem per se, it's "packets in flight". Each packet needs to be accounted for, and only a certain number of packets can be unaccounted for ("in flight") before TCP starts throttling back.
This means the higher the latency, the lower the number of packets per second that can be transmitted.
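A rough back-of-the-envelope version of that claim (the RTT figures are just illustrative): the throughput of a single window-limited connection is at most the window size divided by the round-trip time.

    # Throughput of a window-limited connection ~ window / RTT.
    window_bytes = 64 * 1024          # classic TCP window without window scaling
    for rtt_ms in (10, 100, 300):     # assumed LAN, transatlantic, satellite-ish RTTs
        throughput = window_bytes / (rtt_ms / 1000.0)
        print(f"RTT {rtt_ms:3d} ms -> at most {throughput * 8 / 1e6:.1f} Mbit/s")
    # e.g. a 64 KB window at 100 ms RTT caps out around 5 Mbit/s, however fat the pipe.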
The problem is that making a reliable transport protocol that's also fast, efficient, and plays nicely on a congested network is really quite hard. (This is why Aspera can charge as much as they do for what is effectively a thin wrapper around scp.)
There are a couple of ways to get fast throughput over high-latency/lossy links; the easiest is to use multiple connections. I wrote a Python library that does just that. Between London and San Francisco you can expect about 12 megabits a second (up to about 25 in bursts); with 20 concurrent connections I could get just over 150 megabits a second (I hit VPN limits at that point).
"Oh yes indeed. Just as Node.js and NIO proved to the world that bare-metal performance is always worth the consequent unreadable code"
Node.js is hardly bare metal and non-blocking IO is hardly new; Unix programmers have been writing select()/event-loop and poll()/epoll() servers for decades. The presentation of this false dichotomy between Node.js representing "bare-metal" at one end and readable code at the other suggests the author is some type of opinionated Ruby/Rails hipster programmer upset that his pet language and framework is falling out of favor, and in no small part because of its abysmal performance.
No, I think the author genuinely views Node.js/NIO as a bad performance/readability trade-off that was over-hyped and inferior to other existing platforms.
Well, they're right that NIO code is pretty low level and hard to read. But then, 95% of Java devs won't need to touch NIO, they'll just use Netty which does all the NIO for them.
You don't have the privilege of not caring about it, at least if you're in the position where you have to make decisions involving it. Quick: what's best, ints or strings? That entirely depends on how you want to use them, doesn't it?
That's fine, but don't call yourself a software engineer. Programmer or developer, sure, but engineers understand the underlying layers. Otherwise, when shit really hits the fan, you will only be able to throw your hands up and consult Stack Overflow.