UDP vs TCP (gafferongames.com)
453 points by colinprince on Dec 28, 2016 | 190 comments



The main difference between TCP and UDP, as this programmer discovered, relates to quality of realtime service.

Times to use UDP over TCP

  * When you need the lowest latency
  * When LATE data is worse than GAPS (lost data).
  * When you want to implement your own form of error correction to handle late/missing/mangled data.
TCP is best when

  * You need all of the data to arrive, period.
  * You want the protocol to automatically make a rough best-effort use of the available connection's /rate/ of transfer.
For a videogame, having a general control channel to the co-ordination server in TCP is fine. Having interactive asset downloads (level setup) over TCP is fine. Interactive player movements /probably/ should be UDP. Very likely with a mix of forward error correction and major snapshot syncs for critical data (moving entity absolute location, etc).
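
To make the split concrete, here is a minimal Python sketch of that two-channel setup (the host and ports are made-up placeholders):

  # Sketch only: TCP for the control channel, UDP for position updates.
  import socket, struct

  SERVER = "game.example.com"        # hypothetical host
  TCP_PORT, UDP_PORT = 5000, 5001    # hypothetical ports

  control = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  control.connect((SERVER, TCP_PORT))   # reliable: login, level/asset setup

  moves = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

  def send_position(seq, x, y, z):
      # Fire-and-forget: a lost or late update is superseded by a newer seq.
      moves.sendto(struct.pack("!Ifff", seq, x, y, z), (SERVER, UDP_PORT))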


People talk like most games use UDP, but they don't.

Pretty much the only genre that does is First Person Shooters.

MMOs (like World of Warcraft), MOBAs (like Dota), RTSs (like StarCraft), action RPGs (like Diablo)

All of these are action games and use TCP.

In most genres people would rather have the simulation pause when there is packet loss, and then run fast to catch up rather than have unreliable data about player locations.

Due to the large number of geographical locations you can have your servers in these days, it's common for most of your players to have pings <30ms. With that kind of latency and a few tweaks, it's possible for a single lost packet to be recovered very quickly with TCP.


> MMOs (like World of Warcraft), MOBAs (like Dota), RTSs (like StarCraft), action RPGs (like Diablo)

Dota 2 definitely does not use TCP. I assume the StarCraft games can also work over TCP, but I doubt they use it as the primary choice of communication. I would love a source for this.


Starcraft uses a mix of TCP and UDP[1]. I can't find the link now, but I remember something about a late switch from TCP to UDP during the StarCraft 2 beta, and some pretty unstable network gameplay while they were tweaking the error handling.

1. https://us.battle.net/support/en/article/300479


Most sites that I come across say that Brood War primarily uses UDP, along with most other similar vintage Blizzard games.

http://wiki.teamliquid.net/starcraft/Port_Forwarding


StarCraft II uses TCP and UDP. I have verified this by reviewing my firewall; I have not gone as far as taking pcaps to see which packets are doing what, but I thought this was relevant enough to say.

as the parent mentioned -- UDP where time is more important than reliability. I disagree with his last comment on building your own reliability -- use TCP for that.

tcp/1119, udp/dynamic (listed app definitions for app-based firewall)

some additional Blizzard info --> https://us.battle.net/support/en/article/configuring-router-...


True, but you can create a reliable UDP implementation and get the same TCP features you need (resending/ordering/ACK). Many games use UDP and then some form of reliable UDP where important messages are ACK'd (game start, game end, global events, etc.).[1] Almost every multiplayer game I have worked on has a flavor of this, where there is a bit or flag on important messages that need reliable tracking over UDP. Everything else (player positions, local actions and the like) is just broadcast, and missed packets are handled with extrapolation/interpolation, prediction and lag compensation.
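
For illustration, a bare-bones Python sketch of that reliable-bit idea (the names and packet layout are mine, not any particular engine's):

  import struct, time

  RELIABLE = 0x01
  pending = {}  # seq -> (packet bytes, last send time)

  def send(sock, addr, seq, payload, reliable=False):
      flags = RELIABLE if reliable else 0
      pkt = struct.pack("!IB", seq, flags) + payload
      sock.sendto(pkt, addr)
      if reliable:
          pending[seq] = (pkt, time.monotonic())   # track until ACKed

  def on_ack(seq):
      pending.pop(seq, None)        # peer has it; stop resending

  def resend_unacked(sock, addr, timeout=0.1):
      now = time.monotonic()
      for seq, (pkt, sent) in list(pending.items()):
          if now - sent > timeout:  # no ACK yet: send it again
              sock.sendto(pkt, addr)
              pending[seq] = (pkt, now)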

There can also be some increase in packet loss when using TCP mixed with UDP[2]. Most games built recently probably use more RUDP than TCP. But when you need to use any UDP at all, you may as well just go reliable UDP. TCP definitely works for non-realtime games, turn-based especially, but if you ever want some non-essential systems updating where packet loss is OK (maybe world physics, or events that are just decorative/fun), then reliable UDP is best in my opinion.

SCTP was supposed to fix much of this, but reliable UDP really does almost everything you need, realtime or not, so if you're investing in a network lib/tech it is the best choice.

enet[3] is a popular reliable UDP framework that was the base or basis for many others. RakNet[4] also has some nice implementations of it. There are many others now, but some larger common systems based their networking on these; Unity is one.

[1] https://en.wikipedia.org/wiki/Reliable_User_Datagram_Protoco...

[2] Characteristics of UDP Packet Loss: Effect of TCP Traffic, http://www.isoc.org/INET97/proceedings/F3/F3_1.HTM

[3] http://enet.bespin.org/

[4] https://github.com/OculusVR/RakNet


This is terrible advice. These games use TCP because they can get away with the latency and often have game designs where latency is a non-issue (300ms in WoW won't ruin your raids or quests). Latency spikes are also very noticeable in these games. Starcraft adds latency locally to your inputs so they're played on all computers simultaneously.

I'd argue that none of these games require time-critical data and often have latencies much higher than they would with UDP. None of these games have networked physics either, TCP breaks down badly in that case. I'd also argue that great network developers come in very short supply and therefore most companies are left to their own misconceptions. (Then again, most game developers grossly overestimate their own skills, nothing new here :D)

I don't know where you get that "people would rather have the simulation pause when there is packet loss, and then run fast to catch up" idea but my experience is the exact opposite.

"Unreliable data about player locations" is a myth. Its just as "unreliable" in TCP with the difference being that in TPC the protocol will arrange for the packet be resent; at which point its no longer relevant to the current frame - you've wasted CPU and bandwidth, crippling your game designers in the process, often without realizing so. You rarely ever want a reliable world state, just reliable actions.

With UDP you actually rely on unreliability to only ever use the latest game state; actions you want reliable or ordered can be implemented very easily on top of UDP or left in TCP. Client-side prediction takes care of dropped frames such that missing a datagram (and therefore that frame's player positions) isn't noticed by the player. A delayed TCP packet will do the exact same, but waste CPU/bandwidth in the process.
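
The "latest state wins" receive path is about this simple; a Python sketch (the snapshot decoding is a stand-in):

  import struct

  latest_seq = -1

  def apply_state(payload):
      pass  # stand-in for decoding and applying a world snapshot

  def on_state_datagram(data):
      global latest_seq
      (seq,) = struct.unpack_from("!I", data)
      if seq <= latest_seq:
          return               # older or duplicate state: drop, never wait
      latest_seq = seq
      apply_state(data[4:])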

I've seen my share of "senior" network programmers who couldn't make a networked engine that is actually stable. I can understand why some teams look at TCP and naively claim "problem solved!" But it doesn't make TCP the best choice because you can appeal to authority with big titles using it :)


Not doubting, I see some other sources say the same thing, but why do you need to open UDP ports for WOW?

https://us.battle.net/support/en/article/firewall-proxy-rout...

Is it something that changed over time and that page is out of date? Is it just doing TCP over UDP for some esoteric cases?


I don't know why it lists all three ports under both UDP and TCP for WoW. WoW uses 3724 over UDP for its voice chat, but the other ports are only used with TCP.


>> In most genres people would rather have the simulation pause when there is packet loss, and then run fast to catch up rather than have unreliable data about player locations.

I thought that modern games use interpolation to compensate for this, because network lag is always present on a WAN.


You can't use interpolation to compensate for late packets. Interpolation is generating data between two data points that you have.

You are talking about extrapolation which is the prediction of future data from previous data points.

Once again, First Person Shooter games will tend to do extrapolation, but few games of other genres do that kind of thing.
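
In code the distinction is just this (toy example):

  def interpolate(p0, p1, t):         # t in [0, 1], between two known states
      return p0 + (p1 - p0) * t

  def extrapolate(p1, velocity, dt):  # guess beyond the newest known state
      return p1 + velocity * dt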


It's easier and more robust to do NAT traversal with UDP than with TCP. The Xbox game network services implement TCP within the Microsoft domain on top of UDP!


Dota 2 and Starcraft 2 both use UDP.


Only if all players are in nearby regions. It will still take 100-200ms cross-continent.


I always wondered why nobody used UDP for bulk data transfer: Ship all the chunks over with a sequential number identifying each, then when you've reached the end, have the client request any lost packets, then repeat the cycle until all are transferred. After all, the client doesn't need everything in sequential order if it knows the size, and it can deal with holes as long as they're eventually repaired.

This gives you (in theory, I've never tried it) better throughput, and you get the equivalent of TCP's retransmission without the window semantics, since you move all the retransmissions to the end. On the other hand, it's probably bad for everyone else, since there's no congestion control.
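
A rough, untested Python sketch of the receiver side of that scheme (the chunk count is assumed to come from a handshake, the port is made up, and a real version needs a proper termination condition):

  import socket, struct

  TOTAL = 1000                        # assumption: learned from a handshake
  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock.bind(("0.0.0.0", 9000))        # hypothetical port
  sock.settimeout(0.5)                # a lull means the sender's burst ended

  received, sender = {}, None
  while len(received) < TOTAL:
      try:
          data, sender = sock.recvfrom(65535)
          (idx,) = struct.unpack_from("!I", data)
          received[idx] = data[4:]    # holes are fine; they get repaired later
      except socket.timeout:
          if sender is None:
              continue
          # Burst over: ask the sender to retransmit whatever is missing.
          missing = [i for i in range(TOTAL) if i not in received]
          sock.sendto(struct.pack("!%dI" % len(missing), *missing), sender)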


> ship all the chunks over with a sequential number

Except that if you ship them too fast, they will at best simply build up in a buffer somewhere (e.g. near a bottleneck link), or more likely they will overflow causing many of your packets to be lost. So not only did the recipient not get them (and you have to figure this out and resend them later), but you also wasted a bunch of resources sending packets only to have them dropped.

And if you send them too slow, you are leaving some potential capacity on the table. So you need to send them fast, but not too fast.

So you add some additional logic to implement flow control and congestion control. And that's TCP, pretty much.


Exactly. The core feature provided by TCP is not reliability, that is trivially easy. What TCP gives us is robust flow control that can detect and adapt to current network capacity.


CNN has a video transfer tool for b2b subscribers that uses multiple UDP streams to increase transfer speeds. Instead it saturated and brought down our company firewalls, which were older and our main bottleneck. All for potentially just a few more megs of speed on TB-sized videos. Gotta be careful with UDP.


That just sounds like your firewalls are made of pentium pros and duct tape. TCP can oversaturate a connection too, and a firewall should be able to handle large packets at line rate.


Yeah, the firewalls were ancient and already scheduled for an upgrade. This helped accelerate the schedule for sure. The company had about 800 employees doing all kinds of things: watching Netflix, watching videos on YouTube and Facebook, downloading whole ISOs, not to mention many business processes transferring files around. But the second one person started using this video downloader tool, it brought everything to a crawl.


> a video transfer tool for b2b subscribers that uses multiple UDP streams

That sounds like FASP (http://asperasoft.com/technology/transport/fasp/)


I believe something like that was the underlying tech. I researched it at the time but I just can't remember 100% now, although I think that was it. It actually was implemented via a Java applet in the browser, so there was no way we could block or restrict a specific exe with group policy. The firewalls were already scheduled for replacement/upgrade, so for a few weeks we ended up just watching traffic and slapping wrists if more than one person ever tried to use the tool at the same time.


A robust flow control that's tuned for bulk downloads, and badly adapted for small real-time streams.


Sure. And not even the best known flow control mechanism. But nonetheless that is the important innovation of TCP, not reliability. The post by lobster_johnson upthread implied that you could use UDP for "bulk data transfer", which basically doesn't work without reinventing something akin to TCP.


There is actually such a protocol - UFTP. It was designed for high-latency connections such as satellite links. It can use multicast to allow multiple recipients over such a link. If memory serves, it was designed by Gannett to help transfer news content nationally for USA Today.

I've tested it on "long fat pipes" when evaluating it for a project a number of years ago. The numbers don't sound that impressive now. I was able to saturate the 100Mbps NIC of the slower server in the transfer. The transfer was from Northern VA to LA over the Internet. This was from data center to data center and the network capacity was such that I knew that the transfer rate would not cause any serious packet loss.

Using simple TCP over the same link, same computers was limited to around 40Mbps. More modern TCP implementations have a larger window and are able to better saturate high-throughput, high-latency links.


UDP messages are limited to 64K. They can be a pain to reassemble; lots of ugly corner cases.

We needed to send one type of message split up due to size over UDP. We were on the same subnet sending and receiving, so it worked almost always. We were also using multicast.

It's very, very hard to figure out what's going on when things don't work (you don't know if the control messages are getting through when you ask again, or if the message just isn't being sent). At some point you are reinventing TCP/IP.

I always liked the way UDP messages are received, though: all or nothing. Not the byte stream that is TCP, which needs to be broken into messages manually.


Picking a smallish, fixed chunk size should work, and it could even be made to self-regulate. Reassembly is trivial -- allocate the file first, write chunks to their right location. And for the control stuff (i.e. requesting chunks) you just use a TCP connection. UDP for the data only.
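
The reassembly part really is trivial; a small Python sketch with illustrative names:

  CHUNK = 1200   # illustrative chunk size

  def preallocate(path, total_size):
      with open(path, "wb") as f:
          f.truncate(total_size)       # create the file at its final size

  def write_chunk(path, idx, payload):
      with open(path, "r+b") as f:     # file pre-sized once, above
          f.seek(idx * CHUNK)
          f.write(payload)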


If you do the obvious work to convert UDP to a reliable streaming mechanism, you will generally more-or-less reinvent TCP, only probably more poorly, and would probably be better off just using TCP. UDP is a good choice when you have options or requirements that TCP can't accommodate. For instance, the classic case of streaming audio has both; it requires timely delivery and has some very sophisticated techniques for dealing with missing packets, with a lot of local intelligence about human perception and such built into the codecs. If you can drop packets without issue, or if future packets completely supersede past packets (FPS network gaming, for instance), there are a lot of possibilities. But if you're just going to recreate reliable streaming, you'd probably come out net losing, on account of the effort spent doing that instead of something useful, and bugs in the implementation.


Why are so many comments here ignoring the purpose of the article?

The purpose of this is for games, not web traffic or torrent2.0.

In real-time games where you're syncing physics state or player locations, data >200ms old isn't just useless, it's actually negative.

This article is 2+ years old, and is, I suspect, the foundation for libyojimbo, which hybridizes low-latency UDP communication for large-player real-time games with reliable messaging (basically custom TCP?) for less time-critical things.


I don't know, but that doesn't have anything to do with my comment, which is entirely technical. Perhaps you meant to put it somewhere else?


You've kinda answered your own question there: congestion control becomes a big issue (also MTU discovery, to a lesser degree).

Games sit in this happy space where the data rate is ~10-20kb at most so they don't need to worry about congestion control as much.


Additional information: The source engine (valve) has been limited to 30 kb/s for more than a decade.


There are many protocols that use this approach, Aspera FASP is possibly the most advanced one. It uses a TCP control channel to manage the rate on the UDP data channel.


On LAN, this is basically how Symantec Ghost works; it multicasts the image over UDP, and each client figures out what they missed. Those chunks get retransmitted at the end.


> I always wondered why nobody used UDP for bulk data transfer

That would be odd. It's not true. There's TFTP all the way up to UDT.


Although to be fair, given that with TFTP each packet must be acknowledged before the next one is sent, you're actually achieving a way worse throughput than TCP. Not a very good example on how to use UDP efficiently IMO.


UDT is a better example.

http://udt.sourceforge.net/


What it really comes down to is file block size granularity. TCP is simply using a much finer block size (~1500 bytes) which means a higher likelihood of arriving than for a large UDP packet. Furthermore, you'd still have to implement your own checksumming algorithm to ensure data integrity as UDP doesn't require a packet checksum for IPv4. I suppose you could offload file-block integrity validation to another thread so you can receive and verify in parallel, but at some point you have to checksum every block anyways which means the full file must be processed to figure out where corruption might be before sending retry requests for the missing packets.

I see what you're getting at by trying to avoid processing the packets in order if it wastes time, but ultimately I think it's a micro-optimization that introduces other bottlenecks. That's why BitTorrent still typically uses TCP even though it could potentially receive file parts in any order.
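
For what it's worth, per-block checksumming is cheap to sketch (CRC32 here purely as an example; a real protocol might want something stronger):

  import struct, zlib

  def frame_block(idx, payload):
      return struct.pack("!II", idx, zlib.crc32(payload)) + payload

  def parse_block(data):
      idx, crc = struct.unpack_from("!II", data)
      payload = data[8:]
      if zlib.crc32(payload) != crc:
          return None      # corrupt: re-request this block from the sender
      return idx, payload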


But BitTorrent breaks the files into chunks it can receive out of order and sends those over TCP, so TCP only preserves ordering within a chunk. It might even get chunks from several different people.

The real answer is that chunking over TCP is efficient enough that little is gained by chunking over UDP.


Yes, what I'm saying is that BitTorrent uses TCP to segment file blocks instead of using UDP to send the file block as one large datagram.


That sounds like a TCP congestion control mechanism, except worse! TCP leaves much to be desired in terms of performance, largely because its congestion control fails miserably on lossy links (TCP Reno). There are better backoff mechanisms that can be added on many distros, like Westwood. And then there are altogether more efficient protocols like QUIC. But designing one for general purposes from UDP is easier said than done.


NFS on Linux defaulted to UDP for what seems like a really long time. There were perf issues with TCP (can't remember what they were).


NFS used to default to UDP for the same reasons that games still would use UDP: it just needed the data as quickly as possible, since it was blocking I/O. Nowadays we don't need UDP for NFS, because networks are a lot faster and the advantages of TCP start to outweigh those of UDP.


The original reason was that using UDP was much faster CPU-wise back in the 80's. There was a requirement to go fast because it was used for transferring data in bulk, not events like games.

When Linux came around it just followed suit. TCP support came to NFS only later with other bells and whistles in later NFS versions, and Linux stuck with basic NFSv2 for a good while.

NFS was designed for local-area networks, so it didn't need to worry about working over the Internet.

In addition to the network being much faster relative to CPUs back then, the TCP stacks hadn't seen the tuning they have now. Making TCP fast came later.


NFS (was) stateless and idempotent. Lose a packet? Send it again! Which is mostly what you want, since there can be dozens of processes performing IO and they're all independent. Last thing you want is head of line blocking.


> I always wondered why nobody used UDP for bulk data transfer

1) Bulk data transfer doesn't work over UDP.

There is no flow control whatsoever with UDP and no guarantee; it sends stuff without distinction between a 56k uplink and a 10-gigabit LAN. The packet loss and lack of reliability are atrocious when you send a non-negligible amount of data.

2) You'd need to create a protocol on top of UDP to handle the basics (flow control/error/retransmission), and that's equivalent to re-inventing TCP. Don't re-invent your own TCP, just use TCP ;)


>1) Bulk data transfer doesn't work over UDP.

Tell that to all of the people that use TsunamiUDP to do bulk data transfer. It uses TCP as a control protocol, but the data transfer is all UDP.


It sounds like they would agree.

Bulk data transfer doesn't work over UDP, but it can work over UDP and TCP.


The data transfer itself is happening over UDP.


UDP is usable for this when used to transport a file transfer protocol. I've seen for instance Kermit over UDP. Basically, if you transport data using a protocol that already takes care of integrity, using TCP is more or less redundant and just adds annoyances (connection management).


Back in the day of hard drives, the random-access I/O would be nasty and you'd have to be careful about creating sparse files, so you'd end up double-writing the missing packets. You'd also still need congestion control, so you might as well just use TCP to retransmit.


Bad for everyone else, and it's up to the implementation to handle flow-control. Congestion-control handles network related loss/latency/load. Flow-control helps with receiver related loss/latency/load.


A lot of the internet's bulk data transfer uses UDP: torrents.


BitTorrent is mostly TCP these days, though, as pointed out elsewhere in this thread.


In practice, I think you'd end up having to reimplement most of TCP on top of UDP anyways.


This isn't really true, or at least these aren't the main reasons to choose UDP or TCP.

These days UDP is really only used in situations where you can afford packet loss and CPU is expensive, or when you need to bypass the built-in behavior of TCP on your OS. It's not so much about latency or data loss; usually UDP is used in places where processing power is expensive or load is too extreme to justify TCP.

Sometimes, like with DNS, it's better to just resend the request than burden a massively parallel server with maintaining TCP state and checksums.

The main issue I have with the above comment is that UDP absolutely does not guarantee better latency or have higher priority than TCP packets. The packets usually queue in the NIC buffers and downstream identically regardless of which protocol you use. The main draw of TCP, besides the data and ordering guarantees (which are CPU costly), is that (depending on your operating system) it follows some kind of automatic rate-limiting algorithm, be it TCP Vegas (Linux, delay-based) or Reno (Windows, loss-based). These algorithms attempt to limit your connection "stream" to the line rate automatically and play somewhat nicely with each other at large scale.

In contrast, the main problem with using UDP is figuring out a smart way to rate limit to avoid excessive packet loss. In some protocols, like DNS anycast type things where only raw throughput matters, who cares if responses drop. Other times, like with LEDBAT or Microsoft's BITS, your main reason for using UDP is to roll your own rate-limiting protocol, since otherwise you're stuck with what the OS gives you on TCP.
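
As a toy example of where "roll your own rate limiting" can start, a token bucket gating UDP sends (numbers are illustrative):

  import time

  class TokenBucket:
      def __init__(self, rate_bytes_per_s, burst):
          self.rate, self.burst = rate_bytes_per_s, burst
          self.tokens, self.last = burst, time.monotonic()

      def try_send(self, nbytes):
          now = time.monotonic()
          self.tokens = min(self.burst,
                            self.tokens + (now - self.last) * self.rate)
          self.last = now
          if self.tokens >= nbytes:
              self.tokens -= nbytes
              return True    # OK to sendto()
          return False       # over budget: drop or delay this datagram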


I'm not sure I follow your argument. Say I want to communicate the position of player A to player B. The position is really only relevant for a 200ms time frame or so (the game might support teleports, jumping, dashing, etc.). In a TCP setting, even if a packet is old (>200ms), it must be delivered regardless, and all new packets must queue behind it. The whole point behind using UDP for games is that I can control what the resend policy is, which determines what ends up in the buffers you are referring to in your post. TCP gives me little to no choice in the matter, and often does the worst thing in a critical situation, especially in a game where input latency is 6ms and the tolerance for network latency is <50ms.


You're right in that UDP will let you send the most updated data without requiring everything to be sent. My main issue was that UDP does not give you lower latency. TCP algorithms are extremely aggressive to the point I doubt you could get more recent packets on the wire with UDP.

The problem is, the OS throws everything into the same buffer, and unless you're clairvoyant you won't be able to skip sending old packets into the buffer until it's too late. Basically, by the time you know the data is outdated it's already on the way to the NIC buffer and you can't stop it, making TCP and UDP equal in this regard.


You're only considering the case when no packets are lost.

Once TCP drops a packet, data sits in the receiving buffer getting older and older instead of being delivered to the program.


It's way more ridiculous than you think. OS network stacks try to do the reasonable thing in most cases if possible. In the case of Linux, TCP Vegas is very unlikely to drop any packet, since the main algorithm is RTT-based, not drop-based.

RED and hopefully CoDel can combat bufferbloat to some extent, but in the end all packets go to the same place.


I see packet loss and spurious resends (lost acks) all the time on WiFi in certain locales. You are massively underplaying packet loss imo.


I think he meant the situation where the network drops a TCP segment, leading to head-of-line blocking.


Waiting for retransmission of a lost segment in front of you in the queue is massive latency.


TCP does head of line blocking so your entire argument is just wrong. TCP by design cannot give you the latency guarantees that UDP can.


I'm getting the impression that you're mostly only taking the endpoints into account. There's tons of stuff happening in between the peers: having worked on specialty routing equipment, I can add that the prevailing philosophy was that generic UDP packets were lower quality-of-service than TCP. That is, if the router was under high load, UDP would be dropped before TCP.


Windows doesn't use Reno. It uses Compound TCP, which is somewhat like Vegas.

https://en.m.wikipedia.org/wiki/Compound_TCP


Another article on this site addresses TCP plus UDP mix. The tl;dr is that they should not be used at the same time because TCP retransmission will interfere with UDP (and routers give TCP priority).

As long as TCP happens outside of the "hot" loop, it's fine.


> (and routers give TCP priority).

What routers? Is this something that stupid routers did in the '90s and people are still worried about, or are modern routers actually behaving this way? (And if so, why?)

TCP retransmission interfering with UDP sounds like something that shouldn't be a problem on any sane network, especially if the application authors are smart and use something like LEDBAT for their TCP traffic when they're also doing latency-sensitive UDP communication.


The internet is generally built on the implicit contract of "TCP-friendly flow control". If your homegrown protocol backs off slower than TCP, some router configurations will drop your packets first.

There is no general policy against non-TCP packets of course, as lots of important internet services are non-TCP. Just flows that don't respond to congestion.

It's not a vendor specific thing, all the major router vendors provide ways of doing this.

(Note: router != your home NAT box)


> If your homegrown protocol backs off slower than TCP, some router configurations will drop your packets first.

I think that's only if you're being less responsive to congestion signals (drops, ECN marks), and you're using more than a fair share of bandwidth or trying to use any available bandwidth. Fixed but low rate flows (eg. VoIP) shouldn't be penalized until the link is congested with so many flows that the VoIP is trying to use more than 1/N of the bandwidth.


You'd be shocked at the things home routers do. There's a reason most people don't try to implement NAT traversal and use common middleware.

Since UDP isn't as common a protocol, I can certainly see some routers treating it as second-class traffic when it comes to packet shaping.


I'm well aware of how dumb home routers can be, but a general de-prioritizing of the transport protocol that DNS uses is both really dumb and pretty easy to detect. Are you aware of any router vendors that have actually shipped such a configuration? (Or for that matter, any consumer router that has shipped with any prioritization rules enabled out of the box?)


Every home network I've seen ends up using the home router itself as a forwarding DNS server as configured through DHCP.

That said, I doubt home routers are de-prioritising UDP to any extent, or it would be a big topic of discussion among gamers.


DNS strikes me as something relatively painless to deprioritize. Small packets, not all that latency-sensitive for most use cases.


DNS isn't VoIP, but most DNS traffic is very much latency sensitive. When you click a link to a typical modern web page, you trigger multiple HTTP connections to load the page and its many resources hosted on dozens of domains. The DNS lookups are on the critical path for all of those requests.


Yes, but the HTTP requests themselves dominate the overall load time, even if DNS takes a bit longer. Plus there's caching to reduce somewhat the need for the DNS requests.

(Of course, I'm pulling all of this out of my nether regions without a lot of thought, and you may well be right, the sheer volume of such requests might lead to problems.)


It looks like on Windows a typical DNS retry timeout is 1 second, with backoff if multiple retries are needed. If a DNS packet needed for loading a web page gets dropped, it's very likely to increase the overall page loading time. Re-ordering DNS to be delivered only when there's a lull in the large TCP packets would almost serialize the loading of resources from different domains.

DNS caching is a good thing, but it certainly doesn't eliminate this problem. Web browsing still produces a lot of DNS requests, and any cache upstream of the bottleneck (ie. any cache operated by your ISP rather than in your own router) doesn't help against loss during congestion.


That would be a disaster.

Without active queue management, a single bulk upload can break DNS. When the bottleneck link (usually your cable/DSL modem) is saturated by the bulk upload, it will start dropping new packets that come in while the queue is full.

When a single packet worth of space frees up, the chances are that the bulk flow will instantly take it, such that nearly all outbound DNS packets get dropped, and DNS queries time out.

The solution is active queue management (CODEL is a good choice) at every bottleneck.


I'm a network engineer and I think this is the first time I've ever heard the claim that "routers give TCP priority" (ICMP is often de-prioritized but that's on the control plane).

Exactly which routers are alleged to be doing this?


There are quite a few "reliable UDP" implementations out there - I wonder if the same considerations would hold. Guess it might depend on the implementation of reliability using UDP.


Damn. I disagree with all of your points.

Use UDP only if you satisfy all the following conditions:

  * Don't care about losing data
  * Don't care about receiving data out of order
  * Not sending a stream of data but individual messages (< 580 bytes)
  * Don't care about the other side of the connection being dead or terminated, and not knowing about it


UDP doesn't cause things like not knowing if the other end is gone. You're being kind of ridiculous.


UDP is connectionless, it just sends data to <somewhere>. Doesn't care if that somewhere is online, or available, or exists at all.


If you don't check for returns, TCP isn't going to help either.

The requirements you listed don't make sense as written. Nobody would ever use UDP.


TCP establishes a connection when it starts and it has built-in timeouts (and various mechanisms) to detect dead connections. TCP will signal to the application if the destination is unavailable or lost.

The requirements make sense and the common usages for UDP fit these requirements.

It's correct that [almost] nobody ever uses UDP. It's a niche protocol for a very few specific use cases, one of which is real-time FPS/RTS/MOBA games (which fit all the criteria I listed).


> it's correct that [almost] nobody ever uses UDP

Skype, Facetime, WhatsApp, Hangouts, Telco VoIP, Facebook Live, YouTube Live, Surveillance Cams, most VPNs.


TCP by itself does not detect dead connections.


While it's technically not a timeout, it does stop the network traffic.

TCP packets require an ACK every n packets, as defined by the FREQ field. If the ACK isn't received, no further packets are sent.


Comparing TCP and UDP generally is nonsensical.

The sensical comparison would be TCP vs whatever protocol you build or use. It might be your homegrown protocol, but if you have problems with TCP, you should probably look at other ready-made protocols first. For games, there's e.g. ENet.

(Not to say that TCP is generally out of the question for games, eg. WoW uses it apparently successfully)


Nobody ever mentions that UDP has one additional feature that TCP doesn't: preservation of message boundaries.


A datagram-like protocol with the reliability of TCP might be nice.

I once implemented a communications protocol that worked with variable-length independent messages over TCP. It felt a little silly pushing distinct datagrams into a stream, which would be chopped up into packets, sent to another PC, where the OS would then reassemble those packets back into a stream, only to be chopped up again into messages.

Implementing the system for chopping up the stream really felt like something that should have been done by a lower layer as well, not in the application.
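
For reference, the usual way that chopping is done over TCP is a length prefix per message; a minimal Python sketch:

  import struct

  def send_msg(sock, payload):
      sock.sendall(struct.pack("!I", len(payload)) + payload)

  def recv_exact(sock, n):
      buf = b""
      while len(buf) < n:
          chunk = sock.recv(n - len(buf))
          if not chunk:
              raise ConnectionError("peer closed mid-message")
          buf += chunk
      return buf

  def recv_msg(sock):
      (length,) = struct.unpack("!I", recv_exact(sock, 4))
      return recv_exact(sock, length)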


Seems to me that ZeroMQ basically implemented this - length-prefixed framing is part of the wire protocol.


> * When you want to implement your own form of error correction to handle late/missing/mangled data.

if this is a consideration -- using TCP may be the best option.


IMO the rule is actually really simple. If the correct thing to do when you lose a packet is to retransmit it use TCP, if it's not use UDP.


I think that rule is a little too simple, because it might be correct to retransmit but not to have the retransmission block processing of later packets. I'd say one should probably use TCP when any of the following are true:

1) The correct interpretation of later packets depends on side effects of earlier packets (e.g. commands in an SSH session, OpenGL commands)

2) Raw bytes-per-second throughput is a major consideration (e.g. file transfer, streaming video)

3) The application readily tolerates high latency and jitter (e.g. email, text chat)


You can also use something like enet, which allows for both reliable and unreliable channels.


Also, wherever you need real time, there is mostly UDP in use, e.g. for SIP + RTP. WebRTC is a little bit complicated, but the real-time part is based on UDP. All of those have mechanisms implemented for packet loss and how to handle it.


Isn't UDP more secure than TCP since it does not implement a sequence number (among other things)?


Questions like "more secure" and "less secure" only make sense in some specific context with a well-defined threat model: what it is that you want to keep safe from which classes of threats.

For many (if not most) threat models, the lack of a sequence number does not provide adequate security.


Do you want to hear a joke about UDP?

You might not get it, but I don't really care.


Here's a joke about TCP.

Here's a joke about TCP.

Here's a joke about TCP.

Here's a joke about TCP.

Here's a joke about TCP.

> ACK


Do you a hear joke UDP? about


OSI jokes work on too many layers.


My favourite version of "stupid user error"/PEBKAC/ID-10-T is 'layer 8 error'...


The recommendation to avoid TCP altogether is surprising to me. Having encountered a number of video-conferencing systems which are in a similar space, it seems pretty standard to have separate real-time and control sockets on UDP and TCP respectively. I skimmed the linked paper and didn't find it conclusive; can someone summarize how it is that having a TCP socket can affect UDP traffic on the same interface?

All that said, I certainly see the argument for an all-UDP protocol in terms of defining your own retransmission approach, or attempting to avoid it altogether with forward error correction or whatever.


I too was wondering how TCP and UDP can affect one another on the same interface, and the explanation I read is that at the lowest level both UDP and TCP segment data into packets and place them onto the same network adapter's queue, which can affect the way they are transmitted under congestion, depending on the device/drivers.


This makes no sense.

UDP will drop data if there is congestion. Basically, if you care about congestion and data loss, you should not use UDP.

---

If you do UDP and TCP, the UDP transfer will lose data during congestion, while the TCP transfer will slow down and try harder.

This has some usage. For instance, videoconferences software will do TCP and UDP. The TCP is used for control data -low volume data that needs to work- (authentication/joining room/leaving/adjusting quality), the UDP is used for audio/video (some pieces may be lost during congestion, it's okay).

If you use UDP for everything, you'll randomly lose control AND audio/video. It's the worst of both worlds.

Not to be dismissive, but I think the guy who gave you the advice doesn't really understand networking (or doesn't explain very well, or was talking about some weird usage we don't know about).


Error checking can easily be added to UDP for control data; in fact, in IPv6 it's mandatory. At one moment you say UDP will lose out, but we know TCP will back off on its reattempts exponentially based on the congestion window. Thus UDP would win out. The argument wasn't that it was a significant impact; the argument is that they can cause interference.


> the argument is that they can cause interference.

UDP ignores everything else and always expects resources to be available for itself.

We could say that UDP interferes with everything, we could say that everything interferes with UDP, it's both ways.


Who chooses when a UDP packet is dropped? The freaking network adapter device driver, which is what I said in the first place!


There is a lot more in the transmission chain than just that driver.


But it's the driver of the NIC that manages its network packet queue! Then it uses an interrupt to signal the OS that data is available or that the window is open for transmit. I'm totally aware there's a lot more to it; I do socket programming every day.


That's not an explanation. That's just pointing in the general direction of where to look for an explanation.


Do you have a better explanation to offer because I'd love to learn? Not sure why I deserve a downvote for offering a potential solution when you with 6000 karma offer nothing.


I think the claim was far too broad to actually be entirely true, and the evidence was both narrow and weak.

The 1997 paper only shows that certain simulation parameters can produce bad packet loss for UDP traffic. But those parameters are almost all irrelevant to today's Internet: MTU is now 1500B not 512B, TCP congestion control is usually something like CUBIC rather than RENO, buffers are never a mere 16 packets long, and TCP traffic is more often relatively short-lived HTTP connections, not long-running FTP.

The paper is just another piece of evidence that TCP synchronization sucks. You are very unlikely to encounter such perfect TCP synchronization on today's Internet, and even if you did, the dynamics of introducing some UDP flows to that situation would be different.


It's a bit dogmatic to state "never use TCP". In the context of games, one example that comes to mind is turn-based multiplayer games (Hearthstone, for example). In this case, I would at least consider TCP over UDP as the overall amount of packets transmitted within a session is relatively low and packet loss is more consequential than latency.

As far as negative interactions between TCP and UDP on the same interface, I'm not aware of any but I'd love to be corrected if wrong. As you pointed out, it's not uncommon to use both. The BIND 9 DNS server, for example, allows the use of TCP for DNS queries over 512 bytes [1].

[1] https://ftp.isc.org/isc/bind9/cur/9.10/doc/arm/Bv9ARM.ch06.h...


It's a bit dogmatic to state "never use TCP".

I was wondering about this too. Yesterday in the "WebRTC: the future of web games" post we had Matheus28, the creator of Agar.io, saying that his games are built on WebSockets, a TCP-based protocol. Agar.io is real-time and massively multiplayer, and seems to run smoothly enough. So is it completely critical to use UDP?

I am curious as someone who has recently gotten into making simple Javascript games. One of the games is a multiplayer / head-to-head Minesweeper clone that I got working with Django Channels, but did not even consider the networking aspect of TCP vs. UDP. The more recent one is inspired by the real-life game "capture the flag" -- teams of Asteroids-style space ships have to defend their own territory and flag, and capture the opposing team's flag while going around shooting at each other. It features a large game board very much like Agar.io (with the game camera following your ship around). I am using Phaser.io and Node.js, and while the game is operational and pretty fun just playing with the AIs, it's not multiplayer yet. I am approaching the multiplayer aspect carefully because minimizing lag will be crucial to making the game enjoyable.

Any general advice from anyone is appreciated, I am new to both networking and game design but it's a lot of fun.


Props that you're branching out and having fun. That's what makes software special.

Agar.io is slow and simple. There isn't much state to encode, and state doesn't change so drastically that a few re-sent packets here and there are going to ruin the gameplay. Granted, if you're on a terrible connection with high packet loss, even a great algorithm on top of UDP won't save you.

There are better explanations than I can provide on why TCP isn't great for (parts of) fast, complex games, but it's enough to say that there's a huge difference in the complexity of state and state updates between say Battlefield and Agar.io or the game you described. Modern TCP on modern network connections will get you very far, but there's only so much you can deal with when a split second can mean the difference between "A was under the crosshair of B" and "A actually went around a corner, sorry B".

What you described is likely not going to be so sensitive. I don't think you'll personally have to worry about implementing a custom networking protocol over UDP any time soon - maybe unless you think your game is going to be played by people on terrible connections (possible!), or you end up with some abysmally inefficient state sharing over the network.

And, from a different perspective, getting caught up in those details tends to risk getting burned out. Better to see your game through to MVP then go back and make it great.


I'd argue that Agar would run smoother on UDP. The decision was not really a decision here: WebSockets provide TCP only. One can use WebRTC data channels now, but there are still issues with compatibility.

And it's all tradeoffs. For example, Ultima Online used TCP and worked relatively fine on decent networks. In case of a packet drop, you'd observe a complete pause of the world, but most animations/movement had a buffer of 100ms or more. If you wanted to move in some direction, your character would start moving; if there was no ack after a single step (100ms if running), then it would stop. The world was being streamed as diffs from the previous state, so it depended on TCP's reliability.

Anarchy Online used UDP for the game and TCP for chat. While the OP argues against this, I think it is a good separation. I remember playing it on a congested network and the game state fucked up really badly. Tunneling over a TCP stream to a VPS close to the game servers made it playable in my case. Maybe they should have gone with TCP. Or maybe their networking code did not compensate well enough for UDP's issues.

On the other hand, anything more action packed like a fps would probably be better off with UDP as the author suggests.


the post also recommends that some http traffic is OK, but doesn't that directly contradict the advice to not mix tcp with udp (assuming that the http traffic sits on top of tcp)?


It's clarified in the next line. > "The point is, don’t split your game protocol across UDP and TCP."

There is always going to be data transmitted other than the game protocol. Most likely that is going to be limited and not at the same time as real time data is being transmitted.


The entire article makes me shudder in disbelief.

It's aimed at people who don't know the difference between UDP and TCP (and possibly wet string). Yet he recommends they implement their own reliability protocol over UDP, and that they avoid TCP because it's better to implement your own QoS?

Why not add obtaining a PhD in quantum mechanics just to round it out? It wouldn't alter the odds of pulling it off much.


It's framed in the context of building networking for action games that are sensitive to networking. Fourth paragraph:

  The choice you make depends entirely on what sort of game you want to network. 
  So from this point on, and for the rest of this article series, I’m going to assume you want to network an action game. 
  You know games like Halo, Battlefield 1942, Quake, Unreal, CounterStrike, Team Fortress and so on.


Regardless of the target audience, his point stands. Don't use TCP for "real time" games, because when packet loss hits, TCP will make things worse by holding back later packets while it resends the old data, which is now out of date.

It's definitely not an easy task for beginners, but these days you can pick up his library that implements this stuff.

https://github.com/networkprotocol/libyojimbo


Agree.

For people who are not familiar with TCP and UDP, the answer should be to ALWAYS use TCP.

The only exception is some subset of games (FPS, RTS), that this article is exclusively intended for.

Even there, for the average user who just wants to make a test game project, TCP may be easier to use and program around. For simple games (e.g. Connect 4), TCP is a better option either way.


In the article he clearly states this is for things like syncing physics in BF4 or Halo.


You don't have to invent a protocol when using UDP, you can still use a library like enet or raknet which already do that for you. And if it's the better choice it's the better choice and it's not something a programmer can't learn to do.


I appreciate the first user comment on this article:

"I didn’t care for this article 7 months ago. I regret it. My whole game is useless now. I should have go with udp."


What are some alternative protocols?

It's been a while since my networking class, but if I remember correctly with UDP you have some serious issues where you can end up clobbering your network, filling up buffers in the middle and dropping tons of packets. The lack of congestion control is a huge no-no.

For instance, in the example he gives, sure you can tolerate dropped packets for player-position data, but how do you know if you can tolerate sending at 10Hz, 100Hz, or 1000Hz? Even with TCP you can't (I think...) programmatically adapt to the size of your pipe. That's kinda abstracted away for you, so that you just say "send file A to B" and it does it for you.

Are you supposed to write your own congestion control in userland???? Seems like this should be a solved problem


DCCP and SCTP are the two main alternative transport protocols on top of IP

  - UDP: unordered unreliable datagrams
  - DCCP: unordered unreliable datagrams with congestion control
  - SCTP: optionally-ordered reliable datagrams with congestion control
  - TCP: ordered reliable stream with congestion control
SCTP is the most widely used after TCP and UDP: it's often the transport used in phone networks (very popular with Ericsson), but also now with WebRTC (which uses SCTP over DTLS over UDP). The main issue is that consumer firewalls usually only support NAT for TCP, and only limited NAT for UDP (this is one of the 'other' reasons to look forward to IPv6).


> they may arrive out of order, be duplicated, or not arrive at all!

This is the first time I've read that a datagram can be duplicated. Is it true? Is it duplicated by the network, or does it mean that the peer sent it again?


Packets can be duplicated by faulty routers and loops in the topology. Happens all the time to me when I'm lost with iptables.


Observed this recently with a bug in Cisco's VSS. Some packets were being duplicated, which destroyed TCP throughput.


At the network level, this is common for broadcast addresses, and also if a switch floods the packet across all its interfaces because it cannot make a forwarding decision. At the app level, it happens because the app decided to retry, i.e. app logic.


It can also be "duplicated" if you have some sort of acknowledge and retransmission on top of UDP itself. If the ack gets lost on the way back to the original sender it'll send another (probably) identical packet.


A classic example is FPV video from a drone. There may be reflections in the surroundings and you may get packets multiple times!


Right, or a problem in something that could retransmit your packet again. I did that in my office in the early '00s and took down a 100Mbps LAN from a few boxes retransmitting packets that weren't meant for them <eg>


It's very common for netflow data to be mirrored and multiplexed. Syslog, too.


Does anyone here know what protocol should be used for mobile multiplayer games, provided they are real-time and not turn-based? (Think people playing on a decent 3G connection.)

I think you can't send a UDP packet to a phone because the carrier will block it, but I'm not sure.


Yes an update on this topic for the mobile world would be very interesting.


I tried to build a UDP library once in C# with different methods (BeginReceiveFrom, ReceiveFromAsync, ReceiveFrom). You learn a ton and it's quite interesting. My goal was to recreate something similar to .NET Remoting based on UDP.

But be aware that it's a daunting task, because there are so many things you need to handle all together: lost packets, reordered packets, duplicate packets, connection handshakes, session handling, reliable/unreliable channels, packet resends, random disconnects, reconnects, network congestion, spoofing, protocol hacking attempts, dos-attacks, banning, encryption, etc...

If you're writing a UDP library, you also need to think of performance, object pooling, connection buffers, threading/async issues, and on top of that you also want to provide a nice API to the outside world for the client and server... Well, it gets messy...

If you're into this kind of thing, I can advise you to look at the Haxe libraries. I learned a lot from them. They are very simple, idiomatic server/client-side implementations which are easy to follow, even if you don't know Haxe [1][2].

[1]: https://github.com/svn2github/haxe/blob/master/std/neko/net/...

[2]: https://github.com/svn2github/haxe/blob/master/std/neko/net/...


Ah, I love Glenn's articles. I've learned so much from them: game physics, integrators, game loops, game networking, ...


Yes! His writeups on networked physics were extremely lucid and easy to follow. The way that he focuses on minimizing the bandwidth with bitpacking and goes from something absurdly large to equally absurdly small was a mesmerizing read for me as a programmer just getting started because of an interest in games.


Does anyone know if SCTP is suitable for / in use by any games? It supports streams to work around the head-of-line blocking problem TCP runs into, and it also supports opt-in unreliable delivery for game data. On the surface it seems ideal for games, though I don't know if it's getting much actual use.


Nobody who's in it for the money wants to be the first one to prove that SCTP can be deployed at scale. Application developers don't want to have to sort out getting a working SCTP implementation installed and configured on the proprietary operating systems they target, and past the firewalls of all the bottom of the barrel consumer routers.


You wouldn't use SCTP on top of IP; you would use it on top of UDP, implemented in userspace. Browsers do it this way (in WebRTC data channels).


"TCP has an option you can set that fixes this behavior called TCP_NODELAY"

That fixes nothing. Now you are sending too many small packets using too many syscalls. Just like UDP, buffer in user space, send in one go. If you do that, TCP_NODELAY makes no difference. (The exception is user input, if you want to send those as they happen, use TCP_NODELAY, but think about the why ... it has little to do with what this article is talking about.)
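
Concretely, "buffer in user space, send in one go" looks like this (Python sketch; connection setup elided apart from the option itself):

  import socket

  sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle
  # ... sock.connect(...) elided ...

  outgoing = bytearray()

  def queue(msg):
      outgoing.extend(msg)               # accumulate during the frame

  def flush():                           # called once per network tick
      if outgoing:
          sock.sendall(bytes(outgoing))  # one syscall, one burst of packets
          outgoing.clear()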

Games likely send data only around 25 times per second, and ping is likely <50ms. Waiting on a dropped packet and the delay it causes is unnoticeable. Add to that that clients will need some kind of latency compensation and prediction, independent of the TCP/UDP choice. Delays and then bursts of 100ms or so are doable.

The problem starts when the connection stalls for more than 100ms, especially in high bandwidth games. During the stall both behave the same. After the stall, TCP will be playing catchup and wasting more time receiving outdated data, and handing it to user space in order. UDP just passes on what is received, with a lot less catching up, and maybe some dropping of packets.

But gameplay has been degraded in both cases. UDP just has a higher chance of masking and shortening degradation more.

Anything more than that is basically cargo-culting, like this article.


I will preface this with the fact that the author worked on at least one such FPS game (Titanfall 2) and probably knows what he is talking about.

He offered one solution to fix one behavior of TCP: if the data written to the buffer is not big enough, TCP might hold it. Setting TCP_NODELAY forces the protocol to avoid holding data. But this is only written as a note to coders who might think TCP_NODELAY will fix TCP for action games; it doesn't actually, because the protocol has other characteristics that are undesirable in that type of game.

Moreover, you write this: waiting on a dropped packet and the delay it causes is unnoticeable. Unless you have an FPS game with TCP as its network protocol to validate this claim, I call bull. Many network programmers recommend against TCP. This is probably not simple cargo-culting.


"TCP_NODELAY forces the protocol to avoid holding data"

That is the misunderstanding. It will send one non-full packet, and only then "hold on to data" until the next packet is a full packet. But that condition resets when all data has been acknowledged. If you buffer in userspace and write data in one go, at 25 network fps, you basically never trigger the condition where TCP_NODELAY makes a difference.

It is not Nagle's algorithm but a variant of Minshall-Nagle that modern systems use.

There is a reason why FPS games use UDP, but that discussion should not start with TCP_NODELAY. That one is misused enough ... often when it fixes anything, it is because it masks a real underlying problem, had you fixed that problem, your system would react much better under stress or bad networks.


This feels like a nice introduction.

Anyone here have any experience using QUIC in any application of their own?

The custom congestion control makes me wonder if it only works alongside TCP traffic: once everything goes QUIC, then what happens? I looked for a while for the ancient story of some blazing-fast server OS TCP implementation that broke the rules, so it fell over when more than one such server was on the network, but couldn't find it.


QUIC seems like it'd be kind of neat for videogame networking. Does anyone know whether one can get a QUIC channel to deliver datagram-style, as in unreliable and no retransmitting? It'd be really cool if you could set up one QUIC connection that could work across IP address changes and then use it for N reliable TCP-like streams and M datagram-style game update packet channels. If you could set up that sort of thing from JavaScript in a web browser, that'd be straight up amazing.


Reminds me of this old article we had as assigned reading during a networking class in school.

The Internet Sucks: Or, What I Learned Coding X-Wing vs. TIE Fighter (http://www.gamasutra.com/view/feature/131781/the_internet_su...).


In short, TCP will work hard to deliver 100% of the packets. So when a packet is lost, TCP asks for it to be re-sent. This is fine for displaying a webpage or sending a file, but it can't be tolerated in games where time continuity matters. I think it's the same issue in VoIP and video conferencing too.


The reality is that these days there generally isn't any packet loss, so UDP vs TCP isn't as big an issue as it might have been in the past. In fact, TCP has a number of advantages these days, such as easier firewall traversal, WebSockets, etc.


>The reality is that these days there generally isn't any packet loss

Do you have any sources/data on that? I'm genuinely interested in where you drew that conclusion from. From my vantage point (anecdotal), ISPs and carriers routinely under-provision, congest their peerings, and don't notice a problem until well after a number of customers complain.


Bufferbloat means that at many potential bottlenecks, you don't get any packet loss until well after the point at which your latency-sensitive application (be it TCP or UDP) has given up hope. Major peering points are pretty much the only place where you won't see >1second queues when congested, simply because they can't afford buffers that deep for the rates they operate at. But for ordinary everyday occurrences, your congestion will be somewhere around the last mile, where you cannot expect packets to be dropped within a reasonable timeframe.


https://www.bufferbloat.net/projects/bloat/wiki/Introduction

Introduction

Bufferbloat is the undesirable latency that comes from a router or other network equipment buffering too much data. It is a huge drag on Internet performance created, ironically, by previous attempts to make it work better. The one-sentence summary is “Bloated buffers lead to network-crippling latency spikes.”

The bad news is that bufferbloat is everywhere, in more devices and programs than you can shake a stick at. The good news is, bufferbloat is now, after 4 years of research, development and deployment, relatively easy to fix. See: fq_codel: wiki. The even better news is that fixing it may solve a lot of the service problems now addressed by bandwidth caps and metering, making the Internet faster and less expensive for both users and providers.

The introduction below to the problem is extremely old, and we recommend learning about bufferbloat via van jacobson’s fountain model instead. Although the traffic analogy is close to what actually happens… in the real world, you can’t evaporate the excessive cars on the road, which is what we actually do with systems like fq_codel wiki.

Still, onward to the below.

[...]


I measure packet loss on my cable connection every 5 minutes, around the clock. Also, I've been developing realtime collaborative applications using TCP for over 20 years.


On the loopback interface, sure.

I work on protocols that use sketchy wifi on mobile units that roam among access points. Sometimes they roam into RF black holes. TCP is all kinds of b0rk3d for what I am doing.


>Sometimes they roam into RF black holes

I'm not sure how UDP is going to help if you have no connectivity.

These days TCP copes as well as UDP with temporary complete loss of connectivity, due to fast recovery. It is really just packet loss that kills TCP, but that isn't generally an issue these days if you're in the USA/Canada/Europe/Japan/Korea/Australia, and your wifi isn't crapping out.


> The reality is that these days there generally isn't any packet loss

I don't know which reality bubble you live in, but this is utterly false. I live in the countryside and routinely get an average of 40% packet loss to many, many websites. There are plenty of opportunities for packets to get lost somewhere between a client and a server.



That article is about network failures and misconfigurations. It's looking at reliability on a vastly longer timescale than the domain of congestion control operating on traffic from interactive applications. Basically all of the problems described in that article would simply result in a game dropping the connection, whether it was using TCP or UDP.


If you want to get familiar with TCP/UDP and are not gun-shy with C, I would suggest the Pocket Socket Guide (now the Practical Guide to TCP sockets)[1]. I have a really old edition, but it ages extremely well, and it's one of the books I always use to refresh myself on network programming basics.

[1]: https://www.amazon.com/gp/product/0123745403/ref=ox_sc_act_t...


A huge problem with TCP wrt gaming is the default delayed-ACK frequency on Windows, which is set to 2. This effectively almost doubles the latency of game connections (which send and receive a lot of time-sensitive small packets).

It can be changed with a registry setting (TcpAckFrequency), but you can't expect even a significant fraction of your users to do that. Why this isn't a per-connection option, sort of like TCP_NODELAY, is beyond me.
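
For reference, the commonly cited location for that knob (per Microsoft KB 328890) is a per-interface key; the GUID part depends on your adapter:

  HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{interface GUID}
      TcpAckFrequency (REG_DWORD) = 1    ; ACK every segment instead of every 2nd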


I thought it was somebody who had recently discovered the existence of the UDP protocol and was bragging about it to the world, but from skimming the article, it actually has some non-trivial remarks about UDP and TCP.

BTW, the article's angle is multiplayer game programming.


This guy (Glenn Gaffer) has been writing network protocols for games for years and shipped popular titles.

On his site you can also find an irate rant about people with no experience dismissing his claims as "reimplementing TCP". It has good points, but it is a frustrated rant (reader beware); still, it addresses a lot of commonly believed fallacies.

His latest project is an open source library for game networking over UDP: https://github.com/networkprotocol/libyojimbo

And he is correct: for "real time" games you must use UDP, or you will be in trouble under real-world network conditions. When everything runs smoothly, TCP and UDP perform almost identically, but when packet loss occurs, TCP makes things worse.


Glenn _Fiedler_ - Gaffer is his online name / handle.

Worked with Glenn on Freedom Force and Tribes. He knows his stuff!


There was a similar discussion in Hackernews earlier - https://news.ycombinator.com/item?id=7507377


What about UDT? I never tried it but always wondered about its benefits/disadvantages.


Another important difference: UDP can't be used in the browser =(


WebRTC is UDP. It's not easy, but it's there!


nitpick: It can be either


It is possible to use only one or the other.


With the exception of the QUIC protocol.

* TLS over 443/udp


Well, neither can TCP.


HTTP is built on top of TCP.

Websockets are TCP connections.


So? TCP cannot be used directly in the browser by application authors. It's an abstraction.


Couldn't you potentially use something like WebRTC to get UDP functionality in the browser?


That's a pretty obnoxious 'donate' button. They can't make it even larger?


The guy is providing valuable information for free, he can make his donate button as large as he wants.


Yeah, sure. He can put the Donate button all over the screen if he wants but then no-one would donate. I also think it's obnoxious.


Only if you donate more


I think TCP has an unfair reputation. Our networks are better now than 30 years ago ... Worst-case latency for TCP is something like 3 seconds, compared to the packet never arriving at all. The trick is to hide the lag with animations. I think Google, Facebook, and World of Warcraft use TCP for their real-time apps!?


> Our networks are better now than 30 years ago

You mean the networks in particular that you use? There's a lot of varied network equipment out there. Glenn wrote this article at least 8 years ago, and for the types of games he works on (realtime 64-player online FPSes), I'm sure TCP still wouldn't work as well as a UDP solution.

> The trick is to hide the lag with animations

That's an understatement. A significant amount of work needs to be done with client-side prediction in order to give the appearance of playing in real time consistently with other players.

> I think Google, Facebook, and World of Warcraft use TCP for their real-time apps

Which real-time apps are you referring to? In the context of this article, Glenn is talking about soft realtime systems (30 Hz updates or more). There aren't any user-facing apps from Facebook or Google that I'm aware of that I would call "real time". For WoW, timing is less critical than in an FPS, and obviously TCP works well for them.


3 seconds is 2900ms too long; a game would typically rather lose the packet than have it delay all transmission for that long. World of Warcraft is a relatively simple game that, as far as I know, is heavily client-authoritative, so the server is mostly just serving content and telling other clients the result of your actions. In a game like Call of Duty, while there is some form of lag compensation, if all the players in your game froze for 3 seconds while you shot them, the server would disregard your actions as impossible, because the players weren't actually there. It would have been better to see the players slightly out of sync, or have them teleport a short distance, by losing that one update and continuing to get the rest on time.


Google is not real-time compared to an FPS game running at 60 Hz.


Sad to see you being downvoted when you're absolutely right. Yes, UDP is better for a twitch game, at the expense of a lot more work. No, unless you're building a AAA mouse-and-keyboard shooter, you probably don't need more than a TCP connection sending data packets back and forth using a somewhat well-thought-out protocol.


I think the biggest mistake is only testing the app on localhost, while real-world users can experience up to 150ms of latency, plus fluctuations. Whether you use TCP or UDP, you still need a strategy for handling different levels of latency. For testing I use netem to add delay ...
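
For anyone who hasn't used it, a netem setup looks roughly like this (the interface name is an assumption; needs root):

  # add 150ms delay with 20ms jitter and 1% loss on eth0
  tc qdisc add dev eth0 root netem delay 150ms 20ms loss 1%
  # undo it when done
  tc qdisc del dev eth0 root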

The second biggest mistake is trusting the client. People never cheat in online games, right!? :) Network latency is often less than the time it takes to render, so naive solutions with simple broadcast servers work well until you need to deal with rogue clients using teleport hacks etc.

There are no set rules when making real-time distributed and scalable applications like MMO games. In my experience it's best to start with a naive demo/prototype to see where the bottlenecks are. I would suggest starting with TCP and only switching to UDP, or more likely making your own protocol (TCP on top of UDP :)), when you know the requirements.



