I'm the guy that made Agar.io, Diep.io and a few smaller games. I analyzed the possibility of using WebRTC in my games several times so far, but it seems that right now, it's still hard to use in a server-client architecture. You need to bring this [1] behemoth and all of its dependencies to your project dependencies on the server side, even though you only care about a tiny bit of it (unreliable data channels). It's unlikely that people will start using it until there is an easy stripped-down version that only deals with data channels.
Hi, I'm a member of the WebRTC team at Google, and I wrote a lot of the data channel and network code that's part of that behemoth.
I'm glad you got this to work :). I've always hoped someone would do exactly what you've done (use the data channel code on a server to do low-latency client<->server communication).
We're aware of the difficulty and are actively working on making it easier to use just the components you need for data channels (ICE, DTLS, and SCTP). It might take a while, but it's one of our goals.
Hey, I think you misread my comment: I haven't actually implemented it in any of my games. I've played with it for a bit but haven't gotten around to actually adding it to any games. Sorry!
But yeah, I really hope it becomes easier to integrate, as right now that's the biggest barrier to putting it into the custom-written C++ server that I use for all my games. They already support UDP-only communication for desktop and mobile builds, and bringing it to the web would make the experience a lot better. Thank you!
Indeed, I misread your comment. Well, hopefully in the future my reading will turn out to be correct. I don't think it would be that hard to write a data-channel-only server using BoringSSL and usrsctplib. See a comment of mine further down the page for how you could do that.
Or you can just wait until we have our code refactored :).
I also played with the idea of using WebRTC data channels for client server applications. From a system design perspective it looks good, since it can provide lots of concurrent flow controlled streams (like HTTP/2) with a potentially lower latency.
However from the pure engineering side things don't look so great for the reasons Matheus28 already mentioned: WebRTC (even data channels only) is a big stack of massive complexity. It's not something you want to implement or understand fully on your own within a reasonable amount of time (like you could do with WebSockets).
The most reasonable way to get WebRTC support seems to be integrating Google's native WebRTC library. One downside is that it's a big dependency that a lot of people might be uncomfortable bringing in (although you say you are working on making it smaller). The other downside is that it's not only big but also a native dependency, which I and other people want to avoid wherever possible outside of C/C++ land.
The alternative would be to develop a pure Go/.NET Core/Java/etc. WebRTC data channel implementation. However, most of the required subcomponents are missing. As far as I know, none of those platforms support the required DTLS encryption in their (extended) standard libraries, and there are no libraries for SCTP on top of UDP around either. Getting this to work is therefore a serious effort, and anybody who approaches it must ask whether the effort is justified, or whether WebSockets and HTTP streaming are good enough. For the latter, performance might even approach WebRTC data channel performance if QUIC gets standardized and widely deployed.
I think the situation might be different if WebRTC data channels had standardized only plain (or perhaps encrypted) UDP. Anybody who needed streams and multiplexing could still implement them on top of it, on both the server and the client (JS) side. The current solution provides a nicer out-of-the-box API, but supporting it outside of the browser is hard.
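If data channels had been specified as just encrypted datagrams, the streams-and-multiplexing layer described above could be quite small. Here is a minimal Go sketch of one possible framing (entirely made up, not any standardized format): a channel ID and a sequence number prefixed to each datagram.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Frame is a hypothetical 6-byte header multiplexing logical channels
// over a single datagram socket: 2 bytes channel ID, 4 bytes sequence
// number, followed by the payload.
type Frame struct {
	Channel uint16
	Seq     uint32
	Payload []byte
}

// Encode serializes a frame for the wire.
func Encode(f Frame) []byte {
	buf := make([]byte, 6+len(f.Payload))
	binary.BigEndian.PutUint16(buf[0:2], f.Channel)
	binary.BigEndian.PutUint32(buf[2:6], f.Seq)
	copy(buf[6:], f.Payload)
	return buf
}

// Decode parses a datagram back into a frame, rejecting short ones.
func Decode(buf []byte) (Frame, error) {
	if len(buf) < 6 {
		return Frame{}, fmt.Errorf("short datagram: %d bytes", len(buf))
	}
	return Frame{
		Channel: binary.BigEndian.Uint16(buf[0:2]),
		Seq:     binary.BigEndian.Uint32(buf[2:6]),
		Payload: buf[6:],
	}, nil
}

func main() {
	wire := Encode(Frame{Channel: 1, Seq: 42, Payload: []byte("state")})
	f, _ := Decode(wire)
	fmt.Println(f.Channel, f.Seq, string(f.Payload)) // 1 42 state
}
```

Per-channel ordering and reliability policies (drop stale, retransmit, deliver in order) would then be decided by the receiver based on the sequence numbers, which is roughly what SCTP does for you, with a lot more machinery.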
Using WebRTC for communicating game state seems like a hack. WebRTC is too complex, with too many parts. I try to get into it every year or so, but I can't even get the demos to work. Compare that to WebSockets, which are well supported and very easy to use, with fallback libraries such as SockJS.
If possible, I think it would be a good idea to break up the different parts of WebRTC so that they can work independently of each other. The abstractions are also a bit leaky, as you need to know about the underlying layers to use it. Another approach would be a low-level API, which might be easier to implement in the browser, and then count on libraries to provide good abstractions.
Oh, how I wish I could upvote this 100 times. Nay, 1000!
It was already bad enough 10ish years ago when it was a comparatively small pile of hacks, and there was hope that something could be done about it. But now? It's an enormous pile of hack upon hack. Full stack engineer? More like full hack engineer!
The main reason I worry about losing my job or moving to a new location is that web development jobs are a dime a dozen nowadays, while more traditional development is seeming less and less relevant. As much as I hate C++, I'll stick with it over the monstrosity that is Web 2.x.
[Insert the usual complaints about shitty languages, tooling, and gazillions of frameworks / reinvented wheels here.]
I can relate, as I've been a web developer for 18 years, and before that I did some qBASIC. Some aspects of web development have been stable for over ten years and feel like a foundation. Everything in computing is a "hack" though: the transistor is a hack, and so is everything on top of it. It's only when things get stable and acquire a tight abstraction that they start feeling non-hacky. But when something is stable (read: perfect) it no longer evolves and starts to get old. So you either use old stuff that worked 10 years ago and will keep working for many years, or accept that the bleeding-edge technology you use today might be a bleeding mess tomorrow.
One thing I like about computers and programming is that it's all created by humans. I tried to go into physics and biology, but once you go deep nothing really makes sense; it's all random. With programming there's always (well, most of the time) a reason behind design decisions.
I don't have any problem using old stuff that worked 10 years ago. Our field is young, and there haven't been any major advances in the way programming is done for the last 25 years or so. Even if there had been major advances, I don't see anything inherently bad or wrong about oldness; conversely, I don't see anything inherently good about newness. Something can be 30 years old yet still be better than any of its alternatives, even in our field where obsolescence is a fact of life. I also don't think stable software is prohibited from continued evolution—obvious examples include the common open-source Unix-like OSs, many of which have been stable for years, yet have continued to evolve new features like loadable kernel modules, direct rendering interfaces, network firewalls, containers/jails/zones, nifty file system improvements, etc. Even Berkeley sockets were once introduced by an OS that was already old enough to be licensed to drive.
Also, I disagree that all of computing is built upon nothing but hacks. Computing is underpinned by lines of theory whose fundamentals can legitimately be described as elegant or even beautiful. I'm thinking of things like universal Turing machines, the lambda calculi, type theory, the structured programming theorem, theories of concurrent, parallel, and/or distributed computation, automata theory, computability theory, complexity/tractability, universal/abstract algebra, relational algebra, unification, etc., but the elegance doesn't end where the theory ends. Many people, including myself, would consider Lisp to be profoundly beautiful, for example, perhaps even on multiple levels. Whether you like the language or not, it was a crowning achievement of early computation science, and it is far from unique in that regard.
Although I personally loathe the state of Web development, I don't hate Web developers. On the contrary, I'm very glad that there's no shortage of people who seem to enjoy it—especially as a long-time Linux user, I'm glad that since the dawn of "Web 2.0", I've had to worry less and less about being left out because third-party developers decided not to support my OS: more and more, I can just pop open my Web browser and use the exact same software anyone would use on Windows or MacOS. It's a double-edged sword, for sure, since along with the convenience and compatibility, browsers have become insane, bloated resource hogs, and if I'm not connected to the Net, there's a chance I won't be able to use the software I want or access the data I want. On top, philosophically I can't help but feel that XaaS for various values of X and "the Cloud" are regressions back to a time when personal compute power was prohibitively expensive, for reasons that are billed as convenience but in reality only serve to remove the freedoms of end users; notwithstanding, I'm just going to focus on the technological issues that I perceive for now, since the philosophical issue(s) demand a different class of solution altogether, nor do said issue(s) belong uniquely to the Web.
I suppose most of the issues I have with the Web as a software platform stem primarily from one force: organic growth over the course of two decades, as opposed to thoughtful and deliberate design by an engineer or group of engineers. The way the Web is now, especially when viewed as a software delivery and execution platform rather than as a document delivery platform, it's a Frankenstein's monster that has been pieced together from numerous disparate protocols and data formats all designed by different people, revised by still more different people, and oftentimes extended by yet more different people in order to cover use cases that had not been considered by the original designer(s), and then connected in the most straightforward ways possible, where each connection might consist of an entirely different mechanism than any other (rather than, for example, extending the protocols such that they provide a uniform connection mechanism).
However, I don't think organic growth on its own necessarily leads to monstrosities. I think that the force of organic growth has been guided by a couple of factors, similar to how evolution is guided by various forms of selection. For one, throughout the history of the Web, the goalposts of its continued development have moved time and again. Once a system for the distributed service of hypertext documents, it quickly became a service for hypermedia in general. Then it became a service for interactive trinkets. And it quickly became a service for commerce and enterprise. With the advent of Java applets and ActionScript-programmable Flash "movies", it became a service (but not yet a platform) for the delivery of applications. Then, of course, AJAX sparked a fundamental change in how the Web was viewed by developers and users alike: it finally became not only an application delivery service, but also a software platform! Since then, the goalposts have shifted only slightly, and the majority of these goals can be summarized as a desire to further enrich the software platform, first by doing the things Flash was once used for, then the things Java was once used for, coming to the point where there is a desire for a Web page to be able to do the things native desktop applications are typically used for, including even AAA games. For each set of goalposts, the context of the design of new Web technology has been different; as such, the notion of what has constituted a "good" design decision has also changed: sometimes, what was a good design decision at the time became not-so-good in a new context. The result has been—rather than a clear progression towards a single goal—a bunch of tumorous outgrowths in various directions with a line of best fit trending from "hypertext document service" to "hardware and operating system abstraction layer and virtual machine for shitty, poorly-performing, inefficient, possibly-distributed applications". 
The curious "architecture" of the Web is reflected in the architecture of Web applications: the number and complexity of the technologies that are needed to create even the most basic Web application is, frankly, ridiculous. And on top of it all, where platforms with like goals such as the JVM and CLR manage to provide first-class support for multiple programming languages, the Web manages to offer only one, and it happens to be particularly grimy (my fingers are crossed for WebAssembly).
The lesson of all this (and it's not a lesson unique to the Web by any means), is this: backwards compatibility is a bitch.
tl;dr all these young punks need to get off my lawn
I've built some multiplayer real-time web game prototypes using WebSockets with a SockJS fallback and the canvas 2D context. It's very fun and productive. You don't have to know OpenGL or TCP/IP, and it works everywhere, even on a 5-year-old Nokia.
I've been trying to find some numbers on the performance characteristics of WebRTC unreliable data channels (compared to the spectrum between pure UDP and WebSockets). Do you know of any such comparisons? Or any intuition on the matter?
Because SCTP is configurable, we can open multiple data channels with different settings, which saves us tons of work!
Typically we have:
an unreliable and unordered data channel for game state data and user inputs
a reliable and ordered data channel for chat messages, scores and deaths
Also, WebRTC is inherently p2p, so you can connect directly to the other peer without the server getting involved except for connection setup (STUN and TURN). You can basically message over the LAN, even. Latency improves in many scenarios.
Latency has very little to do with the p2p aspect. In a client-server setting, the use of RTP/SCTP to communicate payloads between two clients would have significantly less latency than, say, a P2P TCP connection.
Haha, whoa, I feel like I've spotted a celebrity! I mentor a kid here in rural Missouri and he and his friends play agar and diep. I don't know how they found them way out here, but they did and they love them (especially diep).
Anyway, I've been idly thinking of ways to try to kindle an interest in programming in the guy, since he's very smart and seems to like computers. I played Cookie Clicker with him, and then showed him how to "hack" it from the console by playing with the JS a little bit (it's all local).
But since he loves diep so much I think that would really give him motivation to learn more if he could fiddle with things in that game somehow. But I realize it's harder given that it's a client/server model. I don't suppose you have any ideas of things we could try? Is there a client-only mode where we can fiddle in the console? Or is the server code open source so I could run it locally or something?
Oh, ha, and something that's been killing me... why's it called "diep"?
For Diep.io, it's completely server-side and very little happens on the client. That's intentional: Agar.io had a problem with "private servers" popping up, which were actually people ripping (read: stealing) the client-side code, putting their ads in it, hosting it on their own website, and pointing it at their server emulator.
It comes from an old game I made when I was a kid called Diepix. There's no reason for the name "Diepix" other than sounding cool.
This could solve the diep.io monthly updates issue. I really hope you allow more devs.
And also (if you don't know what Discord is, it's a website where people can chat live), join the Diep.io Discord made by the moderators of the Diep.io subreddit: https://discordapp.com/invite/YDSF2wD#discordbutton
I have not seen the game, but to me it conjures up images of going into a cave or a hole. "Diep" actually means _deep_ in my native language. Thanks for clarifying where the name comes from :-)
Care to share a bit about the technical side of agar.io? I've played it a bit (yeah right, a bit...) and I've been wondering how you managed to make it so smooth without introducing conflicts between clients (e.g., on my side B ate A, but somebody else sees C eating A). There is lag sometimes, but I've never encountered "glitching" where a position calculated locally has to be visibly adjusted to match what came from the server.
Also, if it's not a secret, what backend did you use to handle so many concurrent connections, and how many were you able to keep per box?
So basically a cell won't be eaten/exploded by a virus until the server says so? I would have thought that would be perceived as too slow.
By "per server" do you mean per game room, or is one game room equal to one Linux box? If so, I guess handling the game logic was the bottleneck, not the number of concurrent connections?
Also, congrats on the success and making some really cool games.
Per game room (each room is a process). I end up just using boxes that have 1 CPU core and run just that game room in there. Except for some dedicated servers that have 40+ cores, in which we run 40+ processes.
On Agar.io doing all the collision checking and encoding the packets is the biggest bottleneck. Similarly for Diep.io. Number of players of course increases those two factors almost linearly. For example, Diep.io doesn't process shapes that aren't being transmitted to anyone.
At first I tried checking every creature for collisions against everything else, but unsurprisingly that was too slow (N^2). To reduce the checks, I put each creature in a grid cell based on its position, then check for collisions only against creatures in the same or adjacent cells.
I think overlapping grids would be even more efficient, or perhaps doing these checks on the GPU.
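The uniform-grid broad phase described above can be sketched as follows (a Go sketch with illustrative names and an arbitrary cell size, not the actual game code):

```go
package main

import "fmt"

// Broad-phase collision sketch: bucket entities into grid cells by
// position, then only test pairs within the same or adjacent cells
// instead of all N^2 pairs.

type Entity struct {
	ID   int
	X, Y float64
	R    float64 // circle radius
}

const cellSize = 64.0 // illustrative; tune to typical entity size

type cellKey struct{ cx, cy int }

func keyFor(x, y float64) cellKey {
	return cellKey{int(x / cellSize), int(y / cellSize)}
}

// CollidingPairs returns the IDs of overlapping circle pairs, checking
// only entities that share a cell or sit in a neighboring cell.
func CollidingPairs(ents []Entity) [][2]int {
	grid := make(map[cellKey][]Entity)
	for _, e := range ents {
		k := keyFor(e.X, e.Y)
		grid[k] = append(grid[k], e)
	}
	var pairs [][2]int
	for _, e := range ents {
		k := keyFor(e.X, e.Y)
		for dx := -1; dx <= 1; dx++ {
			for dy := -1; dy <= 1; dy++ {
				for _, o := range grid[cellKey{k.cx + dx, k.cy + dy}] {
					if o.ID <= e.ID { // count each pair once
						continue
					}
					ddx, ddy := e.X-o.X, e.Y-o.Y
					rr := e.R + o.R
					if ddx*ddx+ddy*ddy <= rr*rr {
						pairs = append(pairs, [2]int{e.ID, o.ID})
					}
				}
			}
		}
	}
	return pairs
}

func main() {
	ents := []Entity{
		{ID: 1, X: 10, Y: 10, R: 8},
		{ID: 2, X: 20, Y: 10, R: 8},   // overlaps entity 1
		{ID: 3, X: 500, Y: 500, R: 8}, // far away
	}
	fmt.Println(CollidingPairs(ents)) // entities 1 and 2 collide
}
```

With roughly uniform entity density, each creature is tested against a small constant number of neighbors, which is what takes this from N^2 toward linear in practice.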
How much of a gap is remaining until stripped-down data channels libraries like librtcdcpp (C++) [0] and librtcdc (Python) [1] could be suitable for production? Is anyone here using one of these in production today?
I haven't tried either of those, but they're headed in the right direction (other than the Boost and log4cxx dependencies; I'm very picky about what dependencies I bring into my projects).
Hi! Nick here, author of librtcdcpp - mind elaborating on why you don't like log4cxx? Boost I get - I'm only using one or two easily-replaced headers, and would like to remove it next release :)
Hey! I'm just a bit opposed to having logging libraries inside the libraries that I use. If they're only needed for debug builds of the library, I'm fine with that, but if I already have a logging library, bringing in another would be unnecessary. Thanks for the reply!
Hi! Thanks for linking my project - I'm currently trying to make librtcdcpp production-ready after having many issues with librtcdc. Patches very welcome!
I think calling WebRTC a behemoth is a bit of an understatement. Excuse my sharp-tongued criticism, since I understand a lot of hard work has gone into it, but WebRTC has no regard for interoperability. As of now, it is nothing more to me than a VPN deanonymizing exploit.
WebRTC leaks true IP addresses unless it is outright disabled in [supported] browsers. It is a huge annoyance, and I would be hard-pressed to view it as more than a gimmick that complicates the already messy landscape of web development.
Do you adapt the amount of data sent based on the bandwidth? If so, how?
To me, the best way to do this involves knowing how often the TCP socket is retransmitting, which is information typically not available at the WebSocket level.
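Since TCP retransmission counts aren't visible to the application, one hedged workaround is application-level acknowledgements: the client acks each snapshot, and the server skips a tick's snapshot while too many bytes are still unacknowledged (the next snapshot supersedes it anyway). A toy Go sketch; the budget and all names are made up:

```go
package main

import "fmt"

// SnapshotSender applies application-level backpressure over a
// reliable transport such as a WebSocket: it tracks bytes that were
// sent but not yet acked, and refuses new snapshots past a budget.
type SnapshotSender struct {
	unacked int // bytes sent but not yet acknowledged
	budget  int // arbitrary illustrative cap
}

func NewSnapshotSender() *SnapshotSender {
	return &SnapshotSender{budget: 8 * 1024}
}

// TrySend reports whether a snapshot of the given size should go out
// now. On false, the caller simply drops this tick's snapshot.
func (s *SnapshotSender) TrySend(size int) bool {
	if s.unacked+size > s.budget {
		return false
	}
	s.unacked += size
	return true
}

// Ack is called when the client confirms receipt of n bytes.
func (s *SnapshotSender) Ack(n int) {
	s.unacked -= n
	if s.unacked < 0 {
		s.unacked = 0
	}
}

func main() {
	s := NewSnapshotSender()
	fmt.Println(s.TrySend(6000)) // true: within budget
	fmt.Println(s.TrySend(6000)) // false: would exceed budget
	s.Ack(6000)
	fmt.Println(s.TrySend(6000)) // true: budget freed by the ack
}
```

This is a crude proxy for congestion (a backed-up connection stops acking promptly), not a replacement for seeing actual retransmissions.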
Just WebSockets in a C++ server. I have my own implementation that I wrote from scratch, because when I started, µWS [1] didn't exist yet. I recommend their implementation to anyone planning on writing a WebSocket server in C++.
We've been using WebRTC Datachannels for multiplayer gaming in the browser in our game editor Construct 2 (www.scirra.com) for a couple of years now. Generally they work great! However the main problem we have is switching tab suspends the game, which if you're acting as the host, freezes the game for everybody. This is really inconvenient. There ought to be some way to exempt tabs with WebRTC connections from being suspended. I filed a bug for it here: https://bugs.chromium.org/p/chromium/issues/detail?id=676036
Wait, can you do a pop-out window? That way it's always open and you can't have multiple tabs. The main tab can control the pop-out to kill the data channel...
Have you tried popping up a message in the other browsers saying "userBob switched browser tabs, disabling his host. Please contact userBob to switch back, and remember: if you are the host, don't switch tabs."? Then it would only happen when people were being hostile or negligent.
Also, have you tried a Web Worker or something similar?
Exactly, the appeal of P2P is you don't need to run servers. And it seems like a reasonable feature to allow browsers to reliably host multiplayer games.
When I started development, WebRTC wasn't very well supported; now I'm considering a hybrid. I already use two WebSockets: one for binary state snapshots, and the other for important JSON updates like entity creation and chat. It would be interesting to add WebRTC to my servers just for the state snapshots.
It's hard to say this. On the one hand, they're totally missing out on extra features like WebRTC. On the other hand, they're the only browser with 100% ES2015 shipping and 100% ES2016+ complete in dev.
You can beat on them for missing the bonus questions, but they're the only one that answered all of the questions on the main exam correctly, and they were the first ones done...
I just remember it was a nightmare to get it working in both Chrome and Firefox: the tutorials were outdated, the official docs at w3.org were wrong (suggested settings would break the app), error handling was inconsistent, there were lots of race conditions, the API was too complex and awkward, etc.
I've been using https://github.com/keroserene/go-webrtc for a couple of prototypes. It's not pure Go but bindings against the C++ implementation; still, it works (what little of it is exposed).
By the way, I work on the WebRTC team at Google, and I don't think it would be that hard to write a data channel server in Go. Here's how you should do it.
1. Get an "SDP offer" from the client. Parse it mainly to get the DTLS fingerprint. You may also choose to get the SCTP max message size.
2. Open a UDP socket. Listen for incoming STUN binding requests and send back binding responses.
3. Once a valid STUN binding request is received, listen for DTLS packets and hand those over to BoringSSL. Also listen to when BoringSSL wants to send a packet and send those out on the UDP socket back to the client.
4. Once BoringSSL finishes the DTLS handshake and is processing incoming SCTP packets, listen for those and hand those over to usrsctplib. Also listen to packets usrsctplib wants to send and hand those over to DTLS to send.
5. Process data channel messages from usrsctplib and send data channel message through usrsctplib. Note that you can ignore the whole "OPEN message" protocol if you call PeerConnection.createDataChannel with the "negotiated: true" option on the WebRTC client side. You can specify the SID you want to use for the data channel as well.
6. Serialize an "SDP answer" message, which basically just hands back the same-looking blob of text, but with a different DTLS fingerprint (the one for the server's certificate). It also must have two random strings: the ICE username fragment (ufrag) and ICE password (pwd). If you pass that answer to the WebRTC client, all the ICE, DTLS, and SCTP work should happen, and after around 6 round trips on the network, you should see incoming and outgoing messages.
Ok, that probably sounds like a lot, but most of the work is done by BoringSSL and usrsctplib. The bulk of the Go code would be implementing the STUN messages and gluing everything together. Good luck to whoever tries :).
Not until server-side implementations exist. Depending on the physical location and amount of data being replicated, you'll need the flexibility to move master-game simulation logic from dedicated server to shared client. WebRTC works great if you're doing p2p among browsers, which severely limits the kinds of multiplayer experiences you can create.
I love WebRTC and think it's great. I studied and read all related RFCs and tried implementing a stack in JavaScript for Node.js. Got stuck on DTLS, then switched jobs.
I do think WebRTC is "the future," but we don't seem to be moving AT ALL towards that future. Having multiple implementations would move the needle. Otherwise, there was this big leap forward when it was developed and added to browsers, but not much since.
This paper always baffled me in terms of why it was significant. Basically, TCP is just nasty to ALL other transports passing through the same node/pipe (including other TCP). I don't know why, a) this is a surprise or b) only UDP is singled out as an affected transport in this scenario.
I think WebRTC is secure by default: everything is supposed to be encrypted and served over HTTPS. Chrome, for example, doesn't allow insecure usage of the APIs. See this for more: http://webrtc-security.github.io
I'm still blocking it with https://github.com/ChrisAntaki/disable-webrtc-firefox because it leaks the local IP addresses of the client over VPNs. Yes, addresses because there can be many of them at the same time, so it's useful for fingerprinting.
> I really don't know why the local IP addresses must be known to the server, maybe there are good technical reasons, but it's not OK.
So they can pair you with other clients sharing a local network with you? This allows the client to talk peer to peer over their LAN which is extremely valuable.
The public address could be enough for the server to realize there are two browsers from the same LAN. Then it tells the browsers to look for each other.
I'm not really sure that it is practical with large LANs (a /16) but it doesn't look impossible. If they design for privacy they could develop an algorithm that works.
And if I connect with a VPN, I probably don't want to bypass it. I've got customers who accept inconveniences to protect their network with VPNs. Should browsers break that with WebRTC? I don't think so. It's a vulnerability that I keep blocked.
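For what it's worth, the public-address grouping idea proposed above could look something like this on the signaling server (a Go sketch with made-up names; note it only ever sees public addresses, never private ones):

```go
package main

import "fmt"

// Signaling groups sessions by the public address the server observes.
// Only browsers behind the same public IP would be told to attempt a
// direct LAN connection; how they then find each other on the LAN is
// the unsolved part this thread debates.
type Signaling struct {
	byPublicIP map[string][]string // public IP -> session IDs
}

func NewSignaling() *Signaling {
	return &Signaling{byPublicIP: make(map[string][]string)}
}

// Register records a session and returns the other sessions already
// seen behind the same public IP: the candidate LAN peers.
func (s *Signaling) Register(sessionID, publicIP string) []string {
	peers := append([]string(nil), s.byPublicIP[publicIP]...)
	s.byPublicIP[publicIP] = append(s.byPublicIP[publicIP], sessionID)
	return peers
}

func main() {
	sig := NewSignaling()
	fmt.Println(sig.Register("alice", "198.51.100.4")) // []
	fmt.Println(sig.Register("bob", "198.51.100.4"))   // [alice]
	fmt.Println(sig.Register("carol", "203.0.113.9"))  // []
}
```

As the replies below point out, the hard part isn't this grouping; it's that without exchanging private addresses, two peers behind the same NAT have no reliable way to actually reach each other on the LAN.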
> The public address could be enough for the server to realize there are two browsers from the same LAN. Then it tells the browsers to look for each other.
> I'm not really sure that it is practical with large LANs (a /16) but it doesn't look impossible. If they design for privacy they could develop an algorithm that works.
Not even a little bit practical. Most big LANs disallow broadcasts, or limit them in various ways. The only way for peers to find each other is to know each other's private IP addresses. And private IP addresses contain nothing of value.
Now, if you're using a VPN which you intend to protect your privacy with, well... there are countless other problems with that if you just turn it on, use the same software as always, and think you are now 'anonymous'.
> And if I connect with a VPN I probably don't want to bypass it. I've got customers that accept unconveniences to protect their network with VPNs. Should browsers break it with WebRTC? I didn't think so. It's a vulnerability that I keep blocked.
It's not a vulnerability. It is a very, very useful feature. You just happen to think that enabling a VPN will magically give you a privacy shield, which is misguided at best.
In terms of plain and direct security holes in WebRTC, there is an extended attack surface. But your browser has this enabled anyway, no matter what you as an application developer or end user do.
In terms of security properties delivered to the application/end user, there are security and privacy advantages in end-to-end communication instead of involving servers. But it all depends on your use case and threat model.
Just read the docs. WebRTC data channels use SCTP running over DTLS (Datagram Transport Layer Security), which uses the same algorithms as TLS for encrypting traffic. It's as secure as TLS is, and hopefully you're using 1.2 or so.
WebRTC Datachannels are awesome, I've always thought they could be leveraged for efficient peer to peer gaming but this is definitely interesting as well.
Getting started with WebRTC data channels is easy, and you can even have your server in Python Flask, but keep in mind you'll have to handle multiple concurrent connections.
The data channel seems like one of the most under-utilized features of WebRTC. On our platform we've seen a handful of companies develop real-time games and applications. Nice to see people developing applications other than audio/video ones.
Going to also shamelessly promote our WebRTC platform, www.temasys.io. It has very strong support for data channels and socket messaging.
If you've designed your protocol such that you're sending packets smaller than 1.5k (or better, 576 bytes, the IPv4 minimum reassembly size), then over a really good connection, TCP will perform like UDP, or close enough in any case. With modern first-world broadband (which doesn't include all of the US, unfortunately), you just want UDP to minimize the fraction of the populace that gives you a 1-star rating and says your game sucks because their network sucks.
The big difference comes in with packet drops. TCP, with its delivery and ordering guarantees, will cause huge latency outliers. Even when those outliers are rare, they will make your game look completely janky. With good broadband, those outliers will be very, very rare, and the game will look good.
It's not just packet drops and MTU; you also have congestion control, Nagle's algorithm, and a host of other things working against you in a TCP connection.
You're trying to turn a stream protocol into a message based protocol and you're going to experience an impedance mismatch if you try to use the wrong protocol for your use case.
> You're trying to turn a stream protocol into a message based protocol and you're going to experience an impedance mismatch
Definitely. But the reality is that it won't bite you for the part of your audience that's on premium broadband and corporate networks. Conversely, it's going to really really suck for people in crowded apartments on WiFi and bad connections of all kinds. I know this because I'm operating an MMO server on Amazon AWS right now.
Note that signaling for WebRTC is still recommended to be sent over HTTPS! Signaling (before the p2p connection is fully negotiated) can be eavesdropped if not done properly.
I have experience using it in a VoIP app. The main issue is that there are constant regressions, with no real stable version.
For example, if a new Android device comes out, you can run into issues with gain and echoing. They also maintain a list of peculiar devices which don't strictly adhere to standards (mostly Samsung devices). So you end up having to upgrade to a later version, which then causes regressions on many other devices that previously had no issues.
I am of course talking about Google's WebRTC. The closest thing to a stable version is the commit hash used in Chrome, but those have just as many regressions as any other version. Building it is also a bit of a pain, although nowhere near as bad as it used to be; once it's set up, it does the job quite well.
Interesting. I'm aware of the hassle of building it, but constant regressions... that means it's not production-ready yet. Regarding the same regressions being in Chrome: does that mean the WebRTC in the stable version of Chrome also has the same compatibility/gain/echoing problems with new Android devices?
Yeah, there are frequent regressions in Chrome, although I've never personally hit them in the browser. But I've used the same commit hash, and the tickets I've come across also tend to refer to the browser. Most of the issues I've come across are device-specific.
Sadly I believe its as production ready as its ever going to be, they seem to be taken the motto of "move fast and break things". The issue isn't the project, its the carefree way that its being handled.
WebRTC is a technology. It is not an application, and cannot replace applications.
In the cases of some of those applications, it could conceivably be used to implement future versions of those applications -- but that certainly doesn't equate to "killing" them.
I wasn't saying that it would kill them out of the box. I'm thinking that those applications rely on a heavy dose of proprietary technology to pull off their magic, and WebRTC is going to enable countless competitors to arise with much less investment in tech.
In the long run, this tends to kill off established applications.
Sorry. You are absolutely missing the point of my posts.
I know it can feel good to tell people they are wrong, but you are off the mark.
The point was WebRTC could kill these technologies, all of which are proprietary or one-trick ponies. At least that was my question. I was looking for what others thought on the topic. I clearly was not trying to convince anyone IRC was purely proprietary, and anyone who would read what I wrote and would think that was clearly looking for a fight. I'm not interested. Move along.
When a game server connects to the master to be registered, what the master gets is the game server's IP address relative to itself. If this address happens to be accessible from a client, everything's fine. However, in some cases it is not. For example, in the case described where 127.0.0.1 is the game server's address (relative to the master), if the master sent the address 127.0.0.1 to a client, the client couldn't connect.
So we need to find out the public IP address in some cases, but sometimes it is better to use the local IP address (for example if the master, the game server and a client are on the same network and the client is not allowed to access the internet).
So basically finding out which IP address the master must send to the client is a huge pain. WebRTC does it automatically (to traverse NATs).
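To illustrate that pain: below is a naive, hand-rolled sketch (hypothetical helper, not from this thread) of the decision a master server would have to make itself, picking which of the game server's known addresses to hand to a given client. WebRTC's ICE machinery sidesteps all of this by gathering every candidate address and simply trying each pair until one connects.

```python
import ipaddress

def pick_address(candidates, client_ip):
    """Pick which of the server's addresses to advertise to a client.

    candidates: the server's addresses, ordered most-local first
    (loopback -> LAN -> public).  A loopback address only works if
    the client is on the same host; a private address only works if
    the client is also on a private network.  This heuristic is far
    from complete (it can't tell two different private networks
    apart), which is exactly why ICE tries all candidates instead.
    """
    client = ipaddress.ip_address(client_ip)
    for addr in candidates:
        a = ipaddress.ip_address(addr)
        if a.is_loopback and not client.is_loopback:
            continue  # 127.0.0.1 is useless off-host
        if a.is_private and not (client.is_private or client.is_loopback):
            continue  # LAN address is useless for an internet client
        return addr
    return None

addrs = ["127.0.0.1", "192.168.1.5", "93.184.216.34"]
pick_address(addrs, "192.168.1.20")  # LAN client -> "192.168.1.5"
pick_address(addrs, "1.2.3.4")       # internet client -> "93.184.216.34"
```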
And the only platform that works on my windows desktop, macbook, work Linux machine, Android phone, iOS tablet, Xbox, TV, and watch with an "install time" measured in seconds, a top of the line security sandbox system that takes very little effort from the user to configure correctly, a very visible badge telling you if the connection is encrypted or not, and an incredible amount of easy customization by the user which is so powerful it can completely change how an app looks with zero input or permission from the app creator.
Reliable low latency transmission of audio and video is also one of those things that looks easy unless you try to actually get it working in practice.
I think the reason is that if we ever get a real solid web platform it will start competing directly with all of the vendors' proprietary operating systems. So there are billions of dollars worth of reasons to maintain just enough incompatibility and spotty featuresets.
I mean I have been waiting for two decades for it to happen.
I think there's room for discourse in this area TBQH. There are many alternative terms that would communicate the purpose of the framework and its class members just as well, if not better than something with 'slave' or 'enslave' in the name. Especially if you want to potentially attract adoption among users who might have ancestors who were enslaved, and also incidentally have a harder time getting a job in tech. I know I for one would like to use a software package that doesn't also remind me of a painful memory.
I have family in Suriname. They would laugh in your face for this argument. Shit happened, we're past it. Pretending it never happened and erasing anything that could "trigger" someone is the wrong way to deal with the shit that happens in life.
Using unreliable or out-of-order data for game state sounds hard and difficult to test, no? Are there code level patterns that help reduce the complexity while keeping the possible performance benefits?
If your entity state changes ("move player 1 to 3,4", "open door 3") are dropped or arrive out of order, the logic no longer works. This way the unreliability seems to leak directly into the game logic/scripting layer.
On the face of it the complexity cost seems high for supporting bad network connections. A normal TCP connection rarely sees losses or reorderings. There are exceptions (twitch games) of course.
Somehow networked shooters manage to leverage it. They probably only use unreliable messages for the twitch-mechanic hot path, which is heavily analyzed, iterated and tested anyway.
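One classic code-level pattern that answers the question above (the snapshot / "latest state wins" approach popularized by Quake-derived netcode, not something described in this thread) is to never send state *changes* unreliably. Instead, each unreliable packet carries a complete state snapshot plus a sequence number; late or duplicated packets are simply discarded, and dropped packets are harmless because the next snapshot fully supersedes them. A minimal receiver sketch (class and method names are illustrative):

```python
class SnapshotReceiver:
    """Latest-state-wins receiver for unreliable, unordered packets.

    Each packet carries the FULL game state plus a sequence number,
    so no individual packet is load-bearing: the game logic layer
    only ever sees the newest complete snapshot, and never observes
    loss or reordering directly.
    """

    def __init__(self):
        self.last_seq = -1
        self.state = {}

    def on_packet(self, seq, snapshot):
        if seq <= self.last_seq:
            return False          # stale or duplicate: drop silently
        self.last_seq = seq
        self.state = dict(snapshot)  # replace wholesale, never patch
        return True

r = SnapshotReceiver()
r.on_packet(1, {"p1": (0, 0)})
r.on_packet(3, {"p1": (3, 4), "door3": "open"})
r.on_packet(2, {"p1": (1, 2)})   # arrives late: ignored
```

Rare events that must not be lost ("player bought item") still go over a reliable channel; the unreliable path is reserved for continuously re-sent state like positions, which is what keeps the pattern testable: the receiver's behavior depends only on the highest sequence number seen, not on delivery order.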
[1] https://chromium.googlesource.com/external/webrtc