Ah ok. So the question is more ‘does WebRTC need the complexity’. I think so, reliable transport is pretty useless without backpressure.
So here are the recent projects I have worked with that need the complexity.
* Teleoperation - Robot control goes over a reliable stream. Metadata goes over a lossy stream; we don’t care about sensor readings from 3 seconds ago.
* File Transfer - If you send too fast you will cause complete packet loss. You also need to measure constantly, since link quality will fluctuate.
* Large Image Transfer - SCTP also handles breaking up messages to fit the MTU. If you just send UDP datagrams you need to probe how big a packet you can actually send.
* Unreliable Large Image Transfer - If you don’t care whether an image arrives and you lose packet 1, you shouldn’t send the rest of the datagram. This was big for one project; it saved a lot of bandwidth by making sure we didn’t finish sending messages that were already corrupted.
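The four cases above map onto per-channel reliability options in the standard `RTCDataChannel` API. A minimal sketch (the channel names and the 200 ms lifetime are illustrative assumptions, not from the original projects):

```javascript
// Channel "profiles" for the use cases above. SCTP lets each data channel
// pick its own reliability/ordering tradeoff at creation time.
// Note: maxRetransmits and maxPacketLifeTime are mutually exclusive per spec.
const profiles = {
  control: { ordered: true },                      // fully reliable, ordered: robot commands
  sensors: { ordered: false, maxRetransmits: 0 },  // fire-and-forget: stale readings are useless
  fileXfer: { ordered: true },                     // reliable; SCTP flow control paces the sender
  images: { ordered: false, maxPacketLifeTime: 200 }, // give up on stale frames after 200 ms
};

// In a browser this would be wired up roughly like:
//   const pc = new RTCPeerConnection();
//   const sensorChannel = pc.createDataChannel("sensors", profiles.sensors);
```

The partial-reliability modes (`maxRetransmits` / `maxPacketLifeTime`) are what make the "don't complete corrupted messages" optimization possible without a custom protocol.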
These are all extremely niche cases for WebRTC, aren't they? Niche here relative to actual usage, not the number of apps.
If the overwhelming majority of daily invocations of WebRTC protocols is due to simple videoconferencing applications, and all those invocations have to pay a complexity and security tax in order to potentially enable someone else's "teleoperation", that's a misallocation, isn't it?
Congestion control seems pretty important for videoconferencing, especially given that these are mobile apps that are especially prone to signal degradation/tower switching that will drastically affect throughput. I'd argue that being able to fall back to reliable text chat when your network is imploding and video/audio is unusable is a core feature of web conferencing tools.
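For what it's worth, the backpressure side of this is exposed to applications through `RTCDataChannel.bufferedAmount` and the `bufferedamountlow` event. A hedged sketch of a paced sender (the 1 MiB high-water mark is an assumed threshold; `channel` only needs `bufferedAmount`, `send()`, and the event, so a mock works for illustration):

```javascript
// Pause sending whenever the SCTP send queue grows past HIGH_WATER, and
// resume once the stack drains it below bufferedAmountLowThreshold.
const HIGH_WATER = 1 << 20; // 1 MiB queued; assumed, tune per application

async function sendAll(channel, chunks) {
  channel.bufferedAmountLowThreshold = HIGH_WATER / 2;
  for (const chunk of chunks) {
    if (channel.bufferedAmount > HIGH_WATER) {
      // Wait for the stack to signal that the queue has drained.
      await new Promise((resolve) =>
        channel.addEventListener("bufferedamountlow", resolve, { once: true })
      );
    }
    channel.send(chunk);
  }
}
```

Without this kind of pacing, a fast sender just piles data into the local buffer (or gets an exception), which is the "reliable transport is useless without backpressure" point from upthread.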
TIL! That definitely makes the use case a lot more dubious.
Doing some googling, I'm wondering whether the original intent was to switch everything over to SCTP: data channels seem to have used the same protocol as media pre-2014. Maybe no one got around to it and things ossified?