
Articles like this make me happy to use Phoenix and LiveView every day. My app uses WebSockets and I don’t think about them at all.





Is this similar to Microsoft's blazer?

MS's WebSocket solution is called SignalR.

It is also fire-and-forget, with fallback if WebSockets aren't available. I believe it falls back to HTTP long polling in that case, but don't quote me on that.

All the downsides of WebSockets mentioned in the article are handled for you. Plus you can reuse your existing auth solution, easily plug in logging, and so on. Literally every problem the author mentions is dealt with.

Given the author mentions C# is part of their stack, I don't know why they didn't mention SignalR or use it instead of rolling their own solution.


>Given the author mentions C# is part of their stack, I don't know why they didn't mention SignalR or use it instead of rolling their own solution.

Not saying this is why, but SignalR is notoriously buggy. I've never seen a real production instance that didn't have issues. I say that as someone who probably did one of the first real-world, large-scale rollouts of SignalR, about a decade ago.


I believe it's changed a lot since then? I was a bit cynical of it from that initial experience, but recent usage made it seem reliable and scalable.

To further clarify, I believe Blazor uses SignalR under the hood when doing server-side rendering. So the direct answer is probably "this is similar to a component used in Blazor".

Edit: Whoops, I lost context here. Phoenix LiveView as a whole is probably pretty analogous to Blazor.


You are correct.

I am using Blazor Server, and for some reason a server is not allowing WebSocket connections (still troubleshooting this), and the app switches to long polling as a fallback.


Phoenix predates Blazor, but they are both server-side rendered frameworks.

In terms of real-time updates, Phoenix typically relies on LiveView, which uses WebSockets and falls back to long polling if necessary. I think SignalR is the closest equivalent in the .NET world.


Odd, I wonder why I got downvoted for it; it was a genuine question.

Spelling it "blazer" instead of Blazor? Raw guess.

The icing on the cake is that you can also enable Phoenix channels to fall back to long polling in your endpoint config. The generator sets it to false by default.
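Something like this in the endpoint does it, where MyAppWeb.UserSocket stands in for your own socket module (a sketch, not tied to a specific Phoenix version):

    # lib/my_app_web/endpoint.ex
    socket "/socket", MyAppWeb.UserSocket,
      websocket: true,
      longpoll: true   # generators emit `longpoll: false`; flip it on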

Hah, seriously. My app uses WebSockets extensively, but since we're also using Phoenix, it's never been a source of friction in development. It really was just drop it in and scale to thousands of users.

Why couldn't Node.js with the uWS library, or Go with Gorilla WebSocket, handle tens of thousands of connections?

I think GP's point is that they feel Phoenix is simpler to use than alternatives, not necessarily that it scales better.

To clarify, it does scale better out of the box. A clustered WebSocket setup where reconnections can go to a different machine and resume state works out of the box. It's a LOT of work to do that in Node.js. I've done both.

This is using Elixir, right?

Yes.

I was thinking this exact thing as I was reading the article.

Every WebSocket setup is painless when running on a single server or handling very few connections...

I was on the platform/devops/man-with-many-hats team for an Elixir shop running Phoenix in k8s. WebSockets get complicated even in Elixir once you have 2+ app instances behind a round-robin load balancer: you now need to share broadcasts between app servers. Here's a situation you have to solve for with any app at scale, regardless of language:

App server #1 needs to send a publish/broadcast message out to a user, but the user who needs that message isn't connected to app server #1, which generated the message; that user is currently connected to app server #2.

How do you get a message from one app server to the other one, which holds the user's WS connection?

A bad option is sticky connections: user #1 always connects to server #1, and server #1 only does work for users connected to it directly. Why is this bad? Hot spots. Overloaded servers. Underutilized servers. Scaling complications. Forecasting problems. It goes against the whole concept of horizontal scaling and load balancing. And it doesn't handle side-effect messages, i.e. user #1000 takes some action which needs to broadcast a message to user #1, who is connected to who knows where.

The better option: you need to broadcast to a shared broker, something all the app servers connect to, so each can subscribe to the messages it should handle and pass them down the user's WS connection. That's a message broker. Postgres can be that broker; just look at Oban for real-world proof. Throw in pg's LISTEN/NOTIFY and you're off to the races. But that's heavy from a resources-per-DB-connection perspective, so let's avoid the ACID database for this.

OK, Redis is a good option then. Or, since this is Elixir land, use the built-in distributed Erlang stuff. But we're not running raw Elixir releases on Linux; we're running inside containers, on top of k8s. The whole distributed Erlang concept goes to shit once the Erlang procs are isolated from each other and out of their perfect Goldilocks getting-started-README world.

So, in containers in k8s, each app server needs to know about all the other app servers running. How do you do that? Service discovery! k8s already has service discovery, so how do I tell the Erlang VM about the other nodes I got from k8s etcd? Ah, a hex package. libcluster to the rescue: https://github.com/bitwalker/libcluster
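The k8s wiring with libcluster ends up roughly like this. A sketch only: the Service name and app name are made-up placeholders, and libcluster ships several Kubernetes strategies besides the DNS one shown here.

    # config/runtime.exs
    config :libcluster,
      topologies: [
        k8s: [
          # resolves peer pods via a headless Service's DNS records
          strategy: Cluster.Strategy.Kubernetes.DNS,
          config: [
            service: "myapp-headless",   # assumed headless Service name
            application_name: "myapp"    # node names look like myapp@<pod-ip>
          ]
        ]
      ]

    # lib/my_app/application.ex -- start the clustering supervisor
    children = [
      {Cluster.Supervisor,
       [Application.get_env(:libcluster, :topologies),
        [name: MyApp.ClusterSupervisor]]}
      # ...the rest of your tree
    ]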

So we'll now tie the boot process of our entire app to fetching the other app servers' pod IPs from k8s service discovery, then get a mesh of distributed Erlang nodes talking to each other and sharing message passing between them. That way, no matter which server the LB routes the user to, a broadcast from any one of them will be seen by all of them, and the one holding the WS connection will forward it down to the user.
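Once the nodes are clustered, the fan-out itself is the easy part. A minimal sketch with Phoenix.PubSub (module, topic, and event names assumed):

    # in the channel: subscribe this process to the user's topic on join
    def join("user:" <> id, _params, socket) do
      Phoenix.PubSub.subscribe(MyApp.PubSub, "user:#{id}")
      {:ok, socket}
    end

    # the broadcast is delivered cluster-wide; whichever node holds
    # the WS connection pushes it down to the client
    def handle_info({:notify, payload}, socket) do
      push(socket, "notify", payload)
      {:noreply, socket}
    end

    # from any node in the cluster:
    Phoenix.PubSub.broadcast(MyApp.PubSub, "user:42", {:notify, %{body: "hi"}})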

So now there's a non-trivial amount of complexity and risk added here. More to reason about when debugging. More to consider when working on features. More to understand when scaling and deploying. More things that can take the service down or keep it from booting. More places for race conditions, etc.

Nothing is ever so easy you don't have to think about it.


Did you try the Redis adapter that ships with Phoenix.PubSub (which Channels use)?
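(For anyone following along: the adapter actually lives in the separate phoenix_pubsub_redis package. Wiring it up looks roughly like this; the host and env var are made-up placeholders.)

    # lib/my_app/application.ex children
    {Phoenix.PubSub,
     name: MyApp.PubSub,
     adapter: Phoenix.PubSub.Redis,
     host: "redis.internal",
     node_name: System.get_env("NODE_NAME")}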

Yes, and I mentioned Redis above. You need a message broker with 2+ servers, be it pg, Redis, or distributed Erlang. It's still more complex than a single-server setup, which was my point. It's easy to say you don't have to think about things working when you have one server; it's not easy to say that past one server, because the added complexity changes the picture.

OK, but in literally any other language you'd minimally need this same setup to do pub/sub.

Elixir gives you more options and lets you do it natively.

Also, there are simpler options for clustering out there, like https://github.com/phoenixframework/dns_cluster (disclaimer: I am a contributor).
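Its whole integration is one child in the supervision tree; the query (a DNS name whose records list your peer nodes) is a placeholder here:

    {DNSCluster, query: System.get_env("DNS_CLUSTER_QUERY") || :ignore}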


OK, it wasn't clear whether you used Redis directly with some custom code or the integrated adapter.

Anyway, I agree that once you go beyond one server it's a whole new world, but I'm not sure it's easier in any other language.


If only there were a way to send messages from one host to another in Elixir.

Did you even read my comment? I literally talk about how to do that, in multiple different ways.

Were the servers part of an Elixir cluster? Because that functionality should be transparent. It sounds to me like you had them set up as two independent nodes that were not aware of each other.
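Assuming clustered nodes and a shared cookie, the primitive itself is a one-liner; the node names and registered process name here are made up:

    # started as: iex --name app@10.0.0.1 --cookie supersecret
    Node.connect(:"app@10.0.0.2")                       #=> true
    Node.list()                                         #=> [:"app@10.0.0.2"]
    send({:worker, :"app@10.0.0.2"}, {:hello, self()})  # message a registered process there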

Same here. It truly is a godsend.

I came here to say exactly this! Elixir and OTP (and by extension LiveView) are such a good match for the problem described in the post.

I was kind of wondering how nothing could have solved this at all, as opposed to a solution existing but just not being readily on the author's path.

Articles like this make me happy to use Microsoft FrontPage and cPanel; I don't think about HTTP or WebSockets at all.


