I'm curious about how race conditions would be handled when multiple users, on different regional LiveView servers, take conflicting actions.
In the "Let's walk it through" section, it seems like the Player-to-LiveView connection will process user input (e.g. a Tic-Tac-Toe move) and update the UI to acknowledge this, at which point the user can be assured that the LiveView server accepted their input. But it seems like this happens before the GameServer has also accepted the input. What if Player 2 made a conflicting play and their change was accepted by the GameServer before Player 1's change reached the GameServer?
Granted, Tic-Tac-Toe is simple enough that this is neatly avoided: each regional LiveView server has enough information to only allow the current player to make a play. But in more complex applications, how might you (anyone; curious for discussion) handle this?
One answer is something like: The LiveView server is effectively producing optimistic updates, and the GameServer would need to produce an authoritative ordering of events and tell the various LiveServers which of the optimistic updates lost a race and should be backed out.
> What if Player 2 made a conflicting play and their change was accepted by the GameServer before Player 1's change reached the GameServer?
Not sure I understand the question, but I don't see how this would happen.
On the BEAM it's processes all the way down. There's a process for that instance of the game, which is basically a big state machine, and 2 processes representing the client state, one for each player.
When the game (process) starts, it expects a message from player 1 (process), then one from player 2, and so on.
If there's a client timeout or network disconnection, the affected player process crashes, and if the app has been architected well, the other player process and the game process are in a supervision tree, so they crash as well, perhaps notifying the other player that the game has ended because of a disconnection from their peer.
But none of this will accept a move from player 2 when it's player 1's turn.
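A minimal sketch of what that turn enforcement might look like, assuming the game process is a plain GenServer and ignoring supervision (module, message, and player names here are made up for illustration, not taken from the article):

    defmodule Game do
      use GenServer

      # The game state tracks whose turn it is; only that player's moves are accepted.
      def init(_opts), do: {:ok, %{turn: :player1, board: %{}}}

      # This clause only matches when the sender is the player whose turn it is
      # (the repeated `player` variable must unify in both patterns).
      def handle_call({:move, player, square}, _from, %{turn: player} = state) do
        state = %{state | board: Map.put(state.board, square, player), turn: other(player)}
        {:reply, {:ok, state.board}, state}
      end

      # Any out-of-turn move falls through to this clause and is refused.
      def handle_call({:move, _player, _square}, _from, state) do
        {:reply, {:error, :not_your_turn}, state}
      end

      defp other(:player1), do: :player2
      defp other(:player2), do: :player1
    end

Since the BEAM delivers both players' messages to this single process one at a time, there is no window in which two conflicting moves can both be accepted.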
This is very interesting - I'm pretty unfamiliar with BEAM. Does this "processes all the way down" span across machines/VMs?
From the article, it seemed like there could be two players, each connecting to different LiveServer instances (on different VMs/hardware in different geographic regions) which in turn communicate async via one central GameServer.
In the article, it seems like a message from Player 1 to LiveServer 1 doesn't need to wait for the message to also reach the central GameServer and be acknowledged before LiveServer 1 acks the change back to Player 1. This seems to allow races, since the central GameServer is the source of truth but the Player1/LiveServer1 communication can complete a message/ack round-trip without waiting for acknowledgement from the GameServer.
I guess an alternative would be for the system to require a message from Player 1 to be passed to LiveServer 1, then passed on to the central GameServer, which acks back to LiveServer 1, which finally can ack back to Player 1 -- this means that Player 1 would still need to pay full round-trip latency to LS1 and then to the GameServer for any action.
> The browser click triggers an event in the player's LiveView. There is a bi-directional websocket connection from the browser to LiveView.
> The LiveView process sends a message to the game server for the player's move.
> The GameServer uses Phoenix.PubSub to publish the updated state of game ABCD.
> The player's LiveView is subscribed to notifications for any updates to game ABCD. The LiveView receives the new game state. This automatically triggers LiveView to re-render the game immediately pushing the UI changes out to the player's browser.
So you can see that when Player 1 does an action, the action is sent to the GameServer. Player 1's UI is only updated when the GameServer has published the new game state via PubSub back to Player 1's LiveView process, which then pushes it to the client. So there is the latency of going from client to LV to GameServer and back again, but there is no race possibility.
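Roughly, that round trip might look like the sketch below; the GameServer.move/3 wrapper, the "game:" <> game_id topic, and the message shapes are my own illustrative names, not the article's exact code.

    # In the player's LiveView: forward the click to the game server.
    # Note there is no optimistic assign here; the UI waits for the broadcast.
    def handle_event("move", %{"square" => square}, socket) do
      GameServer.move(socket.assigns.game_id, socket.assigns.player, square)
      {:noreply, socket}
    end

    # Subscribed in mount/3 via Phoenix.PubSub.subscribe(MyApp.PubSub, "game:" <> game_id).
    # Every LiveView watching this game receives the same, already-ordered state.
    def handle_info({:game_updated, game}, socket) do
      {:noreply, assign(socket, :game, game)}
    end

    # In the GameServer: the single game process serializes moves, so it decides
    # the authoritative order, then broadcasts the result to all subscribers.
    def handle_call({:move, player, square}, _from, state) do
      case apply_move(state, player, square) do
        {:ok, new_state} ->
          Phoenix.PubSub.broadcast(MyApp.PubSub, "game:" <> new_state.id, {:game_updated, new_state})
          {:reply, :ok, new_state}

        {:error, _reason} = error ->
          {:reply, error, state}
      end
    end

Because the game process handles moves one message at a time, a conflicting move from Player 2 is simply processed before or after Player 1's; whoever loses the race gets an error or a refreshed board rather than a divergent UI.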
> I'm pretty unfamiliar with BEAM. Does this "processes all the way down" span across machines/VMs?
Yes. For example, if you had a process registered under the name "Alice" on a different machine (a Node, as it's called in Erlang), you could send it a message from another node by passing the node identifier as an extra parameter, for example:
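The example itself appears to have been cut off; in Elixir it would look something like this (node and process names are made up):

    # On node :"b@example", register the current process under a local name:
    Process.register(self(), :alice)

    # From node :"a@example", address the process as {name, node}:
    send({:alice, :"b@example"}, {:hello, self()})

    # The same {name, node} tuple works for GenServer calls across nodes:
    GenServer.call({:alice, :"b@example"}, :ping)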
It's the same as with your mobile phone when you lose your Wi-Fi signal. Everything pauses and everybody has to wait.
Have you played games like HOMAM 1 or 2? You can't do anything while the CPU is playing its players. You can watch where it goes and what it does, but that's it. When it's finished, you go.
When there is a network error, some message should be shown in the meantime - a "please wait..." or a loading spinner.
For turn-based RPGs, chess, etc. this is a non-issue.
Of course, real-time action games etc. are not a good fit for this technology.
Latency is the reason. Even in a turn-based game it still feels really bad to make a move and have to wait for it to make its way through the round trip before seeing the result. In a game with strict ordering like Tic-Tac-Toe there is little reason not to show the chosen move immediately.
I mean, 100 ms between a click and a cross appearing on screen is not a great user experience, but it's far from the worst. If you're writing a game, a little client-side prediction is a good idea.
But if you have a form with instant validation, or any old regular UI, that is not necessary at all. The only built-in optimistic UI functionality in LiveView is disabling a button when you press it while waiting for the server to respond, to avoid double submissions.
> But if you have a form with instant validation, or any old regular UI, that is not necessary at all.
Arguably that's because you're trusting the client, so the built-in behavior is essentially optimistic by default. Then hopefully you're validating on submission server-side.
From the tech talks I vaguely recall, LiveView folks seem to disregard latency, which is where the entire model falls apart for me, because the moment you need more control on the client over what to do when the server is not responding, you're entirely out of luck.
Though maybe I'm wrong and there have been new developments to address this; I wasn't following too closely.
On the contrary, the LiveView documentation acknowledges this and suggests handling such scenarios with client-side tools:
> There are also use cases which are a bad fit for LiveView: Animations - animations, menus, and general UI events that do not need the server in the first place are a bad fit for LiveView. Those can be achieved without LiveView in multiple ways, such as with CSS and CSS transitions, using LiveView hooks, or even integrating with UI toolkits designed for this purpose, such as Bootstrap, Alpine.JS, and similar.
Sorry, that’s not acknowledging that latency can become an issue, that’s acknowledging that using server-side rendering for things that don’t require a server isn’t the best of ideas (shocker, I know).
Imagine an SPA for a basic CRUD system: there's a list view and a details view with a delete button that returns you to the list.
In LiveView, the server renders the list view, I click details, the server renders the details view, I click the delete button, the server renders the list view again.
If there's high latency, a connection error, etc. between clicking delete and getting back the rendered list, the user just has to wait.
In an SPA I could optimistically assume that the delete worked and render the list I already have cached without the deleted item, letting the user continue working immediately; if there was a disconnect/error, I could retry that delete in the background without bothering the user, only prompting them after some number of retries.
I don't see how I could implement this workflow in LiveView.
You can do that in LiveView just as easily. Remove the item client-side, then pushEvent to the server to handle the deletion. In case of any errors, notify the user, refresh the state, etc.
pushEvent, pushEventTo (from client to server) [0]
> In case of any errors, notify the user, refresh the state, etc.
so, "just use JS"?
every time you say "just use JS" you're diminishing the usecase of liveview because if i need so much js logic - why do i need to also use liveview if i can just use a framework/environment where i can share codebase between client and server seamlessly and have full control.
You stated "I don't see how I could implement this workflow in LiveView." I've presented you a way.
> In case of any errors, notify the user, refresh the state, etc.
This would all be done server-side and the client side would simply react automatically. The client-side code in this case would be minimal.
I don't think I'm diminishing anything. For quite a few years I was neck-deep in the React/Vue world. Now that I'm actively using LiveView, I can properly compare the two approaches, their pros and cons. For any new project, in the majority of cases I would pick Elixir with Phoenix LiveView instead of Elixir/Phoenix (backend) with React/Vue (frontend).
You've presented a workaround and, honestly, a hack, not something natively supported, because the workflow doesn't map to the LiveView model. Which is fine, but you have to be honest with yourself and acknowledge when stuff like that happens; otherwise you're in for lots of fun down the line.
> For any new project, in the majority of cases I would pick Elixir with Phoenix LiveView instead of Elixir/Phoenix (backend) with React/Vue (frontend).
This could just be recency bias. New tech is always exciting, old tech is always linked to memories of all the issues you’ve had in the past.
I wouldn't say it's a hack, but I would agree it's not the standard way to do things in the LiveView world, exactly because latency is an overstated or misunderstood issue. But if you want to do more, LiveView gives you the tools.
> This could just be recency bias. New tech is always exciting, old tech is always linked to memories of all the issues you’ve had in the past.
I still maintain some React/Vue apps and work with them on a daily basis, so it's not a distant memory.
I like choosing the right tool for the right job. For example, I would still choose React Native over two different codebases for a mobile app when a small team needs to move fast; for the 5% of cases where that wouldn't do, you'd need to go native. I see the situation similarly with LiveView: it's hard to beat its productivity and power in 95% of use cases.
I did a deep dive into LiveView and this was my takeaway.
It's nice tech, but once you start introducing JS again to improve UX, you really start asking yourself why you didn't just build it with React in the first place.
You could just delete the row with Alpine or a 3-line JS hook if you wanted to; it's quick and easy. That sounds like a strange workflow, though - it's generally better to make users wait for deletion.
Isn't it funny that when you're trying to praise tech you like, all sorts of examples jump to mind, but when you're faced with criticism of something you like, all that imagination vanishes and all existing examples can be dismissed as strange :)
I find it strange because that's not a behaviour I would use, but to each their own, it's the beauty of the web :).
If you really want to do it, you can add 3 lines to your project and that will work with any CRUD page you're building. I don't think that's unreasonable or difficult to do.
Edit:
Actually, thinking about it, if you just made a form for that delete button and put a phx-disable-with="" on the row, it would probably work straight away without any JS hook.
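For what it's worth, a rough sketch of that idea (assign and field names are illustrative, and MyApp.Items.delete/1 is a hypothetical context function):

    # In the list LiveView: one small form per row. phx-disable-with disables the
    # button and swaps its label until the server acknowledges the submit.
    def render(assigns) do
      ~H"""
      <ul>
        <%= for item <- @items do %>
          <li>
            <%= item.name %>
            <form phx-submit="delete">
              <input type="hidden" name="id" value={item.id} />
              <button type="submit" phx-disable-with="Deleting...">Delete</button>
            </form>
          </li>
        <% end %>
      </ul>
      """
    end

    # Delete server-side, then drop the item from the assigns so the list re-renders.
    def handle_event("delete", %{"id" => id}, socket) do
      :ok = MyApp.Items.delete(id)  # hypothetical context function
      {:noreply, update(socket, :items, fn items ->
        Enum.reject(items, &(to_string(&1.id) == id))
      end)}
    end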
Surely you recognize that there is a gap in functionality between LiveView and fully-fledged frameworks that provide more granular control over UI interactions?
Like which other frameworks? You can code that example feature you pointed out quicker than in React if you want to. You have all the control you want in LiveView.
If you want a delete button per row, no code is needed and phx-disable-with will work out of the box. If you want a global delete button at the top which deletes multiple rows front-end first, before acknowledgement (with checkboxes + delete, like in Gmail), it's 5 lines of JS maximum in a hook and you're set.
That you can't even acknowledge that there is a gap in functionality between LiveView, a fairly opinionated framework for server-side rendering, and fully-fledged client-side frameworks tells me this is not going to be a productive conversation, so I'm out. Bye.
Have you even used LiveView? It's not opinionated in any way; you can do whatever you want with it. It gives you extra features to remotely change pages, but if you don't like having them you don't have to use them at all and can plug in your favourite JS framework if you want (or use LiveView for parts of the app and not the rest).
I've worked for years with React and Angular and I don't really miss anything with a LiveView-based stack. LiveView's features give you 90% of what you want out of the box, and for the rest it's fine having a bit of JS here and there to ensure a good experience.
What exactly are you thinking in terms of latency becoming an issue under LiveView but not on normal requests?
Do you mean when websites just need a full refresh because they lost a request in some callback and no one implemented recovery across the 5 levels of callbacks, or something more specific?
Well, with LiveView you go "full server state" for everything that you would normally just use plain JS for. For instance, toggling a checkbox or collapsing a div.
Having latency on such low-level interactions might make the UI feel sluggish as a whole.
Yeah, certainly, but I'm not sure people are using it that way?
Most examples are there to show cool stuff you can do; they're not production-vetted. Just like most JS examples out there don't really mean that people should be publishing live credentials with their bundles.
I imagine in most cases one would leave everything that is not behind a logged-in status as normal routes/pages (sign-in, landing, contacts, etc.). Or, if not, those would be things requiring a socket/real-time interface anyway.
For the interactions, I don't think you even need Alpine.js. A plain setup on DOMContentLoaded, CSS dropdowns/collapsibles that are replaced on JS load, proxying LiveView DOM/morphdom events (if needed) so other components (even Vue, React, etc.) can listen to them, and CSS animations.
    import { setup_live } from "./phx/setup_live.js";
    import { setup_dropdowns } from "./interactivity/dropdowns.js";
    import { setup_collapsibles } from "./interactivity/collapsibles.js";
    import { setup_links } from "./interactivity/links.js";
    import { setup_inputs } from "./interactivity/inputs.js";

    function setup() {
      setup_live();
      setup_links();
      setup_dropdowns();
      setup_collapsibles();
      setup_inputs();
    }

    document.addEventListener("DOMContentLoaded", setup);
I went as far as having `onclick` handlers and global window functions. Complete heresy. Yes, it's not 100% JS-free, but it's pretty low overhead.
Then LiveView is mostly for your admin dashboards and logged-in user views, where it makes it pretty easy to do real-time-feedback types of interactions/views and SPA-like navigation. Since you have proper auth and identification for the user, you can just log them off, rate-limit, block an account, and close their socket if needed.
Off the top of my head, not really, but this would depend on a few things:
- Is it a vendor lib?
- It's not, but it's some particular file that is big enough that it doesn't make sense to include it in the root layout?
- It's neither, but it's functionality that can trigger multiple times, should only trigger once, and should only run on those pages because it can conflict? Or some variation of that?
I think they're all solvable, but what makes sense will depend on those points, and also on how you're using LiveView (is LiveView only for logged-in users/some auth, can you set those on the live_view layout...).
But in some cases this is a problem in SPAs too, where you have to use a snippet to check if the lib has been loaded and, if not, add a script tag to the body, or load it through JS, etc.
Possibly many LiveView tech demos/projects by the community haven't put much thought into latency, but LiveView itself even contains a built-in latency simulator [0]. Additionally, it can toggle classes on elements when you click them and turn them off again once an acknowledgment has been received from the backend [1]. Finally, you have the JS hooks, through which you can implement any kind of loading indication you want on the frontend. So the tools are there; they just need to be used.
One trick I remember using (~two years ago, so early LiveView) when handling click events was to put everything async / not needed for the reply inside a spawn().
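Something like this, if I remember it right - reply to the event immediately and push anything the reply doesn't depend on into its own process (the helper module here is hypothetical):

    def handle_event("clicked", params, socket) do
      # Fire-and-forget work the reply does not depend on (logging, notifications, ...).
      spawn(fn -> MyApp.Analytics.track(:clicked, params) end)

      # Only the state the UI actually needs is updated before replying.
      {:noreply, assign(socket, :clicks, socket.assigns.clicks + 1)}
    end

In a current app, a supervised Task would be the more robust version of the same idea.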
But yes as soon as you're on the internet you'll often feel the delay if your app is interactive.
The problem is that it's a bit random, because network and VM performance are never totally consistent.
I remember implementing a countdown (using a 1s send_after()) that would work fine most of the time, but sometimes there would be a hiccup: the countdown would stall a bit and then process the remaining ticks in an accelerated fashion, which was terrible from a UI point of view. So in the end I did it in JS, except for the update once the end was reached.
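For context, the server-driven version looked roughly like this; each tick schedules the next, so any hiccup in delivery means several :tick updates reach the client in quick succession (assign names are illustrative):

    def mount(_params, _session, socket) do
      if connected?(socket), do: Process.send_after(self(), :tick, 1_000)
      {:ok, assign(socket, :seconds_left, 60)}
    end

    def handle_info(:tick, socket) do
      seconds = socket.assigns.seconds_left - 1
      if seconds > 0, do: Process.send_after(self(), :tick, 1_000)
      {:noreply, assign(socket, :seconds_left, seconds)}
    end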
Work on challenging technology problems with a small, sharp, high-EQ team. Our product team is about 20 folks across the US, Spain, Norway, and Israel. Our tech stack is mostly Python/Django/DRF, React/Redux, and Backbone/Marionette. Our real time collaboration services are a combination of websockets, ShareDB, and WebRTC. We deploy to AWS with Terraform and Kubernetes (EKS).
This sounds really cool! What’s the company? (I also work on WebRTC-based classrooms, at Minerva - we haven’t looked outside of voice in the audio sphere though.)
Minerva Project | Software Engineer, UX Designer | Remote or San Francisco, CA | Full-Time
Come help build the Minerva Forum: push the boundaries of WebRTC and dynamic real time web applications in order to create a compelling education environment. You can see a video of Forum at work: https://www.youtube.com/watch?v=Gk5iiXqh7Tg
To learn more about who we are, our engineering culture, and whether this is the right place for you, read our Key Values profile: https://www.keyvalues.com/minerva
Work on challenging technology problems with a small, sharp, high-EQ team. Our engineering team is about 20 folks - 60% in SF, 30% remote in the US and 10% remote in Europe. Our tech stack is mostly Python/Django/DRF, React/Redux, and Backbone/Marionette. Our real time collaboration services are a combination of websockets, ShareDB, and WebRTC. We deploy to AWS - migrating from OpsWorks (Chef) to EKS (Kubernetes).
By integrating advanced classroom technology with research-backed pedagogy and curriculum, Minerva enables institutions of all types and sizes to improve learning outcomes for students around the world. Minerva also formed an alliance with Keck Graduate Institute (KGI) to establish the Minerva Schools at KGI in 2013, a WASC-accredited, four-year, undergraduate institution that provides an exceptional and accessible education along with an immersive global student experience.
Minerva Project | Full-Stack Software Engineer | Remote or San Francisco, CA | Full-Time
Come help build the Minerva Forum: push the boundaries of WebRTC and dynamic real time web applications in order to create a compelling education environment. You can see a video of Forum at work: https://www.youtube.com/watch?v=Gk5iiXqh7Tg
Work on challenging technology problems with a small, sharp, high-EQ team. Our engineering team is about 20 folks - 60% in SF, 30% remote in the US and 10% remote in Europe. Our tech stack is mostly Python/Django/DRF, React/Redux, and Backbone/Marionette. Our real time collaboration services are a combination of websockets, ShareDB, and WebRTC. We deploy to AWS - migrating from OpsWorks (Chef) to EKS (Kubernetes).
We'll have our KeyValues profile ready for next month's thread. (Hi @lynnetye - we are excited to be working with KeyValues!) In the meantime, you can read our Candidate FAQ Google Doc: https://bit.ly/2yuh5d5
By integrating advanced classroom technology with research-backed pedagogy and curriculum, Minerva enables institutions of all types and sizes to improve learning outcomes for students around the world. Minerva also formed an alliance with Keck Graduate Institute (KGI) to establish the Minerva Schools at KGI in 2013, a WASC-accredited, four-year, undergraduate institution that provides an exceptional and accessible education along with an immersive global student experience.
We deeply believe that education is critical to invest in, and get the opportunity to do this every day. We are transforming education at every level. We started with higher education, as that is the category that the world looks to as the pace setter in education, and are branching out into professional learning and high school. We truly believe education has the power to transform individuals and enable them to solve the world’s most critical problems.
Very cool! I assume this primarily supports testing one browser's rendering against itself, but do you see value in comparing snapshots across browsers? E.g. to surface a rendering inconsistency between Firefox and Chrome.
Cool project! Thanks for publishing and sharing it.
It'd be interesting to know what topic terms it produces for each of my repos. It looks like it's taking all the repo descriptions, producing a topic model over that corpus with a single topic (`LdaModel(num_topics=1)`), and retrieving the top N terms for that topic. Those topic terms will be the most frequent words from the topic, so I think this will end up producing the most frequent words from the cleaned token set.
I'd be curious to see what happens if you could run LDA over the full dataset, produce multiple topics, and suggest repos based on those topics. This would be a pretty fun extension to the project!
If you're just running LDA over the repo description (and not looking into the content of any file, e.g. README), might http://ghtorrent.org/ be able to provide this?
Or, it might be interesting to try producing a vector representation per repo by taking the description (and readme?), and doing something like: produce word vectors for each word, and sum the word vectors. https://spacy.io/ is a nice-to-use library that could help here.
Once you have a vector representation for each repo, a distance metric like cosine similarity could be used to find related repos. Or (depending on the dataset size / performance) an approximation like spill trees or an LSH forest.
In the "Let's walk it through" section, it seems like the Player-to-LiveView connection will process user input (e.g. a Tic-Tac-Toe move) and update the UI to acknowledge this, at which point the user can be assured that the LiveView server accepted their input. But it seems like this happens before the GameServer has also accepted the input. What if Player 2 made a conflicting play and their change was accepted by the GameServer before Player 1's change reached the GameServer?
Given, in Tic-Tac-Toe, the game is simple enough that this is neatly avoided: each regional LiveView server has enough information to only allow the current player to make a play. But in more complex applications, how might you (anyone; curious for discussion) handle this?
One answer is something like: The LiveView server is effectively producing optimistic updates, and the GameServer would need to produce an authoritative ordering of events and tell the various LiveServers which of the optimistic updates lost a race and should be backed out.