Hacker News new | past | comments | ask | show | jobs | submit | more jayunit's comments login

The “third part” of the post starts with “I’ve benchmarked cola against 3 other CRDTs implemented in Rust: diamond-types, automerge and yrs.” This cola library appears to perform favorably in operation speed.

I’d be curious to know about memory usage, too.


Seems like cola doesn't support pruning so I'd be worried about the performance when working with files with lots of edits.


Congrats on the release! Having built several LLM apps in the past months and embarking on a couple new ones, I’m excited to take a look at Langfuse.

Are there any alternatives you’d also suggest evaluating, and any particular strengths/weaknesses we should consider?

I’m also curious about doing quality metrics, benchmarking, regression testing, and skew measurement. I’ll dig further into the Langfuse documentation (just watched the video so far) but I’d love any additional recommendations based on that.


SEEKING WORK | San Francisco, CA or REMOTE | Consultant / Principal Software Engineer / fCTO | LLMs, Python, React, Full-Stack & Product Engineering, DevOps

Building generative AI and want to get the most out of it? Thin LLM wrappers are fine short-term plays but they aren't defensible - let's put together a product strategy that takes advantage of your unique process, data, or systems - and then build it.

Together with my business partner, we have advised founders/leaders and built LLM products across education (scalable personalized content authoring, AI tutoring/coaching, AI evaluation and feedback), general productivity, marketing tech, and healthcare.

We are exploring technical feasibility for a few startup ideas and are open to part-time work up to 20h/wk to aid our (currently bootstrapped) runway.

Technologies: LLMs (closed and open), Python (Django/FastAPI/Scikit-Learn), React (Next.js/Redux/etc), Devops (AWS/K8s/Docker/serverless/CDNs), RDBMS (MySQL/Postgres), Realtime (CRDT/OT/Websocket/WebRTC).

Remote: Yes, I have worked on and led a global distributed product & eng team since 2016 and am very comfortable with remote.

Willing to relocate: No

LinkedIn: https://www.linkedin.com/in/jasonpmorrison/

Website: https://jasonpmorrison.com/

Email: jason.p.morrison@gmail.com


You might enjoy https://www.seriouseats.com or The Food Lab by J. Kenji López-Alt.


Strong agree!

For JavaScript, I suggest folks check out fast-check [0] and this introduction to property-based testing that uses fast-check [1].

This is broadly useful, but one specific place I've found it helpful was to check redux reducers against generated lists of actions to find unchecked edge cases and data assumptions.

[0] https://github.com/dubzzz/fast-check [1] https://medium.com/criteo-engineering/introduction-to-proper...
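To make the reducer-checking idea concrete, here's a dependency-free sketch (the reducer and its bug are hypothetical; fast-check replaces the hand-rolled generator below with its arbitraries like fc.array(fc.oneof(...)) and adds shrinking to a minimal failing sequence):

```javascript
// Hypothetical counter reducer with a subtle bug: an "add" action with a
// missing `amount` produces NaN.
function reducer(state = { count: 0 }, action) {
  switch (action.type) {
    case "add":
      return { count: state.count + action.amount };
    case "reset":
      return { count: 0 };
    default:
      return state;
  }
}

// Hand-rolled action generator standing in for fast-check's arbitraries.
function randomAction() {
  const types = ["add", "reset", "unknown"];
  const type = types[Math.floor(Math.random() * types.length)];
  // Sometimes omit `amount` to mimic malformed actions.
  return Math.random() < 0.5
    ? { type }
    : { type, amount: Math.floor(Math.random() * 10) };
}

// Property: for ANY sequence of actions, count stays a finite number.
function checkProperty(runs = 200) {
  for (let i = 0; i < runs; i++) {
    const actions = Array.from({ length: 20 }, randomAction);
    const final = actions.reduce(reducer, undefined);
    if (!Number.isFinite(final.count)) {
      return { ok: false, failing: actions };
    }
  }
  return { ok: true };
}
```

Running checkProperty() surfaces the NaN edge case that example-based tests tend to miss; with fast-check you'd also get the failing action list shrunk down to a one-element repro.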


This is great! Super fun to play. Congrats on releasing it!


Can't pass up a chance to share the delight that is (was?) PizzaTool: https://donhopkins.medium.com/the-story-of-sun-microsystems-...


Very cool! Thanks for sharing. Was binding Monaco particularly challenging?

I’m curious what the larger project/product is, if you are able to share.

I’ve used ShareDB (from @josephg in this thread) for a collaborative coding project with Jupyter as the execution backend. I just geek out seeing OT/CRDT projects in the wild :)


I'm curious about how race conditions would be handled when multiple users, on different regional LiveView servers, take conflicting actions.

In the "Let's walk it through" section, it seems like the Player-to-LiveView connection will process user input (e.g. a Tic-Tac-Toe move) and update the UI to acknowledge this, at which point the user can be assured that the LiveView server accepted their input. But it seems like this happens before the GameServer has also accepted the input. What if Player 2 made a conflicting play and their change was accepted by the GameServer before Player 1's change reached the GameServer?

Granted, Tic-Tac-Toe is simple enough that this is neatly avoided: each regional LiveView server has enough information to only allow the current player to make a play. But in more complex applications, how might you (anyone; curious for discussion) handle this?

One answer is something like: The LiveView server is effectively producing optimistic updates, and the GameServer would need to produce an authoritative ordering of events and tell the various LiveServers which of the optimistic updates lost a race and should be backed out.
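That authoritative-ordering idea can be sketched like this (illustrative JavaScript, not Phoenix code; all names are made up for the example):

```javascript
// A sketch of an authoritative game server that serializes moves and rejects
// the loser of a race. Regional servers would apply moves optimistically and
// back them out when the authority says no.
class GameServer {
  constructor() {
    this.board = Array(9).fill(null);
    this.turn = "X"; // whose turn the authority believes it is
  }
  // Returns true if the move was accepted in authoritative order.
  submit(player, cell) {
    if (player !== this.turn || this.board[cell] !== null) return false;
    this.board[cell] = player;
    this.turn = player === "X" ? "O" : "X";
    return true;
  }
}

const authority = new GameServer();

// Two "regional" servers optimistically accept conflicting moves for cell 4...
const optimisticX = { player: "X", cell: 4 };
const optimisticO = { player: "O", cell: 4 };

// ...but the authority serializes them: X's move wins the race, O's move is
// rejected and its optimistic update would need to be backed out.
const xAccepted = authority.submit(optimisticX.player, optimisticX.cell);
const oAccepted = authority.submit(optimisticO.player, optimisticO.cell);
```

The key design point is that acceptance by a regional server is provisional; only the single authority's ordering is final.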


> What if Player 2 made a conflicting play and their change was accepted by the GameServer before Player 1's change reached the GameServer?

Not sure I understand the question, but I don't see how this would happen.

On the BEAM it's processes all the way down. There's a process for that instance of the game, which is basically a big state machine, and 2 processes representing the client state, one for each player.

When the game (process) starts, it expects a message from player 1 (process), then one from player 2, and so on.

If there's a client timeout or network disconnection, the player process affected crashes, and if the app has been architected well, the other player process and game process are in a supervision tree, so they crash as well, perhaps notifying the other player that the game has ended because of a disconnection from the other peer.

But none of this will accept a move from player 2 when it's player 1's turn.


Thanks for your reply!

This is very interesting - I'm pretty unfamiliar with BEAM. Does this "processes all the way down" span across machines/VMs?

From the article, it seemed like there could be two players, each connecting to different LiveServer instances (on different VMs/hardware in different geographic regions) which in turn communicate async via one central GameServer.

In the article, it seems like a message from Player 1 to LiveServer 1 doesn't need to wait for the message to also reach the central GameServer and be acknowledged before LiveServer 1 acks the change back to Player 1. This seems to allow races, since the central GameServer is the source of truth but the Player1/LiveServer1 communication can complete a message/ack round-trip without waiting for acknowledgement from the GameServer.

I guess an alternative would be for the system to require a message from Player 1 to be passed to LiveServer 1, then passed on to the central Game Server which acks back to LiveServer 1, which finally can ack back to Player 1 -- this means that Player 1 would still need to pay full round-trip latency to LS1 and then to the GameServer for any action.

Thanks for any light you can shed on this!


Here's the relevant part from the article:

> The browser click triggers an event in the player's LiveView. There is a bi-directional websocket connection from the browser to LiveView.

> The LiveView process sends a message to the game server for the player's move.

> The GameServer uses Phoenix.PubSub to publish the updated state of game ABCD.

> The player's LiveView is subscribed to notifications for any updates to game ABCD. The LiveView receives the new game state. This automatically triggers LiveView to re-render the game immediately pushing the UI changes out to the player's browser.

So you can see that when Player 1 performs an action, the action is sent to the GameServer. Player 1's UI is only updated once the GameServer has published the new game state via PubSub back to Player 1's LiveView process, which pushes it to the client. So there is the latency of going from client to LV to GameServer and back again, but there is no possibility of a race.


> I'm pretty unfamiliar with BEAM. Does this "processes all the way down" span across machines/VMs?

Yes. For example, if you had a process registered under the name :alice on a different machine (a Node, in Erlang terms), you could send it a message from another node by passing the node identifier as part of the destination tuple:

  [coolest_node | _rest_of_nodes] = Node.list()

  Process.send({:alice, coolest_node}, :hi, [])


Ah, that's indeed a good question, though those are implementation details of Fly.io I'm unaware of.


You could use CRDTs for more complex games that are not turn based.

https://moosecode.nl/blog/how_deltacrdt_can_help_write_distr...


It is turn based, so sync is easy: unless it is your turn, whatever you do can be safely ignored. Once you move to non-turn-based games, though...


It's the same as with your mobile phone when you lose your wi-fi signal. Everything pauses and everybody has to wait.

Have you played games like HoMM 1 or 2? You can't do anything while the CPU is playing its players. You can watch where it goes and what it does, but that's it. When it's finished, you go.

When there is a network error, some message ("please wait...") or a loading spinner should be shown in the meantime.

For turn-based RPG games, Chess, etc., this is a non-issue.

Of course, real-time action games and the like are not a good fit for this technology.


Your answer is pretty close to what most people do: https://en.m.wikipedia.org/wiki/Client-side_prediction


There's no need for client-side prediction or optimistic UI in (most) LiveView projects.

It's all done on the server.


Latency is the reason. Even in a turn based game it still feels really bad to make a move and have to wait for it to make its way through the round trip before seeing the result. In a game with strict ordering like Tic Tac Toe there is little reason not to show the chosen move immediately.


Sure, that's why I said most use cases.

I mean, 100 ms between a click and a cross appearing on screen is not a great user experience, but it's hardly the worst. If you're writing a game, a little client-side prediction is a good idea.

But if you have a form with instant validation, or any old regular UI, that is not necessary at all. The only built-in optimistic UI functionality in LiveView is disabling a button when you press it while waiting for the server to respond, to avoid double submissions.


> But if you have a form with instant validation, or any old regular UI, that is not necessary at all.

Arguably that's because you're trusting the client, and the built-in behavior is therefore optimistic by default. Then hopefully validating on submission server-side.


From the tech talks I vaguely recall, LiveView folks seem to disregard latency, which is where the entire model falls apart for me because the moment you need more control on the client over what to do when the server is not responding - you’re entirely out of luck.

Though maybe I’m wrong and there have been some new developments to address this; I wasn’t following too closely.


On the contrary, the LiveView documentation acknowledges this and suggests handling such scenarios with client-side tools:

> There are also use cases which are a bad fit for LiveView: Animations - animations, menus, and general UI events that do not need the server in the first place are a bad fit for LiveView. Those can be achieved without LiveView in multiple ways, such as with CSS and CSS transitions, using LiveView hooks, or even integrating with UI toolkits designed for this purpose, such as Bootstrap, Alpine.JS, and similar.

https://hexdocs.pm/phoenix_live_view/Phoenix.LiveView.html#m...


Sorry, that’s not acknowledging that latency can become an issue, that’s acknowledging that using server-side rendering for things that don’t require a server isn’t the best of ideas (shocker, I know).


You would have latency in all apps that require a server round-trip regardless of the stack used.

When you need to go to the server, you go to the server. There's no other way around it.

I would be curious to hear how you solve this in other stacks? SPAs, whatever, when they need something from the server, they reach for the server.


imagine an SPA for a basic CRUD system. there's a list view and a details view with a delete button that returns you to the list.

in liveview the server renders me the list view, i click details, the server renders me the details view, i click the delete button, the server renders me the list view.

if there's big latency/a connection error/etc between clicking delete and getting back the rendered list - the user just has to wait.

in an spa i could optimistically assume that the delete worked and render the list i already have cached without the deleted item, letting the user continue working immediately, and if there was a disconnect/error - i could retry that delete in the background without bothering the user, only prompting them after some number of retries.

i don't see how i could implement this workflow in liveview.
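For reference, the SPA-side pattern being described might look like this (a sketch; `api` is a stand-in for whatever HTTP client you use):

```javascript
// Optimistic delete with background retry: remove the item from the cached
// list immediately, retry the server call quietly, and only bother the user
// (by rolling back) after the retries are exhausted.
async function optimisticDelete(cache, id, api, maxRetries = 3) {
  const removed = cache.items.find((item) => item.id === id);
  cache.items = cache.items.filter((item) => item.id !== id); // instant UI update

  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      await api.delete(id);
      return { ok: true };
    } catch (err) {
      if (attempt === maxRetries) {
        cache.items.push(removed); // roll back and finally surface the error
        return { ok: false, error: err };
      }
    }
  }
}
```

A production version would also debounce/queue concurrent deletes, but the shape of the workflow is the part under discussion here.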


You can do that in LiveView just as easily. Remove the item client side, then pushEvent to the server to handle the deletion. In case of any errors, notify the user, refresh the state, etc.

pushEvent, pushEventTo (from client to server) [0]

push_event (from server to client) [1]

[0] https://hexdocs.pm/phoenix_live_view/js-interop.html#client-...

[1] https://hexdocs.pm/phoenix_live_view/Phoenix.LiveView.html#p...
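A hook along those lines might look like this (a sketch: this.pushEvent with a reply callback is real LiveView JS interop per [0], but the "delete" event name, the data-id attribute, and the { error } reply shape are assumptions about your own handle_event):

```javascript
// Sketch of a LiveView hook for an optimistic row delete: hide the row
// immediately, push the event to the server, and restore the row if the
// reply signals an error.
const OptimisticDelete = {
  mounted() {
    this.el.addEventListener("click", () => {
      const row = this.el.closest("tr") || this.el;
      row.hidden = true; // optimistic: gone before the round trip completes
      this.pushEvent("delete", { id: this.el.dataset.id }, (reply) => {
        if (reply && reply.error) row.hidden = false; // roll back on failure
      });
    });
  },
};
```

You'd register it in the hooks option of your LiveSocket and attach it with phx-hook="OptimisticDelete" on the delete button.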


> Remove the item client side

so, "just use JS"?

> In case of any errors, notify the user, refresh the state, etc.

so, "just use JS"?

every time you say "just use JS" you're diminishing the use case of liveview, because if i need so much js logic - why do i also need liveview when i could just use a framework/environment where i can share a codebase between client and server seamlessly and have full control?


You stated that you don't see how you could implement this workflow in LiveView; I've presented you a way.

As for "in case of any errors, notify the user, refresh the state, etc.": this would all be done server-side, and the client side would simply react automatically. The client-side code in this case would be minimal.

I don't think I'm diminishing anything. For quite a few years I was neck deep in React/Vue world. Now that I'm actively using LiveView, I can properly compare the differences between both approaches, cons and pros. For any new project, in the majority of cases I would pick Elixir with Phoenix LiveView instead of Elixir/Phoenix (backend) with React/Vue (frontend).


You’ve presented a workaround and a hack, tbh, not something natively supported, because the workflow doesn’t map to the LiveView model. Which is fine, but you have to be honest with yourself and acknowledge when stuff like that happens; otherwise you’re in for lots of fun down the line.

> For any new project, in the majority of cases I would pick Elixir with Phoenix LiveView instead of Elixir/Phoenix (backend) with React/Vue (frontend).

This could just be recency bias. New tech is always exciting, old tech is always linked to memories of all the issues you’ve had in the past.


I wouldn't say it's a hack, but I would agree it's not the standard way to do things in the LiveView world, exactly because latency is often an overstated or misunderstood issue. But if you want to do more, LiveView gives you the tools.

> This could just be recency bias. New tech is always exciting, old tech is always linked to memories of all the issues you’ve had in the past.

I still maintain some React/Vue apps and work with them on a daily basis, so it's not a distant memory.

I like choosing the right tool for the job. For example, I would still choose React Native over 2 different code bases for a mobile app for a small team that needs to move fast; for the 5% of cases where that wouldn't do, you would need to go native. I see the situation similarly with LiveView. It's hard to beat its productivity & power in 95% of use cases.


I did a deep dive into LiveView and this was my takeaway.

It's nice tech, but once you start introducing JS again to improve UX, you really start asking yourself why you didn't just build it with React in the first place.


You could just delete the row with Alpine or a 3-line JS hook if you wanted to; it's quick and easy. That sounds like a strange workflow though - it's generally better to make users wait for deletion.


isn't it funny that when you're trying to praise tech you like, all sorts of examples jump into mind, but when you try criticizing something you like - all that imagination vanishes and all existing examples can be dismissed as strange :)


I find it strange because that's not a behaviour I would use, but to each their own, it's the beauty of the web :).

If you really want to do it, you can add 3 lines to your project and that will work with any CRUD page you're building; I don't think that's unreasonable or difficult to do.

Edit:

Actually, thinking about it, if you just made a form for that delete button with phx-disable-with="" on the row, it would probably work straight away without any JS hook.


surely you recognize that there is a gap in functionality between liveview and fully fledged frameworks that provide more granular control over ui interactions?


Like which other frameworks? You can code that example feature you pointed out quicker than in React if you want to. You have all the control you want in LiveView.

If you want a delete button per row, no code is needed and the phx-disable-with will work out of the box, if you want a global delete button on the top which deletes multiple rows front-end first before acknowledgement (with checkboxes + delete like in Gmail), 5 lines of JS maximum in a hook and you're set.


> You have all the control you want in LiveView

that you can't even acknowledge that there is a gap in functionality between liveview, a fairly opinionated framework for server side rendering and fully fledged client-side frameworks tells me this is not going to be a productive conversation, so i'm out, bye


Have you even used LiveView? It's not opinionated in any way; you can do whatever you want with it. It gives you extra features to remotely change pages, but if you don't like having them you don't have to use them at all and can plug in your favourite JS framework if you want to (or use LiveView for parts of the app and not the rest).

I've worked for years with React and Angular, and I don't really miss anything with a LiveView-based stack. LiveView's features give you 90% of what you want out of the box, and for the rest it's fine having a bit of JS here and there to ensure a good experience.


What exactly are you thinking of in terms of latency becoming an issue under LiveView but not on normal requests?

Do you mean websites needing a full refresh because they lost their requests in some callback and no one implemented recovery across the 5 levels of callbacks, or something more specific?


Well, with LiveView you go "full server state" for everything you would normally use plain JS for - for instance, toggling a checkbox or collapsing a div.

Having latency on such low-level interactions might make the UI feel sluggish as a whole.


Yeah, certainly, but I'm not sure people are using it that way?

Most examples are there to show cool stuff you can do; they're not production-vetted. Just as most JS examples out there don't mean people should be publishing live credentials with their bundles.

I imagine in most cases one would leave everything that isn't behind a logged-in status as normal routes/pages (signin, landing, contacts, etc.). Or, if not, those would be things requiring a socket/real-time interface anyway.

For the interactions, I don't think you even need alpine.js. Plain setup on DOMContentLoaded, CSS dropdowns/collapsibles that are replaced on JS load, proxying LiveView DOM/morphdom events (if needed) so other components (even Vue, React, etc.) can listen to them, and CSS animations:

  import { setup_live } from "./phx/setup_live.js";
  import { setup_dropdowns } from "./interactivity/dropdowns.js";
  import { setup_collapsibles } from "./interactivity/collapsibles.js";
  import { setup_links } from "./interactivity/links.js";
  import { setup_inputs } from "./interactivity/inputs.js";

  function setup() {
      setup_live();
      setup_links();
      setup_dropdowns();
      setup_collapsibles();
      setup_inputs();
  }

  document.addEventListener("DOMContentLoaded", setup);


I went as far as having `onclick` handlers and global window functions. Complete heresy. Yes, it's not 100% JS free, but it's pretty low overhead.

Then LiveView is mostly for your admin dashboards and logged-in user views, where it makes it pretty easy to do real-time-feedback interactions/views and SPA-like navigation. Since you have proper auth and identification for the user, you can log them off, rate-limit them, block an account, and close their socket if needed.


Have you figured out any good way to have per-page JavaScript, where the JavaScript is only sent over the wire for those pages?


Off the top of my head, not really, but this would depend on a few things:

- Is it a vendor lib?

- It's not, but it's some particular file that's big enough that it doesn't make sense to include it in the root layout?

- It's neither, but it's functionality that can trigger multiple times, should run only once, and only on those pages because it can conflict? Or some variation of that?

I think they're all solvable, but what makes sense will depend on those factors, and also on how you're using LiveView (is LiveView only for logged-in users/some auth, can you set those on the live_view layout...).

But in some cases this is a problem in SPAs too, where you have to use a snippet to check if the lib has been loaded and, if not, add a script tag to the body, or load it through JS, etc.
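That lazy-load snippet usually looks something like this (a sketch for browser code; the global name and URL are hypothetical placeholders):

```javascript
// Only inject the script tag if the library's global hasn't been defined yet;
// resolves with the library either way.
function ensureLib(globalName, src) {
  return new Promise((resolve, reject) => {
    if (window[globalName]) return resolve(window[globalName]); // already loaded
    const tag = document.createElement("script");
    tag.src = src;
    tag.onload = () => resolve(window[globalName]);
    tag.onerror = reject;
    document.body.appendChild(tag);
  });
}
```

Pages that need the lib call ensureLib("SomeLib", "/js/somelib.js") before using it; pages that don't never pay for the download.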


You don't have to. You can totally use JS for these in LiveView.



Possibly many LiveView tech demos/projects by the community haven't given much thought to latency, but LiveView itself contains a built-in latency simulator [0]. Additionally, it can toggle classes on elements when you click them and turn them off again once an acknowledgment has been received from the backend [1]. Finally, you have JS hooks, through which you can implement any kind of loading indication you want on the frontend. So the tools are there; they just need to be used.

[0] https://hexdocs.pm/phoenix_live_view/js-interop.html#simulat...

[1] https://hexdocs.pm/phoenix_live_view/js-interop.html#loading...


One trick I remember using (~two years ago, so early LV) when handling click events was to put everything async/not needed for the reply in a spawn() function.

But yes, as soon as you're on the internet you'll often feel the delay if your app is interactive.

The problem is that it's a bit random, because the network and the VM performance are never totally linear.

I remember implementing a countdown (using a 1s send_after()) that would work fine most of the time, but sometimes there would be a hiccup: the countdown would stall a bit and then process the counter in an accelerated fashion, which was terrible from a UI point of view. So in the end I did it in JS, except for the update once the end was reached.
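The usual JS-side fix for that acceleration artifact is to derive the display from an absolute deadline on every tick, rather than decrementing a counter once per timer callback, so a late tick shows the true remaining time instead of replaying missed decrements (a sketch; names are hypothetical):

```javascript
// Remaining whole seconds until a deadline, clamped at zero.
function secondsLeft(deadlineMs, nowMs = Date.now()) {
  return Math.max(0, Math.ceil((deadlineMs - nowMs) / 1000));
}

// Drift-free countdown: each tick recomputes from the deadline, so timer
// hiccups can never queue up "accelerated" updates.
function startCountdown(deadlineMs, render, tickMs = 250) {
  const timer = setInterval(() => {
    const left = secondsLeft(deadlineMs);
    render(left);
    if (left === 0) clearInterval(timer);
  }, tickMs);
  return timer;
}
```

The server can still own the authoritative end-of-countdown event; the client only owns the cosmetic ticking.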


Minerva Project | Full-Stack Engineer and Product Manager roles | Remote or San Francisco, CA | Full-Time | https://www.minervaproject.com/jobs/

Come help build the Minerva Forum: an engaging live educational platform that pushes the boundaries of WebRTC and realtime web technology. You can see a video of Forum at work: https://www.youtube.com/watch?v=Fz9UV4eXbJ8 or read more about the approach: https://www.minervaproject.com/solutions/forum-learning-envi...

Work on challenging technology problems with a small, sharp, high-EQ team. Our product team is about 20 folks across the US, Spain, Norway, and Israel. Our tech stack is mostly Python/Django/DRF, React/Redux, and Backbone/Marionette. Our real time collaboration services are a combination of websockets, ShareDB, and WebRTC. We deploy to AWS with Terraform and Kubernetes (EKS).

You can read about our team and values here: https://www.keyvalues.com/minerva

