> Let's skip over "low-latency", because virtually all programming languages and VMs are "low-latency".
I believe the latency claims here are about running programs with many concurrent threads of execution. The Erlang VM (which Elixir runs on) uses preemptive scheduling: each process (a lightweight userspace process, not an OS process) gets a budget of work, counted in "reductions" (roughly, function calls), before the VM preempts it and lets another process run. If you have a rogue thread in something like Node, that thread can eat up all your CPU time. In Elixir/Erlang, it can't.
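To make that concrete, here is a rough sketch (module and function names are just illustrative, and this is a demonstration rather than a benchmark): a process stuck in an infinite CPU-bound loop does not stop another process from being scheduled and replying to a message.

```elixir
# Sketch: a rogue, CPU-bound process does not starve a well-behaved one,
# because the BEAM preempts processes after a budget of reductions.
defmodule SchedulingDemo do
  def run do
    # Rogue process: spins forever, never yields voluntarily.
    spawn(fn -> spin() end)

    # Well-behaved process: waits for a message and replies.
    pid = spawn(fn -> echo() end)

    send(pid, {self(), :ping})

    receive do
      :pong -> IO.puts("echo process still responsive")
    after
      5_000 -> IO.puts("no reply (would indicate starvation)")
    end
  end

  defp spin, do: spin()

  defp echo do
    receive do
      {from, :ping} -> send(from, :pong)
    end
  end
end

SchedulingDemo.run()
```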
> I'm not sure what "distributed" means here? Any app in most languages can be coded to be "distributed" both on the scale of within a single machine and at the scale of deploying multiple small instances of it. (eg Kubernetes, running it on multiple EC2s etc etc)
The BEAM has a bunch of the "distributed" bits baked in, so you (mostly) don't have to think about them when writing your program. If you have two machines, A and B, your program can run on those machines as if they were conceptually a single machine. This is somewhat different from most "distributed" applications.
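As a rough sketch of what that feels like (the node names and host are placeholders; assume two named nodes were started with the same cookie, e.g. `iex --sname a` and `iex --sname b`), you can run code on the other machine and get the result back as an ordinary message:

```elixir
# Run from node :"a@myhost"; :"b@myhost" is a placeholder for your second node.
Node.connect(:"b@myhost")

parent = self()

# Spawn a function on the remote node; pids and messages are
# location-transparent, so the reply just shows up in our mailbox.
Node.spawn(:"b@myhost", fn ->
  send(parent, {:hello_from, node()})
end)

receive do
  {:hello_from, remote} -> IO.puts("reply from #{remote}")
end
```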
In general, I find a lot of the patterns around fault tolerance and concurrency conceptually easier to work with than in languages like Java, C++, etc.
For instance, in Elixir, you might structure your database access as a supervisor and a pool of 10 connection processes. If something corrupts one of those connections, the process simply dies; the supervisor sees that it died, spins up a new one to replace it, and re-runs any pending queries the dead process was in charge of. You can still handle known exceptions in the process, etc., but the idea is to get back to a good state quickly. The runtime was originally built for managing phone connections in telephone switches, so this design choice makes sense.
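Here's a deliberately simplified sketch of that shape (not how Ecto/DBConnection actually implement pooling, and the "re-run pending queries" part is omitted; `Demo.Conn` is a stand-in for a real connection): a supervisor watching 10 connection processes, each of which is restarted individually if it dies.

```elixir
defmodule Demo.Conn do
  use GenServer

  # One "connection" per process, registered under a hypothetical name.
  def start_link(id), do: GenServer.start_link(__MODULE__, id, name: :"conn_#{id}")

  @impl true
  def init(id) do
    # Imagine opening a real database connection here.
    {:ok, %{id: id}}
  end

  @impl true
  def handle_call({:query, sql}, _from, state) do
    # If the connection were corrupted, we'd just let this process crash;
    # the supervisor replaces it with a fresh one.
    {:reply, {:ok, "ran #{sql} on conn #{state.id}"}, state}
  end
end

defmodule Demo.ConnSupervisor do
  use Supervisor

  def start_link(_opts), do: Supervisor.start_link(__MODULE__, :ok, name: __MODULE__)

  @impl true
  def init(:ok) do
    # Ten workers, each with a distinct child id.
    children =
      for id <- 1..10 do
        Supervisor.child_spec({Demo.Conn, id}, id: {:conn, id})
      end

    # :one_for_one — if one worker dies, only that worker is restarted.
    Supervisor.init(children, strategy: :one_for_one)
  end
end

# Usage:
#   {:ok, _} = Demo.ConnSupervisor.start_link([])
#   GenServer.call(:conn_3, {:query, "SELECT 1"})
```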
In terms of concurrency, I find the actor model (which other languages use too!) simpler to reason about than a process-forking model like Ruby's. Essentially, similar to Go, you get parallelism by composing concurrent functions. Unlike Go, however, the composition is at the module level rather than within individual functions. So code within each Elixir/Erlang process runs sequentially, but you can compose modules (each running as one or more processes) to run concurrently and pass messages back and forth.
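A tiny sketch of that composition using `Task` (the `Fanout.pmap` helper is just illustrative): each call runs in its own process, and the results come back to the caller as messages.

```elixir
defmodule Fanout do
  # Run `fun` over each item in its own process, then await all results.
  def pmap(items, fun) do
    items
    |> Enum.map(fn item -> Task.async(fn -> fun.(item) end) end)
    |> Enum.map(&Task.await/1)
  end
end

# The four sleeps run concurrently, so this takes ~1 second, not ~4.
Fanout.pmap(1..4, fn n ->
  Process.sleep(1_000)
  n * n
end)
#=> [1, 4, 9, 16]
```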
In general, I would think about Elixir as a Ruby/Node/Python competitor rather than a Java/Rust/C++ competitor. You can very quickly build scalable web applications without a ton of code. My pitch for Phoenix is that it's essentially Rails but built 10 years later; so it has 10 years of learnings (+ solid concurrency support) but doesn't have 10 years of maintenance baggage.
I wrote this very quickly so it's possible some bits aren't clear/precise/etc and I would be happy to elaborate/clarify on some points later when I have time. But hopefully this was helpful. I would also endorse many of the links others have posted if you are interested in more details! Especially the YouTube talk and the Armstrong paper.