> Let's skip over "low-latency", because virtually all programming languages and VMs are "low-latency".
I believe the latency claims here are about running programs with many concurrent threads of execution. The Erlang VM (which Elixir runs on) uses preemptive scheduling: each process (a lightweight userspace process, not an OS process) gets a budget of work (counted in "reductions", roughly function calls; I'm simplifying here) before the VM preempts it and lets another process run. If you have a rogue thread in something like Node, that thread can eat up all your CPU time. In Elixir/Erlang, it can't.
> I'm not sure what "distributed" means here? Any app in most languages can be coded to be "distributed" both on the scale of within a single machine and at the scale of deploying multiple small instances of it. (eg Kubernetes, running it on multiple EC2s etc etc)
The BEAM has a bunch of the "distributed" bits baked in, so you (mostly) don't have to think about them when writing your program. If you have two machines, A and B, your program can run across both as if they were conceptually a single machine: processes on node A can send messages to processes on node B with the same primitives they use locally. This is somewhat different from most "distributed" applications.
In general, I find a lot of the patterns around fault tolerance and concurrency to be conceptually easier to work with than languages like Java, C++, etc.
For instance, in Elixir, you might structure the connection to your database as a supervisor managing a pool of 10 connection processes. If something corrupts one of those connections, the process simply dies; the supervisor sees that it died, spins up a replacement, and any pending queries the dead process was in charge of get re-run. You can still handle known exceptions in the process, but the idea is to crash fast and get back to a known-good state. The runtime was originally built for managing phone connections, so this design choice makes sense.
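To make the restart idea concrete, here's a rough Python sketch of what a supervisor does. In Elixir/OTP this behavior is built in (`Supervisor` with a `one_for_one` strategy); all the names below are made up for illustration, and a real Erlang process is far cheaper than a thread:

```python
import threading

def supervise(worker, max_restarts=5):
    """Restart worker whenever it raises, loosely mimicking an Elixir
    supervisor's one_for_one restart strategy. (In Elixir/OTP this is
    built into the runtime; this sketch only illustrates the idea.)"""
    log = []
    def loop():
        for _ in range(max_restarts):
            try:
                worker()
                return  # worker finished cleanly
            except Exception as exc:
                log.append(f"worker died: {exc}; restarting")
    t = threading.Thread(target=loop)
    t.start()
    t.join()
    return log

calls = {"n": 0}
def flaky_connection():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("corrupted connection")  # simulate corruption
    # connection healthy from here on

events = supervise(flaky_connection)
print(events)      # two restart events
print(calls["n"])  # 3: died twice, succeeded on the third run
```

The worker doesn't handle its own corruption at all; it just dies, and the supervisor gets it back to a good state. That's the "let it crash" philosophy in miniature.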
In terms of concurrency, I find the actor model (which other languages use too!) simpler to reason about than a process-forking model like Ruby's. Essentially, similar to Go, you get parallelism by composing concurrent pieces. Unlike Go, though, the unit of concurrency is the process (typically one module's code running in its own process) rather than an arbitrary function body. So code within a single Elixir/Erlang process runs in series, but you compose processes to run concurrently, passing messages back and forth.
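A minimal Python sketch of the actor idea: a thread that owns its state and serves a private mailbox one message at a time. This is only a conceptual stand-in (Erlang processes are vastly cheaper than threads, and the names here are invented), but it shows why the model is easy to reason about: nothing touches the state except messages.

```python
import queue
import threading

def spawn(handler, state):
    """Spawn an 'actor': a thread that owns its state and handles messages
    from a private mailbox, one at a time. Within the actor everything runs
    in series; concurrency comes from composing actors via messages."""
    inbox = queue.Queue()
    def loop(state):
        while True:
            msg = inbox.get()
            if msg is None:            # shutdown sentinel
                return
            state = handler(state, msg)
    threading.Thread(target=loop, args=(state,), daemon=True).start()
    return inbox

def counter(state, msg):
    tag, payload = msg
    if tag == "add":
        return state + payload
    if tag == "get":
        payload.put(state)             # reply via a queue the sender provided
        return state

mailbox = spawn(counter, 0)
mailbox.put(("add", 5))
mailbox.put(("add", 2))

reply = queue.Queue()
mailbox.put(("get", reply))
print(reply.get())  # 7
```

No locks anywhere: because the actor processes one message at a time, its state can never be observed mid-update, which is a big part of why this is easier to get right than shared-memory threading.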
In general, I would think about Elixir as a Ruby/Node/Python competitor rather than a Java/Rust/C++ competitor. You can very quickly build scalable web applications without a ton of code. My pitch for Phoenix is that it's essentially Rails but built 10 years later; so it has 10 years of learnings (+ solid concurrency support) but doesn't have 10 years of maintenance baggage.
I wrote this very quickly so it's possible some bits aren't clear/precise/etc and I would be happy to elaborate/clarify on some points later when I have time. But hopefully this was helpful. I would also endorse many of the links others have posted if you are interested in more details! Especially the YouTube talk and the Armstrong paper.
IgG4 is the class of immunoglobulin responsible for shifting the immune response to an antigen from attack to tolerance. That's good for false-positive "pathogens" like flower pollen, which cause seasonal allergies. But it's not the response you'd want when encountering SARS-CoV-2: if the viral particles are tolerated rather than cleared by your immune system, the virus is left uninhibited to multiply and continuously damage your body.
IgG4 response is how your body would typically treat things like allergens that are basically harmless and don't need a full immune response. Covid needs a full immune response, and an IgG4 response suppresses that. You don't want your body to treat a virus the same way it treats pollen.
The others did a good job with the technical details. The higher-level implication is that taking too many of the shots will appear to make you feel better in the short term but create long-term internal damage, as the immune system won't fight the virus as effectively (or at all) and will instead let it get on with replicating.
So it seems the shots convert short-term but temporary unpleasantness into long-term serious problems - at which point, of course, they will be classed as not vaccine-related because they didn't happen immediately.
As such the social implications of this discovery are more of the same. The population will continue to be split into camps that think all vaccines are perfect and reject any link with bad outcomes as not proven, denied by public health so it must be false. Public health bodies will continue refusing to break down incidence data by vaccine status, or will do so in fudged ways by redefining what "vaccinated" means. Other people will observe long term disparate outcomes between people who had lots of shots and others who had none, but if they try to speak about what they see they'll be shut down, told it's just anecdotes and not data and maybe fired. Polarization will continue to spiral.
Overall: this finding is bad, and the long term implications are bad.
It's pretty solid. You may have to roll something yourself for really esoteric APIs, but pretty much everything you'd probably want to use is supported already.
I think reading about (and looking at the code for) things you use, and trying to understand how they work under the hood, has been super useful: http://aosabook.org/en/index.html
Having smart people around to learn from is extremely helpful too.
Happy to chat more if you'd like. Just drop me a line: connor[at]opendoor[dot]com
I can't speak for the data science stack, but everything in the web stack is PostgreSQL (with the exception of Redis for background jobs and things of that nature).
My understanding is that this was in an Uber Black. He's using the nicest version of his product and the one that pays out the most to the drivers. Sounds about like what I would expect him to do.
Why not state your idea in one line as "We are trying to bring financial education to the masses in an interactive, fun way" instead of using the "Uber for X" style idiom? It's easier for a layperson to understand that way too.
Hmm, interesting point! The reason we did that: some time back, YC released a list of its graduating startups, and each company was described as something like "Uber of dog walking" - that's why we stuck with it! Point noted though!