Hacker News
Elixir for cynical curmudgeons (alopex.li)
470 points by todsacerdoti on Aug 3, 2023 | 226 comments



What a terrific write up! I highly recommend the read.

I too had an epiphany when I realized Elixir was actually a Lisp. The language went from mildly interesting to "holy crap I think this could be my favorite language." Add on some wildly productive and pragmatic frameworks like Phoenix, Nerves, and increasingly Nx, and you've got a hell of a toolkit.

My biggest criticism of Elixir has been the difficulty of writing quick scripts, but that story has improved greatly over the last couple of years. I still turn to Ruby for scripts (especially since it's often present on systems and so easy to install), but this is a good reminder that it's time to try Elixir again for that. In some ways it may be better, such as dealing with dependencies. For example, I try hard to avoid using any non-standard libs with Ruby scripts so I can just use the system ruby and don't have to ship a Gemfile, install any gems, or mess with chruby/asdf. With Elixir that story looks even better, since the deps can be part of the script itself.
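For anyone curious, inline deps in an Elixir script look roughly like this (a minimal sketch; Jason is just an example Hex package):

```elixir
#!/usr/bin/env elixir
# Dependencies are declared inline; Mix fetches and caches them on first run.
Mix.install([
  {:jason, "~> 1.4"}  # example dependency, any Hex package works
])

IO.puts(Jason.encode!(%{hello: "world"}))
```

Run it with `elixir my_script.exs`; no Gemfile-style sidecar file needed.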


> I too had an epiphany when I realized Elixir was actually a Lisp.

Is it actually? Last time I looked at Elixir (granted, that was a while ago), the syntax for writing macros was different from the syntax for writing ordinary code, so it fails even that simple test; it doesn't seem to support homoiconic metaprogramming at all.

But again, I feel like I might be wrong here and I'm happy to be proven wrong.


Ya, it's not actually a lisp. I can't find the source, but José has stated that part of the idea was to give you lisp-like metaprogramming with a non-lisp syntax (sorry, José, if I have misquoted you).

Even the regular syntax is sort of lispish, though, if you remove the syntactic sugar. It's all macros and data structures.

    defmodule Foo do
      def bar do
        "baz"
      end
    end
is really:

    defmodule Foo, do: (def bar, do: "baz")
...which is lispish if you squint the right way.
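You can see the underlying data structure directly with `quote`, which returns the AST for an expression:

```elixir
# Every Elixir call is a three-element tuple of
# {name, metadata, arguments} under the hood.
quote do: sum(1, 2)
#=> {:sum, [], [1, 2]}
```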


A bit pedantic I guess, but that still uses some syntactic sugar (optional parentheses and keyword-list syntax). Removing all syntactic sugar would look like this:

    defmodule(Foo, [{:do, def(:bar, [{:do, "baz"}])}])
Even aliases (like Foo) are a kind of syntactic sugar for a specific type of atom (Foo is actually the atom :"Elixir.Foo").
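You can check this in iex:

```elixir
# An alias is compiled down to a prefixed atom.
Foo == :"Elixir.Foo"  #=> true
is_atom(Foo)          #=> true
```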


Ha, ya I meant remove the `do` sugar.


That’s just syntax sugar. Even the original Lisp was supposed to have sugar on top of S-expressions, although that never really happened.

https://en.m.wikipedia.org/wiki/M-expression


I think Mathematica has used a slight variant since 1988 (I've been using it since then; I've never used M-expressions, but they look awfully similar).


It's listed under the "Implementations" section of that wiki article.


You can write macros (as well as functions, actually) that accept AST and emit transformed AST back. It has quote and unquote. Turns out that it doesn't need homoiconicity to accomplish Lisp-level macro capability, which means it has all the power of a Lisp but with actual syntax (which is, like it or not, more appealing to many people) in a language with deep pattern-matching and completely immutable data guarantees.
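A minimal sketch of what that looks like (`unless_` is a made-up name here; Elixir ships its own `unless`):

```elixir
defmodule MyMacros do
  # A macro receives the AST of its arguments and returns new AST,
  # much like a Lisp macro, just written in non-S-expression syntax.
  defmacro unless_(condition, do: block) do
    quote do
      if unquote(condition), do: nil, else: unquote(block)
    end
  end
end
```

After `require MyMacros`, calling `MyMacros.unless_(false, do: :ran)` expands at compile time into the equivalent `if`.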


What is the actual failure? The macros are written with and by processing the built-in data structures.


> For exmaple I try hard to avoid using any non-standard libs with Ruby scripts so I can just use the system ruby, and don't have to ship a Gemfile and/or install any gems

The Ruby "story" for Gemfile dependencies in single-file scripts has gotten a BIT better since you possibly last looked at it.

It will of course necessarily involve installing gems still though. Ideally for your use case, I feel like you'd want to install them to a local (not system-default) location ... which is ordinarily possible with bundler, but oddly the maintainers seem to think it's an anti-use-case for inline bundler? https://github.com/rubygems/bundler/issues/7131. I dunno.

Anyway, I put this here not to convince you, your decision is reasonable, but just other info for other readers.


The problem with writing scripts on the Erlang VM is the slow startup time... it isn't suited for that. Although I love it for pretty much everything else besides scripts and executables.


Also it’s not the best for creating OS processes and/or interacting with stdio.


Yeah, this miffs me too (a half-second startup time, locally), which even has me looking at languages like Idris that can compile down to C and thus to an instant-start binary.


You can definitely get startup much faster than half a second if you tweak the start script and maybe start only one scheduler, etc.

I don't think it will be great without heroics, but a little work and I think you can get it down to < 100ms.


It needs a Babashka equivalent.


For me Elixir's Livebook has become a killer "scripting" tool.


Here's a great repo showcasing how to use it more like a quick scripting language. https://github.com/wojtekmach/mix_install_examples


Re Elixir for scripting, what's the point? Procedural Ruby, Perl, Python or bash have been around for decades and are exactly what you need for short sysadmin scripts. Elixir/Erlang's niche is massive lightweight concurrency, and it's the wrong tool for this domain.


The Enumerable APIs, regex handling, low-bullshit functional approach, and great REPL make it a pretty nice scripting language. I've done quite a few Advent of Code puzzles with it, and it always feels like such a nice multitool for chewing up the input files into workable formats.

It's not the first tool I'd reach for, I'm still a filthy bash hacker at heart, but I could definitely see using it for the right problem.

It's also just fun to write, which is valuable in its own way.


What you've described is functional-style/dry Ruby.


Well, no, what I'm describing is Elixir.

If your point is that the potential use cases overlap too much with Ruby, I mean, fine, write Ruby. I like Elixir more. I'm not picking either language for pure performance, but rather for ergonomic purposes, so we're into kinda subjective territory.


I often write little toolets in Elixir, just to hone skills and give myself an opportunity to explore the library, often after I've first written it in Python. I agree it feels a little misaligned, but it's nice to be able to use it for other reasons.


It's nice not to have to switch languages. I use multiple languages, but we're not infallible; it would be better if we could write it all in one perfect language that we know really well. Just explaining the desire; I've never touched Elixir.


> My biggest criticism of Elixir has been the difficulty of writing quick scripts

Curious what you find to be the biggest pain point(s) here. Is it runtime containment? Library access? Etc.


Is the size of the Erlang VM one such point? I think I remember reading it's about 500MB, and you won't find it installed by default on many systems.


Startup time is another. For example:

    $ elixir test.exs
    hello, world!

    $ hyperfine "elixir test.exs"
    Benchmark 1: elixir test.exs
      Time (mean ± σ):     471.1 ms ±   2.6 ms    [User: 187.4 ms, System: 83.1 ms]
      Range (min … max):   467.1 ms … 475.6 ms    10 runs
Compared to:

    $ ruby test.rb
    hello, world!

    $ hyperfine "ruby test.rb"
    Benchmark 1: ruby test.rb
      Time (mean ± σ):      36.3 ms ±   2.6 ms    [User: 26.0 ms, System: 8.4 ms]
      Range (min … max):    34.5 ms …  53.6 ms    53 runs
In practice, I've been able to get the ruby startup time to as low as 6ms - this is what I use for scripts running as keyboard macros, for example. I cannot practically use elixir for that.


Using escript, I am able to almost halve the startup time. But it's still ~4x slower than ruby.

    $ hyperfine "./derp"
    Benchmark 1: ./derp
      Time (mean ± σ):     241.0 ms ±   7.4 ms    [User: 146.9 ms, System: 170.8 ms]
      Range (min … max):   235.1 ms … 261.7 ms    11 runs


Huh, interesting. I bet for keyboard macros, it would work better by invoking a call to something that is already running to reduce startup time. Kinda like the emacs server.

BEAM should still be able to compile new code on the fly, for one-off scripts.


> keyboard macros [...] would work better by invoking a call to something that is already running to reduce startup time

Maybe, but that comes with two disadvantages:

- I would have to re-architect my scripts to work like that, and run a server with uncertain memory costs perpetually; and

- I would have to use something with faster start time to run the call on my keyboard macros, e.g. bash, which would mean writing bash instead of elixir/ruby and/or being clever.


Not even close. A Nerves (IoT) package with the Linux kernel, the Erlang/OTP VM, and our app code comes in at 21MB. See: https://nerves-project.org/


Most of my Phoenix apps sit under or around 100MB resident, and they load a fair bit more than a script probably would.


"This is the root of my curmudgeoniness: Magic is never worth it. 99% of the time, Magic breaks. And when it does, and someone inevitably asks me to help fix it, I do it by getting out the machete and chopping out all the happy shiny convenient shit until you can see what is actually going on. And when you dig through the Magic and try to figure out what the hell is actually going on, most of the time it’s either stupid and broken by design, or it’s not very magical at all and you’re just like “why didn’t you just say this in the first place?”

"Over and over again, it turns out that there is no such thing as Magic, and anyone who says otherwise is skimming over details that will bite you in the ass later on. This is fine for many purposes, because 90% of programs are small hacks and one-off tools and so “later” very seldom actually happens."

Oh, yeah. This. So much this. A dump-truck load of this.

Even better when everybody is using different Magic, or the guy who set up this Magic left two years ago and everything has been done by cargo-cult since.


Things I like about Elixir:

- concurrency is intuitive!

- Elixir is a two-for-one language. You can reach into Erlang std library as easily as you can that of Elixir. So, depending on your work, you get a chance to discover a wide range of goodies that come for "free". Examples include gen_tcp, gb_tree, etc.

- You can remote into a running virtual machine in production, a pod or whatever, and have access to a REPL that lets you inspect state, manage "processes", and manually invoke commands to troubleshoot issues. The observer, and its TUI counterpart, are quite useful too.

- everyone in the community writing libraries uses the standard approach to emitting telemetry, saving time and effort for users to take what is available

- fast compile times

- hardest working BDFL (although this presents risks, too), who also has a pleasant, respectful disposition

- the community is respectful
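The second point is easy to show; Erlang's standard library is one qualified call away (using `:gb_trees` here as an example):

```elixir
# Erlang modules are just lowercase atoms, callable directly from Elixir.
tree = :gb_trees.insert(:answer, 42, :gb_trees.empty())
:gb_trees.get(:answer, tree)
#=> 42
```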

Things I don't like:

- dynamic language leads to pattern matching or type mismatch errors at runtime that could have been caught at compile time

- Elixir Phoenix doesn't have a large community, nor much up to date reference material, or literature. If you're not experienced with Front-end dev, it's easier to onboard with a modern SPA framework like React.

- supply of Elixir talent exceeds demand, although this isn't unique to Elixir (e.g Rust)

- slowly growing ecosystem, relatively speaking


What the documentation for Elixir and its big libraries (Ecto and Phoenix) lacks in quantity, it exceedingly makes up for in quality. Who needs a bunch of garbage blog posts about how to do xyz, when the official docs painstakingly lay out not only what you need to know, but also present it in a readable and discoverable way?


In my experience, the documentation is great for most of the major libraries and the common path.

Once you get away from those, things get sparse. Several really common libraries have glaring omissions in (or entirely missing) module docs, and others just barely cover the common web use case.

It’s not uncommon for me to have to open the source code to try and understand what some function actually does.

Elixir is still my language of choice and has a better docs ecosystem than most languages, but the community demonstrates a bit of a blind spot here as well.


I generally agree, but to play devil's advocate: some people learn better by example, and having thousands of (admittedly low quality) blog posts out there makes it more likely that someone will have done something very similar to what you're trying to do.


> type mismatch errors at runtime that could have been caught at compile time

It's still early, but there are plans to introduce a type system for Elixir:

https://elixir-lang.org/blog/2023/06/22/type-system-updates-...


I'm eagerly looking forward to the research project to add set-theoretic types to the language, and hoping it works out. It will help replace tooling like Dialyzer in many cases (which many people love/hate).

It will also have benefits for LSP feedback. It could also lead to more information being passed to the new BeamAsm JIT compiler for more compile time optimizations and faster execution.


If you're a fan of the ecosystem, but not of dynamic types, there are statically typed languages on BEAM, eg Gleam (https://gleam.run/)


The changing code in production with a REPL reminds me of Pharo. Though, with Pharo you get a whole IDE :D

I also feel that Pharo has too small of a community. The ESUG conference is coming up soon though. It's good to see they're coming together.

Why is concurrency intuitive?


There's no function color. The concurrency patterns are well defined, documented and discussed. You don't need to sit in a chat room asking for help from a benevolent core team of maintainers.

Granted, I did learn the hard way about introducing cycles, but the error messaging led me to the root of the problem quickly.
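A sketch of what "no function color" means in practice: any plain function can spawn a process and exchange messages, with no async/await split:

```elixir
# spawn/send/receive are ordinary expressions, usable anywhere.
parent = self()

spawn(fn -> send(parent, {:result, 21 * 2}) end)

receive do
  {:result, value} -> value
after
  1_000 -> :timeout
end
#=> 42
```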


> supply of Elixir talent exceeds demand, although this isn't unique to Elixir (e.g Rust)

Is this true?


At my current company we’ve many times had to hire and train people who had no elixir experience. If anything I’d say the demand for experienced people exceeds supply.


As a counterpoint, nearly all the good senior engineers with Elixir experience I know have accepted non-Elixir jobs in the past few years and are still in them.

Every time they go on the market for an Elixir role, they cannot find good jobs, and the few good ones reject them.

As I regularly point out, I have hired whole Elixir teams in 2 months multiple times in the past. If you cannot find us, the problem is that your hiring process and work practices are pushing us out, not that we do not exist.


Talent and experience are not the same thing, and I would say experience is a lot more correlated to demand of talent than to supply of it. More demand means more opportunities to get experience.


No, I am lying. Most of the jobs out there are for Elixir engineers, and managers are struggling to fill vacancies.


Why be so unnecessarily subversive?


Elixir, beyond the language itself, has wonderful developer tooling that's all written and configured in the language. It makes me feel really comfortable because it's all right there. You don't need external scripting languages. For some programming, you don't even need external libraries. It all feels very OS-like, not to mention the OS-like nature of the BEAM.

If you're on the fence on Erlang or Elixir, watch The Soul of Erlang and Elixir by Sasa Juric: https://youtu.be/JvBT4XBdoUE

If it doesn't make you excited about a language, I don't know what will. It is an astounding feat that the Erlang team at Ericsson were so ahead of the curve and remain so today.


I absolutely love Erlang. It’s a succinct, powerful, amazingly well-thought-out system design with a VM that is very complementary.

I just can’t get past all the extra verbosity of Elixir, but this writeup gives me hope that someday I will.


Erlang does have that nuts and bolts simplicity going for it, for sure. If it allowed rebinding on variable names and had pipelines, it would be very attractive to me. Elixir comes with a Ruby-esque syntax which I don't personally prefer, but I have gotten used to it. The built-in formatter helps out greatly there.

I just go all in on Elixir module design just like I would in an ML dialect. So structs, module types, custom types, typespecs, docs, Credo turned up to 11, run Dialyzer, etc. So I embrace some of the verbosity. Some don't prefer the style, but those are all features already there, and for me, it helps bridge the gap for Elixir being dynamically typed. Although there are developments coming to give it a static type system.


> `if` is a macro.

> `def` is a macro.

> `defmodule` is a macro.

> It’s all macros. It’s ALL MACROS! ALL THE WAY DOWN! AAAAAAAAAA AHAHAHAHAHHAHAAAA!

Also see the definition for `defmacro`, the macro for defining macros, which is defined using... defmacro

https://github.com/elixir-lang/elixir/blob/v1.15/lib/elixir/...


Always glad to see another person finding their way to Elixir!

IMO the biggest tripping point for people (including, clearly, this author) is the freedom. They said that the "Big complicated project structure" was a stumbling point - but there is no structure! You can actually do it any way you want. But there are conventions. Mix suggests that you lay out your project in a certain way (and most Elixir projects are laid out that way) - but you are not required to by the language.

Nearly everything in the language is a convention (including the test / dev / prod run modes). This applies to Erlang as well. It has standard approaches, but you don't /have/ to follow them. I frequently get confused about the way libraries tend to approach something (do you have to do it this way? Is there a requirement?) and the answer is generally "no, but the community tends to think this is the best way to do it and so people generally do it this way."

A great example of everything being a convention: structures aren't a language feature. They're maps with special keys and Elixir library support. The upside and downside to this is that you could actually do almost anything with them - if you have the time and the skill to implement the system. MANY things are like this when you get into it.
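A quick illustration; under the hood a struct is just a map carrying a `__struct__` key:

```elixir
defmodule User do
  defstruct name: nil  # structs are defined as maps with default keys
end

user = %User{name: "Ada"}
is_map(user)     #=> true
user.__struct__  #=> User
```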


If you want a mindbending experience of the article's "it's macros all the way down", here is one for you in glorious 3 minutes and 39 seconds: https://www.youtube.com/watch?v=x5W3CmZJMLs


That's pretty cool!


I'm also looking at Elixir and Phoenix (coming from Python/C++), and it looks cool. Doing realtime features with Python requires something like Tornado / asyncio, which isn't ideal IMO.

I'm all for the immutable / message-passing architecture in the large, but I wish that you could just write Python-style code within processes. The VM copies all data structures at process boundaries anyway.

I think that language would be very popular!

I wrote Lisp 20 years ago, before Python, but now it feels a little annoying to "invert" all my code for what seems like no benefit.

e.g. from https://learnxinyminutes.com/docs/elixir/

    defmodule Recursion do
      def sum_list([head | tail], acc) do
        sum_list(tail, acc + head)
      end

      def sum_list([], acc) do
        acc
      end
    end
I would rather just write

    def sum_list(L):
      result = 0
      for item in L:
        result += item
      return result
But then also have the Erlang VM to schedule processes and pass immutable messages.

There is obviously a builtin in Python that does this, and I'm sure there is a better way to do this in Elixir. But I think the general principle does hold.


Have you seen Elixir list comprehensions?

  iex> for n <- 1..4, do: n*n
  [1, 4, 9, 16]
Or for your example:

  iex> l = 1..3 # equivalent to [1,2,3]
  iex> Enum.reduce(l, fn item, acc -> acc + item end)
  # or
  iex> Enum.reduce(l, &(&1+&2))
  # or the built in
  iex> Enum.sum(l)
  # all return:
  6
https://elixir-lang.org/getting-started/comprehensions.html
https://hexdocs.pm/elixir/1.12/Enum.html#reduce/2
https://hexdocs.pm/elixir/1.12/Enum.html#sum/1

None of which take away from your point though: it's definitely a different mental model than you use for Python/Go/Javascript.


Yes though Python has list comprehensions too :)

I should have picked a different example -- the idea I was trying to get across was that mutability within a loop is useful.

Here’s a similar question I asked with Python vs. OCaml and Lisp. Short-circuiting loops seems to cause the code to become fairly ugly in functional languages

https://old.reddit.com/r/ProgrammingLanguages/comments/puao1...


You can reduce with list comprehensions in Elixir, but it's uncommon to see:

    sum_list = fn list ->
      for item <- list,
          reduce: 0,
          do: (sum -> sum + item)
    end

    sum_list.([1,2,3])
Other comments are correct that `Enum.reduce/2` is probably better:

    Enum.reduce(list, &(&1 + &2))
Obviously you don't have the flexibility / mutability you refer to in other comments with either of these, but you can always put more in your accumulator:

    get_sum_and_update_count = fn list, map ->
      for item <- list,
          reduce: {0, map},
          do: ({sum, map} -> {sum + item, Map.update(map, item, 1, &(&1 + 1))})
    end
(A contrived function that returns a list sum and an updated map with the count of the values found in the list)


Built ins aside, I find most uses of recursion to be better served by Enum.reduce.


Yeah I knew someone was going to say that (see last sentence), but a for loop is more powerful and general than reduce().

Mutability within the loop is useful, and it can be controlled by processes and message passing.


I assumed you meant something like Enum.sum that completely trivializes the code. In comparison, generic, higher level functions are pretty fundamental to FP.


The problem with a for loop is that it is inexplicit about what from the outer scope can be mutated. Which isn't a problem for small functions, but I have definitely seen monster for loops in professionally written Django code with ~500 lines in the function.

Besides being in that magical sweet spot (executes imperatively but reads declaratively), a reduce forces you to declare ahead of time what things can be carried over and mutated between each iteration.

> it can be controlled by processes and message passing.

DO NOT DO THIS YOU WILL REGRET THIS SIMCITY MEME


I'm not sure "powerful" is the right word here, it's more a matter of "what certain people are used to". If you want a counter or whatever, you can use a tuple to give yourself multiple accumulators. You could argue it's uglier, but I wouldn't necessarily agree. It's something that is easy to get used to if you remove the "but I'd rather just do it this way" mindset.
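For example (a sketch): summing and counting in one pass with a tuple accumulator:

```elixir
# The accumulator is a {sum, count} tuple, destructured on each step.
{sum, count} =
  Enum.reduce([1, 2, 3], {0, 0}, fn item, {sum, count} ->
    {sum + item, count + 1}
  end)
#=> sum is 6, count is 3
```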


You might want to look into Ray. It's a Python library that lets you deploy actors (classes) and tasks (functions), abstracting away the management of resources, message passing, automatic repairs, etc. that comes with deploying an arbitrary number of intercommunicating processes in Python.

Admittedly, I haven't worked with Erlang or Elixir, so their solution to this might be much more usable. Ray is aimed primarily at ML use-cases, though it is quite general.


Is Enum.reduce not sufficient? Even better, Enum.sum?

https://hexdocs.pm/elixir/1.12/Enum.html#reduce/3


Yep. The only problem with elixir macros is that people look at the macros that are in core++ (exunit, plug, phoenix, ecto) and decide it's a good idea to dsl-up everything.

The biggest offenders IMO are commanded, Tesla, absinthe, exmachina and now most recently ash. Then there's tons of tiny libraries that litter hex.pm with macro-heavy nonsense.

At least the core++ libraries make macros whose syntax/keywords match something outside elixirland that you'd have to know anyways (e.g. 'get' from plug, all of SQL from ecto), but the big offenders are making up clever names for things, which adds mental overhead to literally everything you do, building crazy complicated compilation pathways (especially for overeagerly lazy stuff), or writing function names from whole cloth that make refactoring or searching your codebase a nightmare (or more than one). E.g. if you're working in a domain where non-proud camel case is the convention, don't automatically snake-case it.


I personally don't have any problem with heavy macro-use in libraries, I feel like that is where they shine. So long as the documentation is good, I don't really care how the library itself is implemented. I would much rather have the cleaner API in many cases. I know it can be taken too far, though. Some funky stuff can happen in Phoenix's router, though a lot of it is actually to reduce cyclic compile-time dependencies as the router is a place where these necessarily happen.

Macros in business code, however, are pure evil. I feel like anyone who spent any time in Rails over the past decade has learned that lesson.


Yep. I wish `pipeline`, `scope` and `resources` in Phoenix router and also all the `use MyApp.Web, :x`, would just die in a fire.

Thank goodness they got rid of views though I still am annoyed that the default path resolution on templated content isn't explicit.

The crazy thing is that the most amazing part of phoenix, liveview, is almost no macros.


I never use `resources`, though I have no beef with `pipeline` or `scope`; I haven't thought too much about it. It is a little weird getting used to which modules are getting auto-qualified and which aren't, but I'm long past that. And that is actually what is "fixing" the cyclical dependency issue.

What's wrong with the `use MyApp.Web, :x`? You can easily get rid of it as it's just generated boilerplate. It's a nice little pattern to allow all the different things to share the view helpers. If you'd like to be extra explicit, though, just delete the file and include everything manually.

I'm indifferent on views because I never really used them, haha.


I personally would prefer something like:

    defmodule MyApp.Web.Router do
      use MyApp.Router
    end
over

    defmodule MyApp.Web.Router do
      use MyApp, :router
    end
but that’s a preference.


You can do that if you want. Just remove `def router` from `MyAppWeb` move its contents over to a custom module:

    defmodule MyAppWeb.Router do
      defmacro __using__(_) do
        quote do
          use Phoenix.Router, helpers: false

          import Plug.Conn
          import Phoenix.Controller
          import Phoenix.LiveView.Router
        end
      end
    end
But why the dislike for passing a second argument to `use`? It keeps everything together pretty nicely.


Mostly because we’re using an umbrella at the place of work, and we absolutely don’t want those boundaries crossed.


Oh! Well I don't know your setup but you could always add a top level `PhoenixHelpers` namespace (or a better name) that holds these generic types of things that can be shared between apps. Though again I really don't know your set up. Like maybe you have an app whose router doesn't need `Phoenix.Controller` so then this wouldn't make sense.


> I wish `pipeline`, `scope` and `resources` in Phoenix router and also all the `use MyApp.Web, :x`, would just die in a fire.

How would you prefer it worked/looked?

> I still am annoyed that the default path resolution on templated content isn't explicit.

What do you mean by this? In the new 1.7 approach to views, you explicitly set the path in each view using `embed_templates` (if you're not defining your HEEx directly within the view module itself.) What part isn't explicit?


> What do you mean by this? In the new 1.7 approach to views, you explicitly set the path

For example, if you wrote a liveview called FooView in /path/to/my_liveview, a default render() will look for /path/to/my_liveview/foo_view.html.heex

(Also note auto-snakecasing)

Btw, this is still a different semantic if you do the deadview equivalent, which will try to do a path resolution in /templates.

I guess these are minor complaints but I would much prefer they use the same default strategy and, say, pass the path in `use Phoenix.Liveview`


> This is still a different semantic if you do the deadview equivalent, which will try to do a path resolution in /templates.

Not true in 1.7 - views* don't have any kind of "default" template resolution; you have to explicitly state the template path with `embed_templates`. That is if you're not just defining your HEEx directly within the view module itself, as I already mentioned.

*By "views" I mean the new style that uses `use MyAppWeb, :html`, not the old style you describe which is now (in 1.7+) only available as the external dependency `phoenix_view`. The new style of module is still called a "view" by the docs: https://hexdocs.pm/phoenix/request_lifecycle.html#from-endpo...


> So long as the documentation is good, I don't really care how the library itself is implemented.

I don't have experience in Elixir, but in my general experience what you say is fine until you need to debug something going wrong (or something you don't understand and _think_ is going wrong) in a library, which always happens eventually, no? Or until you want to PR a feature, etc.


Your intuition as a non-Elixir user is absolutely correct. However, it's worth saying that as a working dev, there's almost never a problem in the core++ libraries.

Edit: lol, looking at the log in the Elixir Slack, one of the most recent posts is exactly someone trying to debug something too magical in one of the aforementioned non-core++ libraries!!


Those are fair points. In terms of PRing, though, I do think that people are a little too "afraid" of macros. It sometimes feels like people won't learn them out of principle, which is a shame, as they aren't that complicated. Though anyone trying to navigate nested macros definitely has my sympathy.

No matter how you slice it, however, macros are a massive part of what makes Elixir Elixir. A large portion of the language's "keywords" are just macros, and they reduce a lot of the boilerplate Erlang forces you to write.


EEx does macros well. Note that EEx.function_from_x makes you declare the name of the function it instantiates, so it's searchable.

GenServer is kind of a "bad example" of a macro. It creates a totally hidden function. Though I guess to be fair 99.9% of the time you really shouldn't be writing genservers in elixir.


>It creates a totally hidden function.

Are you talking about child_spec/1? `use GenServer` honestly provides some pretty good QoL enhancements over `@behaviour GenServer`. I don't know when or if there was a change to recommending `use GenServer`, but I was pleasantly surprised when I started using it. Especially because I'm not writing new GenServers all the time, having guard rails is nice to avoid being slowed down by remembering all the boilerplate: init/1, handle_x/y, child_spec/1

It's even nice enough to give additional information about why it's crashing, so you don't get confused about why e.g. the handle_call function isn't working. I make this mistake almost every time I write a new GenServer:

    def handle_call(:ping, state) do
      {:reply, :pong, state}
    end
Before, you would just get this meaningless gen_server stacktrace that basically just says "it crashed lol". Now you get the additional hint of "hey you didn't define handle_call/3" when it crashes.
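For reference, the correct callback takes three arguments; the `from` tuple is the one that's easy to forget (minimal sketch with a made-up `Ping` module):

```elixir
defmodule Ping do
  use GenServer

  def init(state), do: {:ok, state}

  # handle_call/3: request, caller ({pid, ref}), and state
  def handle_call(:ping, _from, state) do
    {:reply, :pong, state}
  end
end
```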


Last I checked you need to do init/1 by hand. Honestly being more transparent about creating child_spec like `GenServer.create_default_child_spec` would be great. The rest of use GenServer is pretty good, IMO (this is why I wrote 'kind of a bad' macro). I normally hate default implementation of functions but defaulting with an error saying "you forgot this" is reasonable.

Edit: just checked, it warns if you don't... I have compiler warnings as errors on all my personal projects, so that's why I thought you had to do it manually.


> 99.9% of the time you really shouldn't be writing genservers in elixir

Woah! Why is that?

I've written at least one GenServer in nearly all the Elixir projects I've worked on. They seem like one of the base building-blocks to me. Also, if you squint a bit, you'll see that many libraries you're working with are essentially exposing a GenServer interface, with a few extra features.


The only thing I can think of is when developers overuse GenServer without realizing it involves another process and serializes your requests (behind a queue, even) into its mailbox. That may be desirable for some, but in-process handling can sometimes be the better choice.


I've never written a genserver in any professional project I've worked on.... Well, except maybe for a trivial 5-line wrapper that enables a global ets table.

> Also, if you squint a bit, you'll see that many libraries you're working with are essentially exposing a GenServer interface, with a few extra features.

I never said you shouldn't be using GenServers. I said you shouldn't write GenServers.


> I've never written a genserver in any professional project I've worked on

Wow, that's quite surprising! We use GenServers quite extensively at work. The most interesting usage is probably this application, which uses GenServers to model the state of real-world hardware (in this case, transit stop countdown clocks): https://github.com/mbta/realtime_signs/blob/main/lib/signs/s...

Curious what you use instead of GenServers. Plain processes? Agents? Something else? GenServers are probably overused but I still reach for them when I need a concurrent bundle of state with a specific interface and behavior.

I think we're generally trying to write more GenStages than GenServers at the MBTA these days, but that's mainly because we're moving towards being event-driven. GenServers still have their place.


I use other people's GenServers. Never agents. Tasks for concurrency (but almost never those either, just Oban). Ets for stateful datastore.
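That global ets wrapper really can be tiny; a minimal sketch (table and key names are made up):

    # create a named, public table once at startup
    :ets.new(:cache, [:set, :public, :named_table])

    # any process can write and read directly, no GenServer mailbox in between
    :ets.insert(:cache, {:user_count, 42})
    [{:user_count, count}] = :ets.lookup(:cache, :user_count)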

Pedantically I guess gen_statem isn't a genServer, but I'm generally referring to the pattern, which gen_statem is.

Mostly webdev, so, that stuff is all taken care of for me.

MBTA is modeling reactive systems that exist in the real world, so yeah, that's a place where i would be surprised if you didn't use genservers, though I suppose you could just use a database-backed state machine. Surprised you use GenServers instead of gen_statem (though you might have been, as I was, less than pedantic)


Ah I see. Yeah, Phoenix definitely gives you a lot out of the box. I think for webdev it's probably enough. This is a pretty straightforward Phoenix app for example, only uses a couple GenServers: https://github.com/mbta/arrow

The question of when to introduce databases is a really interesting one and also one I'm still coming to terms with a few years into writing Elixir full-time. Many of our apps don't use them at all, but I think at least a few could benefit from more structure and durability in their datastores.


I see after I posted you edited to mention Agents, Tasks, Oban, and gen_statem.

What do you use gen_statem for? I'll admit its intended use-case has always seemed pretty narrow to me, but maybe I just don't understand it very well.


I don't use gen_statem, it's clunky as hell, does a weird thing with response aliasing, and it's even harder to read than GenServer. Plus I don't need it.

I understand why the callbacks in gen_statem are so messed up though: they want you to be able to organize your code by state. Since Erlang and (weakly) Elixir require you to group handle_x callbacks linearly in the module, you can't group by state unless you have a `handle_anything`. It's all so messed up :(


Are you actively trying not to use GenServers in your application code or is it just that you have a set of 3rd party dependencies that remove the need?


Yes


There's no reason not to write GenServers if you need them. Just don't use them as a class replacement :) But if you need to model some simple runtime state or concurrency, that's what they're there for.


Use Tasks for simple concurrency; José has built an awesome module for us.
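For anyone curious, "simple concurrency" with Task is about this much ceremony (toy example):

    # run each computation in its own process, then collect results in order
    tasks = Enum.map([1, 2, 3], fn x -> Task.async(fn -> x * x end) end)
    Enum.map(tasks, &Task.await/1)
    # => [1, 4, 9]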


Oh I agree! I have a couple of gen servers acting as very simple message queues—the kind that don't matter if they are nuked during a deploy.


> Also, if you squint a bit, you'll see that many libraries you're working with are essentially exposing a GenServer interface, with a few extra features.

Like LiveViews :)


Yep this is definitely a problem, but fortunately the culture/community around Elixir generally tends to frown on unnecessary macros. I heard the saying, "if it can be a normal function, it should be" many times, which pleases me.

Overall though this isn't something I've generally run into too much with Elixir.


> Yep this is definitely a problem, but fortunately the culture/community around Elixir generally tends to frown on unnecessary macros. I heard the saying, "if it can be a normal function, it should be" many times, which pleases me.

While Elixir people usually say this, they don't at all follow it in practice because of the terrible example set by all the libraries. Phoenix is especially egregious and I can't believe the grandparent didn't bring it up.

Looking at Clojure's solution that is (as far as I can see) very popular it makes me annoyed at the relaxed attitude towards macros in Elixir and how bad it actually is:

Elixir and `Plug.Router`:

    defmodule MyRouter do
      use Plug.Router
    
      plug :match
      plug :dispatch
    
      get "/hello" do
        send_resp(conn, 200, "world")
      end
    
      forward "/users", to: UsersRouter
    
      match _ do
        send_resp(conn, 404, "oops")
      end
    end
Clojure and `reitit` (https://github.com/metosin/reitit):

    (def router
      (r/routes
        [["/hello" {:get (fn [r] {:status 200 :body "world"})}]
         ["/users" {:name :users
                    :router users-router}]
         ["*" {:get (fn [r] {:status 404 :body "oops"})}]]))
basically everything in the Elixir case is a macro and non-standard syntax, whereas literally all of the components of the Clojure case are just standard data types, and yet it manages to be more readable.


> basically everything in the Elixir case is a macro and non-standard syntax, whereas literally all of the components of the Clojure case are just standard data types, and yet it manages to be more readable.

“Readable” is mostly a matter of “fits the grooves already worn in my brain”; to me, the Elixir there nonstandard-macro-based-syntax-and-all, is vastly more readable than the Clojure.


> “Readable” is mostly a matter of “fits the grooves already worn in my brain”; to me, the Elixir there nonstandard-macro-based-syntax-and-all, is vastly more readable than the Clojure.

I'll admit to being able to read Lisp syntax without choking on the parens, but I don't even use Clojure (or any other dialect that uses special syntax for the different collection types) and I think it's way more readable and malleable (you can literally do whatever you want that makes sense to this data and it'll work, meaning you're free to write whatever middleware you want that takes and produces the expected data).

I've used Elixir since 2015 so I have no problem actually reading the syntax at all; I just think it's factually much worse than the Clojure example because it's basically just a bunch of macros you can't do much with and lots of implicit behaviors.

This also applies to Elixir's greater position in the BEAM ecosystem. I think the idea was that Elixir was going to be a boon to Erlang and other languages in there over time but I think it's demonstrably pretty useless as a BEAM citizen except for some of the patches they've submitted; most big Elixir libraries can't even be used outside of Elixir because they're full of macros and don't even provide interfaces with just functions.


> I think the idea was that Elixir was going to be a boon to Erlang and other languages in there over time but I think it's demonstrably pretty useless as a BEAM citizen except for some of the patches they've submitted

Not sure whose idea it was but I would say this is a very inaccurate take. Of course, we have taken much more from Erlang than given (that's expected from hosted languages), but I personally sit on more than 100 PRs to Erlang/OTP. I made binary matching several times faster by using SIMD. I improved the compiler performance (including Erlang programs) by (conservatively) 10-20%. I contributed compiler optimizations to reduce memory allocation. There is a now faster sets implementation which I contributed. Erlang/OTP 26 ships with faster map lookups for atom keys based on a pull request I submitted. The new logger system in Erlang takes ideas from Elixir logger. The utf-8 string handling in Erlang is based wholesale on Elixir’s (and extended for lists). And this is in no way a comprehensive listing [1].

And this is not counting everyone else in the Elixir community who contributed and those who are now actively working on Erlang tooling and were on-boarded to the ecosystem via Elixir. The Erlang ecosystem now has a package manager [2], used extensively by both languages, rebar3 and Mix, all implemented and powered by Elixir.

And then there are efforts like the Erlang Ecosystem Foundation, which is made of developers and sponsored by companies working with both languages, and it delivers important projects around observability, documentation, and more. For example, you can find several Erlang projects now using ExDoc for documentation, such as Recon [3].

That’s my take. What are your demonstrations that we are pretty useless for the BEAM ecosystem?

1: https://github.com/erlang/otp/pulls?q=is%3Apr+author%3Ajosev... 2: https://hex.pm/ 3: https://hexdocs.pm/recon


Hell, I could add more. The current best JSON lib in Erlang is a literal copy of the Elixir Jason one (called out as such in its readme).

My hyperloglog Erlang lib is one of the most advanced in any language (behind what Otmar Ertl does for Java), and I come wholesale from Elixir.

I have my own code, as an Elixir person, in OTP, for the formatting of float to string. The `maybe` construct was only possible because Elixir's `with` showed the way.

I could probably keep going.


Elixir is a fantastic effort, and the criticism the GP comment made is in no way justified. I'm sorry you are taking flak as an open source programmer for something so far from the truth.

There are lots of silent fans of your teams' work. Please keep it up, and please feel proud of your collective outputs!


Elixir was the kick that Erlang needed for a long time.


Honestly, I don't care if we were a kick or not :) but saying we are "demonstrably pretty useless" is completely unfair and bordering on being a bad faith argument.


> bad faith argument

It is. That they have not replied to any of your responses shows they don't have much argument.

> I just think it's factually much worse than the Clojure example

I mean, using "factually" in a subjective opinion like this is just ridiculous.


> most big Elixir libraries can't even be used outside of Elixir because they're full of macros and don't even provide interfaces with just functions.

I was just reading the documentation, where it says "remember that macros are not your API", and smirking. If you write macros and things depend on them, such that if you change the macros, things will break, macros are your API. Syntax is API.


Sure. But there were improvements to the ecosystem. Telemetry is a great example, though they ended up porting it to Erlang so that both Erlang and Elixir projects can use it.


Maybe it's just down to familiarity, but I find the Phoenix example more readable than the Clojure one. To each their own I guess.


`conn` is a magic argument, isn't it? Like an implicit argument to this block of code?

That is one thing that really bugs me: I can't automatically see where the arguments are coming from. I realize it makes code a little more verbose, but a requirement around explicitly showing the dependencies used is much more useful when you're refactoring, for instance, or scaling code.

Not a hard requirement by any means, sure, and it definitely doesn't mean something won't scale, as it were. However, I am constantly frustrated by languages that allow for implicit scoped dependencies/arguments.


Yes, it’s a bit of a magic argument, but it’s the only unhygienic variable introduced with a Plug dispatch:

https://github.com/elixir-plug/plug/blob/main/lib/plug/route...

Everything else is explicitly passed, AFAICT.


Actually I think plug is mostly fine, since it only brings in one macro (plug) that isn't an HTTP verb or common HTTP concept (e.g. `forward`).

My huge beef with plug is that it creates an implicit conn variable in all of the plug bodies.

But yeah, phoenix is a combination of good choices and some real stinkers, and I did mention it.

Edit: I forgot about match. match "isn't great".


>My huge beef with plug is that it creates an implicit conn variable in all of the plug bodies.

What do you mean? That only happens if you use Plug.Router; otherwise, you have to define a call function that takes the conn as the first argument. You can even go as far as explicitly calling the init functions for each Plug manually. It's init(opts), call(conn, opts) all the way down.
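Right, outside of Plug.Router the explicit shape is just the two-callback contract; a minimal module plug looks something like this (hypothetical module name):

    defmodule HelloPlug do
      import Plug.Conn

      # opts are prepared once, up front
      def init(opts), do: opts

      # every request flows through call/2 with the conn passed explicitly
      def call(conn, _opts) do
        send_resp(conn, 200, "world")
      end
    end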


yes sorry, I meant Plug.Router macro bodies. The rest of plug is fine.


Not to mention that the macros should be invoking regular functions, making it easy to skip macros if you so choose.

When I pivoted from Ruby to Elixir, I was expecting to do a lot more metaprogramming, but I found that regular functions with pattern matching already does a lot that I want for many cases.


Mark my prediction: as the big "non-core++" libraries get bigger and trashier with their macros, and more used by the community, hex.pm will become littered with even more trashy macro libraries, and people will stop saying "don't use macros".


Eh, there is also plenty of indication this may not happen. :)

- Many of the non-core++ libraries are considered done

- Many popular and upcoming projects have zero macros: Decimal, Jason, Telemetry, Mint, Finch, Req, etc

- All machine learning projects have either zero macros (Axon, EXLA, Bumblebee) or the macro layer is built on top of the functional API (Nx and Explorer)

- Phoenix LiveView, which is the most recent project from the Phoenix team, is very minimal on the use of macros

- Most of the Phoenix sub-projects have zero-macros: phoenix_pubsub, phoenix_html, phoenix_ecto, esbuild/tailwind, etc

I mentioned this in another comment, but in my opinion Phoenix is low on the use of macros. They are present in the endpoint+router, but then you are working with functions. I speculate the reason why they stand out is because they are the entry-point of your code. But even then, the socket API in the endpoint is being replaced by a functional API thanks to contributions from Mat Trudel, and the new verified routes are way less magic than the previous route helpers.
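For those who haven't seen them, verified routes (Phoenix 1.7+) are mostly just a sigil over plain path strings (sketch; assumes the `~p` sigil is in scope via `use MyAppWeb, :verified_routes` or similar, and that `user` is a variable you have):

    # paths are checked against the router at compile time; typos warn at build
    ~p"/users/#{user.id}"
    ~p"/users?#{[page: 2]}"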


> Decimal, Jason, Telemetry, Mint, Finch, Req

These are all truly fantastic Non-core++ libraries. Also some really good ones on the horizon like Bandit.

But I love Elixir, and I'm not calling out the good ones. I'm calling out the less good ones. Because I see a lot of young libraries that pick up on those patterns, and so I can only believe it's spreading (can't be bothered to actually quantify, sorry). I don't think I'm being a curmudgeon. Legibility and debuggability are real concerns.


I’m pretty sure that the libraries you consider core++ such as ExUnit and Ecto are considered feature complete and won’t get any bigger.


Correct, but remember core++ seeded atrocities like ExUnit.CaseTemplate.

Anyways my prediction is about Non-core++ libs, which I listed in gp.


I was never fully pleased with ExUnit.CaseTemplate. If you have suggestions I am all ears. :)


just yeet it and have people do it manually. To be honest, it's not fatal for the ecosystem. It will just be slightly less enjoyable over time.


I'm sleep deprived. Wouldn't it rather be "start" saying "don't use macros", given that macros are "bad"?


I haven’t used commanded, exmachina, or ash:

- Tesla has a mode which can be used completely without macros, and I am increasingly encouraging that it be the only way that it is used. So does the author (as of 2020): https://github.com/elixir-tesla/tesla/issues/367#issuecommen...

There is also `req` mentioned in a recent post as an alternative (it looks good, but I am still playing with it to see if it is a suitable replacement for Tesla in all cases).

- Absinthe is something of a compiler itself, because it has to strictly define things the way that is specified in the GraphQL spec. You can now import an SDL file, but you still need to hook resolvers and middleware into it. Honestly, I don’t think that the schema definitions in JS/TS are much better for GraphQL in terms of readability.

Being heavily macro-based means that there are sharp edges that are harder to work around when you want to add your own macros for code reuse purposes. That said, aside from the schema definition, Absinthe is entirely usable without macros. Within the schema definition, Absinthe isn’t making anything up, it’s using the same basic definitions that the GraphQL spec do, adapted for Elixir syntax.

Exmachina didn’t interest me because I don’t think much of factory_bot (which used to be called factory_girl), as I saw it abused far more than used well (IMO, it’s impossible to use correctly). Ash…looks like an interesting experiment, but I don’t know that there’s a lot of pick-up with it compared to Phoenix. And I have yet to find a use for CQRS/ES, so there’s no reason for me to play with commanded. I certainly wouldn’t consider any of these three to be "major" players in Elixir. Tesla and Absinthe? Yes.


Yes, sorry didn't mean to claim all of the libs do all of the sins: absinthe definitely didn't make up words.

The circular compiler dependencies of Absinthe really ticked me off. Was on a project where one small change elsewhere would mysteriously trigger Absinthe recompilation, which would trigger even more recompilation. Might or might not have been fixed in one of the newer compile-slim Elixir releases.


> now most recently ash

I've been using Ash in a side project and it's slowly growing on me. You have to twist your brain a bit to grok it (and I haven't, not fully).

Pros:

- a lot of things happen where you want them to happen.

I found that writing stuff with Ecto involves more boilerplate than with Ash, because you have to write your queries, and your changesets, and your API accesses, and wire them together; but what if you want a separate API... With Ash it's a single resource with a few macros.

- quite a few escape hatches into regular Elixir

You don't want magic behind actions? There are several ways to hook into what's happening, or even write everything manually. And to the calling code... it will still look the same

- Despite all the macros the errors are usually quite good and pinpoint the problems

Cons:

- too few people who truly understand Ash

- documentation is often too terse and concise. But you can't really blame the author who is doing a lot

- errors are not always good :)

I do agree with you on overreliance on macros in parts of the ecosystem


I am working at a company that uses ash and have a data science side project that doesn't.

The data science project did a sql query that I couldn't figure how to optimize in SQL and I turned it into simple elixir code. Queries that took minutes now took seconds.

Now, I know that ash can do both but I don't know how I would diagnose that problem or test alternatives in ash, because it hides way too fucking much of the underlying code.


I guess you'd do it the same way: look at logs, write an optimized query manually :)

But I do get what you mean.


As someone who loves Elixir and knows it, this is one thing that really makes it hard to learn Phoenix for me personally. I have struggled learning Phoenix because it is so heavy with macro upon macro that it feels like learning another language.


I have some bad news for you: the Elixir ecosystem is a consultancy-driven ecosystem. While that's at least better than a bigtech-driven ecosystem (cough cough), the changes you seek are less likely to happen because

1. They make it easier for consultancies to templatize what they do, which:

- keeps their employees tied to the hard earned knowledge

- keeps their clients tied to their services

At least then you know that those patterns are (usually) common and battle tested (or, at least, will be battle tested, cough cough ash), and unlike with a bigtech, "their problems are likely to be your problems".


> - keeps their clients tied to their services

I have heard this claim so many times but it is easy to prove it wrong given how much we focus on documentation and understandability around Elixir and main libs. I have had clients asking me questions and I would answer it by improving the docs or writing a guide, committing it to the repo, and then asking the client for feedback. Half of the Ecto guides were written like this.

It is not about keeping clients at all. It is about having a great on-boarding (and beyond) experience.


I second the notion that the Elixir documentation is generally great, especially the libraries and ecosystem bits. It is clear it is taken seriously and cared about. The level of pro-activeness of the Elixir core team is definitely something that's noticed and personally draws me to the language.


Jose you're also kinder than most in the sector of consultancy.

I'm not saying that these libraries are deliberately designed to be malicious. I'm saying the incentives don't always align. And some of that has to creep into the mindset of some of the design.

Also I wouldn't level the "crept into the mindset" accusation at Phoenix core or ecto or absinthe even, because for the most part they conform to expected terminologies.


is the claim Ash is not battle tested?


The a-ha moment when you grok "Elixir is actually a Lisp" is very exciting indeed. It's a beautiful language that fits all of its elements together so nicely.


That's a pretty good write up.

There's a fine line between making things easier and more readable and having less boilerplate code, and pouring a bit too much magic and macros into things.

I'm always curious to see what people do with Elixir. The BEAM environment offers a lot in terms of a platform, but if you're just using it to serve dynamic web pages, Rails is going to be a better pick in some circumstances because of the huge number of libraries, people who know it, documentation, hosting and the rest of the ecosystem.


> Rails is going to be a better pick in some circumstances because of the huge number of libraries.

This isn't really as big a problem as many people think. I've been out of Rails for a while now but haven't really felt the pain of missing anything with Phoenix. I've noticed there aren't often Elixir wrappers around third party APIs, but those are really not a big deal to create a small version on your own with only what you need. I always used to wrap the wrappers anyway! As far as UI stuff goes, there are actually a lot of really nice vanilla JS libs out there which play really nicely with LiveView's JS hooks.

I would be interested to hear what people have been sorely missing, though.

EDIT: Said "are" when I meant "aren't"


Oh, I don't think the library thing and ecosystem is really an obstacle for using Elixir so much as it makes Rails a 'safe bet'.

If I'm putting on my business hat, I need a reason to use Elixir instead of Rails and that probably has something to do with a better concurrency story, but I'm always curious to hear what other people are getting out of it.

My previous experience with BEAM was using Erlang in a semi-embedded environment and it was fantastic for that. Solid, dependable, deterministic, great response times... all of that. But while we did have a web interface to the system, it wasn't a web site.


For us it's the simplified production environment. We use BEAM's clustering capability to forgo things like redis and/or a message queue. It probably doesn't matter for every company, but it helps for us where we have zero full-time ops people.


That's a pretty good reason. Some people might react "well, you're going to need those things broken out anyway as you scale" or something to that effect, but it can be really nice to just not need them until you have scaled further. There are plenty of smaller applications where you just don't want or need all the extras and the longer you can keep it simpler, the better off you are.


Ya, this is one big bonus in my eyes. I have certainly heard the argument several times that "installing redis is really easy", and it is! But it's another moving part that often isn't necessary with BEAM.

In a current project I'm also able to use OTP to create a single microservice for my monolith that runs on its own node and the two can communicate with run-of-the-mill message passing without the need for an HTTP API.
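A hedged sketch of what that looks like (node and process names are made up; assumes the nodes are clustered and share a cookie):

    # connect the monolith's node to the microservice's node
    Node.connect(:"svc@10.0.0.2")

    # call a named GenServer on the remote node as if it were local
    GenServer.call({MyService.Worker, :"svc@10.0.0.2"}, :get_status)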


> Some people might react "well, you're going to need those things broken out anyway as you scale" or something to that effect, but it can be really nice to just not need them until you have scaled further.

Dist messaging and mnesia scale pretty far. Although it must be noted that they were originally built around rock-solid networking (two nodes in one chassis), so if your networking is flaky, you have to handle that; my recommendation is to make your network not flaky, but that's easier to say than it is to do.


I would pair BEAM with Postgres, not Mnesia as that's going to limit you in a lot of ways. Unless you reaaaaaally know what you're doing.


Well, I like to think I know what I'm doing. And I've participated in some big Mnesia operations.

That said, redis is in memory key-value, and Mnesia does in memory key-value too. If you're using Postgres's relational stuff or data bigger than memory, that's a different thing. Mnesia has some relational facilities, but I dunno if anybody uses them much; mnesia can use disc_only_copies, but that's not a good time. Of course, memory only gets you pretty big tables these days; I ran some mnesia nodes with 768GB, but a couple TB isn't too hard to put together these days.


The BEAM's multi-node scaling works really well. The argument that you'll need to break things out if you get big enough doesn't really hold water.


BEAM isn't magic scaling sauce, and you're going to run into trouble if you get big enough. But that's a "good problem to have". And it's probably 'enough' for many businesses as-is.


It's not magic, but neither are any of the alternatives. There needs to be a compelling "my constellation of services written in a variety of languages, communicating via a half-baked homebrew collection of API calls beats a constellation of distributed Erlang applications communicating via message passing over the BEAM" to claim that Erlang has scaling issues relative to the alternative.

The reality is that asynchronous, parallel, distributed computing is hard, Erlang makes it easier. Not easy, but easier.


And even if you need to break things out, you can build different variants of distributions that can all still work together.


And if you forgo those things you can easily test them.


My reason to use Phoenix instead of Rails: development speed. I'm just so blazingly productive in Phoenix that nothing else compares, including Rails. Every time I'm forced to develop in another framework I now get agonisingly frustrated by how long it takes to get anything done.


Your experience matches mine. And ChatGPT has been a game-changer - if there isn't an Elixir library for some third-party API, GPT is very good at generating some wrapper functions in Elixir using the API docs.


Where I previously worked we used Elixir as the backend for a early-stage social app.

We only really encountered the lack-of-libraries issue a single time (we needed a Neo4j driver that supported their hosting platform).

But aside from that, Elixir was a great choice and served us well. When our app went semi-viral all we needed to do was increase our instance size in AWS, never really needed to worry about horizontal scaling thankfully.

Our app also needed a lot of realtime messaging and updates, which was genuinely a breeze with Phoenix. I don’t think I could ever to back to trying to finagle a websocket API in Python.


If anything, scaling Elixir apps favors vertical scaling. Horizontal scaling starts hitting the limits of dist Erlang (since, unless you restrict the topology, every node connects with every other node).

It makes for an interesting time when deploying to Kubernetes. But we figured out how to increase the number of cores per Erlang node during peak daily traffic.

I’m working for a Nodejs shop now. For all its use of async stuff, it simply does not scale as well as Elixir (not to mention the deep dysfunctions in the Nodejs ecosystem).


If I recall the things I've read on this in the past, the horizontal scaling limitations of distributed Erlang are encountered around like 500 to 1,000 nodes? Which I have to imagine means the large, large majority of shops using Elixir and scaling horizontally will be just fine, as they are typically in the size of 2 - 10, or maybe 20 nodes.


There is a reason stuff like Partisan were conceived - https://arxiv.org/abs/1802.02652

If you do a lot of cross-node talk with, say, Phoenix channels, then lowering the total number of nodes helps (n*(n-1) connections for a full mesh). And since the BEAM scheduler will try to use up all available cores by default, past a certain threshold of nodes it makes more sense to use smaller numbers of nodes with more cores per node. When you also have, say, an observability agent taking up 1000 mcore, doubling the instance size makes a difference. In the last shop, we would idle at 4-core instances, jump to 16-core instance sizes for the morning rush, and go back to 8-core for the late afternoon traffic. I had tried to horizontally scale it with 4-core machines, but saw a lot more instability and performance issues.

This is one of the advantages with using Elixir or Erlang. You simply don’t get this kind of operational characteristics with Ruby, Python, or Nodejs.


Phoenix is obviously “the big thing” and no other language/framework combination has something as good as LiveView. Rails cannot compete there. Other than that, building Livebook apps for our customer success team has been really great and easy.

There’s quite a big push for ML in Elixir right now, and then there’s also Nerves for embedded programming, which I’ve never dabbled in but it looks nice.


LiveView is definitely Elixir's killer app. It's what takes Phoenix from "a really good web framework" to "the singular best web framework", in my experience. Ecto is also a best-in-class data-layer adapter (ORM seems wrong and reductive).


I'm primarily a Go developer who loathes ORMs. I have to admit that Ecto is pretty great.


Do you have any write up about Livebooks for your CS team? Sounds like an interesting use case!


> no other language/framework combination has something as good as LiveView

You just have to look. Blazor does what LiveView does, and on the upcoming .NET 8 it will leapfrog it.


As someone currently using bleeding-edge Blazor at work, and Phoenix on my own time, this is wrong to the point of comedy.

The big thing that LiveView has going for it is that it transforms the client session into just another process to communicate with, using the same message passing tools as the rest of the language, with very little headache because concurrency is already an expectation.

Blazor can never achieve this, because C# is not a concurrency focused language.


People have been saying this about Blazor for years, and it's never reached that point.

Where can I see what .NET 8 will bring to Blazor to change this?


Ecto is also just an amazing DB wrapper in general. You basically write SQL in Elixir, which means you can optimize the crap out of your queries, and it's easy to step down to actual SQL where needed via fragments.


And you can still use composable queries if you want to (equivalent to Rails' named scopes).


The transparent parallelism makes it an amazing fit for pretty much anything and everything that doesn't require minimal resource usage or microsecond-level response times.

1. Web apps work awesomely because they tolerate sudden bursts of load much, much better than Rails. Rails deployments regularly fall over on bursty workloads unless complex load-balancing setups have been put in front of them.

2. Which brings me to: operations / infra story is much simpler, and it is thus cheaper and easier to maintain Elixir apps, even when they span multiple nodes (because Erlang's builtin mechanism to do mesh networking / distribution works well enough for 99% of the projects out there).

3. Complex service trees f.ex. scrapers that also present their data in a visual CMS backoffice UI and background tasks infrastructure -- all of that can live on literally the same node, share the same code repo and be very easy to manage and maintain as a result.

4. Oh, that also reminds me: almost anything and everything you would need Redis for, you can have it natively with Erlang/Elixir.

5. Corollary to point 3: long-lived daemons that are not even web apps / APIs work really well under Elixir due to the supervision tree guarantees of Erlang's underlying OTP framework. If you configure it well (the defaults are not always optimal) you can have your invisible mesh of services survive extended outages of the 3rd party APIs it depends on.

6. In recent years, LiveView and Livebook are amazing additions that help severely reduce the need for JS and provide an alternative to Jupyter notebooks, respectively. Though it has to be said that LiveView works best when the latencies to the backend aren't too big, say, 200ms or less. Beyond a certain human-perception threshold it starts to feel sluggish, however.
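
Point 4 can be sketched with ETS, the in-memory term storage that ships with the runtime (a minimal example; a production cache would typically live behind a GenServer or use a library like Cachex):

```elixir
# A tiny in-memory key/value store, the kind of job Redis often gets
# deployed for, using nothing but the built-in ETS tables.
table = :ets.new(:cache, [:set, :public])

:ets.insert(table, {:session_count, 17})
:ets.update_counter(table, :session_count, 1)

[{:session_count, count}] = :ets.lookup(table, :session_count)
IO.puts(count)  # prints 18
```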

I could go on and on and give plenty of examples from my career. I've replaced a lot of shell scripts and PHP / Python / JS services with Elixir, and each migration was a crushing success in every way. We're talking 15+ such projects. Only 1-2 of them got back to their original language simply because the companies / teams were too risk-averse and didn't want to hire Elixir devs (or train their current people). But the network effects are always a chicken-and-egg problem; if people never want to bet something on the tech then it obviously is never going to be appealing to them.

There are many who made the plunge though and are happy with the results.


> If you configure it well (the defaults are not always optimal) you can have your invisible mesh of services survive extended outages of the 3rd party APIs it depends on.

This is something that annoyed me a bit with OTP. The basic strategies aren't really enough for that, so you need something like https://github.com/jlouis/fuse

I wrote something like that myself, but it hasn't seen a ton of use: https://github.com/davidw/hardcore


Well it's pretty normal to write such addons to the OTP, depending on your needs, but yeah I'd love it if a few more of them were builtin.

In my comment above I was simply addressing the fact that you can e.g. configure 500 retries before the supervisor gives up as opposed to the normal 3 (or was it 5?).
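
For reference, the supervisor defaults are `max_restarts: 3` within `max_seconds: 5`; raising them looks roughly like this (a sketch using a trivial Agent as the child, standing in for a real worker):

```elixir
# Allow up to 500 child restarts within a 60-second window before the
# supervisor itself gives up, instead of the default 3-in-5-seconds.
children = [
  %{id: :flaky_worker, start: {Agent, :start_link, [fn -> 0 end]}}
]

{:ok, sup} =
  Supervisor.start_link(children,
    strategy: :one_for_one,
    max_restarts: 500,
    max_seconds: 60
  )
```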


Having done both Rails and Elixir, Elixir is vastly easier to scale. Not just because it can use all the available cores, but also because a REPL in a production environment makes troubleshooting and identifying bottlenecks easier. When I scaled Rails, it was a struggle, and the epiphany I had when I had to modify how Sidekiq worked was that Elixir and Erlang have built-in primitives that would take a major refactor of Sidekiq to replicate.

One of the things I had success with in non-web applications is writing Kubernetes operators in Elixir. Though I also want to experiment with the embedded Prolog-in-Elixir for those.


> the harder something tries to convince me of anything, the more I expect it to suck.

That's me vs web development in general, and especially the JS ecosystem. Other, similar red flags: hand-holding, "best practices", "anti-patterns" and other voodoo and cargo culting.

The most suspicious one is prettiness:

- custom DSL syntax without data representation

- cosmetic wrappers that are just thin layers without substance

- "everything is an X" for the sake of faux consistency


- cosmetic wrappers that are just thin layers without substance

If you've ever tried to write keyword lists in Erlang... The keyword list cosmetic wrapper is fucking amazing, not only because it makes it easy to read, but because it makes it easier to debug and lets you see at one glance "this is a keyword list".
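
For comparison, the Elixir sugar and the underlying list of tuples are literally the same term:

```elixir
# [one: 1, two: 2] is pure syntax sugar over a list of {atom, value}
# tuples, which is what you'd write by hand in Erlang: [{one, 1}, {two, 2}].
sugar = [one: 1, two: 2]
plain = [{:one, 1}, {:two, 2}]

IO.puts(sugar == plain)           # prints true
IO.puts(Keyword.get(sugar, :two)) # prints 2
```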


> - custom DSL syntax without data representation

This is an interesting comment in a post about Elixir because the Elixir ecosystem is actually full of things that should just be data but are instead macros being used in modules. Phoenix in particular is an especially egregious example of a library that is badly made in almost every regard

> - cosmetic wrappers that are just thin layers without substance

This one is also funny because Elixir itself is little more than a fairly small set of convenience libraries, a syntax and a very good tool for project management (`mix`) on top of Erlang. You can skip the syntax and the convenience libraries for the most part and actually just use Erlang in a `mix` project and get the best of both worlds. The convenience libraries usually are bad makeup on top of Erlang in many cases and you'd be better served just learning the Erlang way anyway.


Phoenix is popular partly because people like its API and its convenience macros. It won out over José's own framework, Dynamo, which he scrapped in favor of becoming a core contributor to Phoenix. It's easy to make yourself sound wise by complaining about macros, but this largely boils down to personal preferences, and a lot of people have different ones from you.

For most of the people who actually use Phoenix, the DSL makes intuitive sense, so there isn't much learning curve, and it is such an important library to applications that use it that it's worth taking the time to really learn it, whereas that might not be the case for other macro-heavy libraries. Most libraries don't go that route, though.


I know erlang better than I know elixir but I still prefer to use elixir when I have the choice.

It may be "little more than" convenience, but especially consistency decisions like always data-first functions plus the pipe macro is already a monumental ergonomic leap over erlang. I love erlang but, like php, it is a language where I have to look up the argument order and config options of every function every time I use it.


> Phoenix in particular is an especially egregious example of a library that is badly made in almost every regard

Can you elaborate on this? Why do you think this?


The interface to almost everything either contains or is based on a macro. This persists even in cases where you can clearly see there needed to be no macros at all.

The non-standard (won't work with anything else) and basically superfluous websocket protocol has bad fundamentals and has been marred by bad JavaScript code being needed for it. On top of that it's pretty bloated.

The actual channel processes could've simply followed a standard Erlang interface but someone (...) felt they needed to put their fingerprint on it, I guess, so it kind of doesn't. And yes, again everything surrounding channels is a bunch of macros until you at least get to write and use normal functions in the actual channel module.

They've had a long history of arbitrarily deciding to hide settings/configuration from dependencies they use and having no actual argument for doing so when asked to expose them.

If the part about macros being overused wasn't bad enough Phoenix also promotes this type of thinking by overusing them even in generated code, so people can follow the bad example.


> The interface to almost everything either contains or is based on a macro.

This is not true. The endpoint and router are macro based, but once it is routed, it is all functions. Your controller actions are functions, calling the view/template is a function, your channel is functions, the model/context layer are functions.

When it comes to behaviors, we do our best to follow Erlang/OTP conventions, but deviate where it makes sense. For example, channel has “join” instead of “init” because you can refuse a join but you can’t refuse an init. They have different return types and augmenting init could be confusing if later you were to write a plain GenServer.

It is easy to look from outside and say “someone wanted to put their fingerprints on it” but actual discussions were had in pretty much all of these topics. I suggest asking around. If you asked for something and we had no actual arguments, then please point me to such occurrences. I will be eager to clarify (and as a rule we generally do).


Great reply José, obrigado.

Since you're here, can I ask why Elixir doesn't have an explicit `return` statement? I do miss it sometimes.


Any answers to Jose's reply?

If not, your criticisms seem to hold no water.


Elixir is probably my favorite dynamic language and a lot of that has to do with the community that sprung up around it. I wish it were more popular.


I don’t wish it to be super popular.

I heard the argument before that JS (and Nodejs) is popular (top 3), so you should go into it.

I’m now in a Node.js shop. Turns out, the way Erlang and Elixir are designed serves as a filtering function for hiring. People who just want to code without putting a lot of thought into it don’t go into Elixir and land in JS instead. (All the coworkers I have now want to make things better, but we are saddled with code that has been … oddly written.)

At the last Elixir shop I was at, I was part of a small crew that was more experienced as a whole than other shops I have been at. And I liked it.


Popularity is a multi-faceted issue w/r/t the quality of engineers in any given language.

One caveat to any of it is that the easier a language is to grok and the more use cases it has, the more likely it will attract all types of developers, and there will always be more "bottom end" developers than "top end" developers, as you'd expect in any large distribution.

That said, here are some facets to consider about this:

- It's not likely it's the language (or, to be charitable, not just the language) at play here. As noted above, the more popular something is, the more likely you end up with a wider variety of folks with very different backgrounds, skill levels, etc. Python and PHP have similar issues.

- A language's popularity is often correlated with its use cases. For example, JavaScript is hugely popular across a wide swath of problem domains, from the server, to the web (where it has a de facto monopoly right now), to desktop apps, to even mobile now (Ionic, NativeScript, etc.). This means the top developers in the language often go to very lucrative positions, and you'd presumably have to be working alongside them, or on those same teams, to be working with them.

- It's harder in a bigger pool to work with the top N% of developers in any given language (or, better yet, any given environment / problem space deploying that language as part of its core technological backbone). The big "sea" in the middle is what you're most likely to be a part of.

I think smaller communities benefit from organic interest: people who are enthusiastic to use the technology. The perceived quality of those developers is therefore higher (and it often does correlate; see the history of Clojure).

I don't think a language in and of itself is to blame, except in that maybe, because some languages like Python and JavaScript are so easy to pick up as opposed to, say, Java, Kotlin, C#, or Rust, they attract more developers in general. That said, I think it's the laws of distribution at work, and not really driven by the language per se.


The syntactic sugar for the list of tuples (aka the keyword list), such as [one: 1, two: 2] instead of [{:one, 1}, {:two, 2}], and the other ... do: macros drove me crazy when I first learned Elixir. But as soon as I familiarized myself with it and a few other gotchas, Elixir managed to become my favorite and main language.
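
The `do:` sugar works the same way: `do`/`end` blocks desugar into a keyword-list argument, so these two calls are identical (a small sketch):

```elixir
# `if` is just a macro taking a condition plus a keyword list;
# the do/end block form is sugar for the do:/else: keywords.
a =
  if 1 < 2 do
    :yes
  else
    :no
  end

b = if(1 < 2, do: :yes, else: :no)

IO.puts(a == b)  # prints true
```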


Seems to be struggling at the moment. Archive link: https://web.archive.org/web/20230803145324/https://wiki.alop...


Disappointed. I thought it was going to describe a way for cynical curmudgeons to get out of their rut.


Elixir is cool, I used to use it for web app backends before adopting TypeScript and Rust. The reason I adopted them was the same reason I stopped using Elixir after a while: type safety. It was getting increasingly hard to be productive in Elixir in a large application due to runtime type errors that are caught in statically typed languages.


Strange, I cannot recall the last time I ran into one, and I write a lot of Erlang. Both Erlang and Elixir share the "Let It Fail" philosophy; additionally, it is highly encouraged to use pattern matching, guards, and error-handling techniques to deal with potential type-related issues and errors.

On the other hand, I get it all the time with Python, for example.


I've found that pattern matching in the function head and guards get you most of the way there.
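
A sketch of what that looks like, with a hypothetical `Shape` module: each clause matches one shape of input, and guards reject wrong types at the boundary:

```elixir
defmodule Shape do
  # Dispatch on the structure of the argument; guards act as lightweight
  # runtime type checks at the function boundary.
  def area({:circle, r}) when is_number(r) and r >= 0, do: :math.pi() * r * r
  def area({:rect, w, h}) when is_number(w) and is_number(h), do: w * h
end

Shape.area({:rect, 3, 4})      # => 12
# Shape.area({:rect, "3", 4}) #    raises FunctionClauseError
```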


Yes however I get that in TS and Rust while also having stronger type guarantees. I don't think pattern matching is an alternative to static types, it's a supplement.


Well, I suppose to each their own. I use languages with and without static typing. I love Ada the most though, because the way you define types is very different from the most widespread way. In Ada, you define the range.

Examples:

  type Day is
    (Monday,
     Tuesday,
     Wednesday,
     Thursday,
     Friday,
     Saturday,
     Sunday);
  subtype Business_Day is Day range Monday .. Friday;
  subtype Weekend_Day is Day range Saturday .. Sunday;
  subtype Dice_Throw is Integer range 1 .. 6;

  type Arr_Type is array (Integer range <>) of Character;
  type Color  is (White, Red, Yellow, Green, Blue, Brown, Black);
  type Column is range 1 .. 72;
  type Table  is array(1 .. 10) of Integer;

  type Device_Register is range 0 .. 2**5 - 1 with Size => 5;
  type Commands is (Off, On);
  type Commands2 is new Integer with
    Static_Predicate => Commands in 2 | 4;


Rust, sure. Love me some Rust.

TypeScript's type system isn't sound and you can still get runtime errors. (Type systems should be bomb-proof or just get out of the way.)


Is there pattern matching in TS function heads?


How tested were your applications?


I'm really enjoying getting to grips with Elixir, but I'm still really in two minds about Ecto having a macro-based SQL ORM.

Like any ORM, the abstraction leaks, but it really throws you off course when you're trying to map the host syntax to raw SQL.

    from(t in SomeTable, join: x in assoc(t, :y), where: x.z == ^y)
Like, it's macro enough to make SQL feel more first-class...but not enough to actually make it first class, so you've gotta learn Ecto's quirks.

It's also chainable so it's a macro over a query builder that eventually resolves to SQL at runtime.


That was a good read, well done. I've been using Elixir for about a year now and I just smile when I'm coding. I'm still a little unsure about its place and where it'll go, but I have loved exploring and learning the BEAM with a slightly nicer syntax. José is great, and watching the introduction of a type system is interesting too.


I've landed on SvelteKit and classless CSS for most of my hobby projects.

I am interested in Elixir though, especially Phoenix, but found it to be difficult to manage and get started with in Windows.

Still looking for the easy way into learning to work with web sockets as a 41 year old man with very little spare time.


I'm using Phoenix for my side project on a Windows machine. It hasn't been too problematic for me. Maybe something changed since you've tried it.


There used to be a couple of pain points when working with Phoenix on Windows. The bcrypt library wouldn't build, something needed to be run in Administrator mode to establish their symlink workaround, and PostgreSQL is a little more irritable on Windows. SASS (some Node dependency in the JavaScript asset-managing stuff) also flat out didn't work on Windows, but there was a drop-in replacement.

I know the SASS issue was solved (as a side effect of eliminating the node dependency), and I believe the bcrypt issue was, too.


What is "classless CSS"? Just using style attributes?


Probably just a css file that styles the html elements only without using any classes


No style attributes. You just use HTML markup and use a classless CSS framework to take care of making it look nice. My favorite is Marx, but there are others you can find here: https://github.com/dbohdan/classless-css

Water.css, MVP.css, sakura, and Tacit are among the most popular.


I've always liked Pico myself.


Use VirtualBox and make a VM with Ubuntu, install Elixir, and connect via SSH. This is what I did when I wanted to try it out but only had Windows 8.


In my opinion, this is just too much effort these days.


WSL2 should get you going?


That's what I would suggest as well. WSL2 and use asdf[1] to manage the erlang/elixir versions.

You could also use containers pretty easily as well with docker or podman for windows and the official Elixir images[2].

[1]: https://github.com/asdf-vm/asdf

[2]: https://hub.docker.com/_/elixir


WSL2, while amazing technology, is a resource hog and still a bit buggy in my experience. I use it w/ linux from time to time when I don't have a choice, but when I'm coding, it just feels like a hyper-bloated IDE.


If I volunteer at a nonprofit and I'm trying to run a team of volunteers who would...

  1) probably not stick around for long
  2) need to be trained from scratch
  3) not have a solid development background
Would Phoenix be a good choice, given that you can do so much with it without needing to learn many other tools, or would it be better to start with something like Supabase + a JavaScript front-end framework and hope we'll never have to deal with monsters like Kubernetes?


I would probably go with JavaScript and Supabase; this would require you to only really worry about the frontend. JavaScript is also probably easier to train people on, given how many resources there are out there for teaching people to write JavaScript. Supabase also has a great free tier, which will help your nonprofit. Hell, you might even want to contact them; they might give you a better tier for free. They are really friendly there.

Elixir is great, and developers love it but training new people without a solid development background might be hard.


Phoenix does not sound like a good fit.


Elixir seems to benchmark at half the speed of Python, which would put it about 100x to 200x slower than C++. That's a lot of speed to give up for the silver-bullet promise of immutable data structures (which I've never understood the appeal of in the first place).

https://elixirforum.com/t/elixir-vs-python-performance-bench...


It's unclear why TechEmpower reports low performance for Elixir/Phoenix. But from what I've heard, in real-world applications the performance tends to scale much better than Python or Ruby.


it's because the people who contribute to techempower benchmarks are zealots who believe that it's far more useful and important to show their One True Language in the Best Possible Light than to give you a useful benchmark. take a look at the source code for some of them and see for yourself.


The link is about execution speed, I think if you start talking about 'scale' that's a diversion, since that isn't about the language and is more about the architecture of the program.


These benchmarks have been thoroughly studied by the elixir community (even myself) and are... Trash.

They compare things that have nothing to do with production, use unrealistic code for most languages, and keep refusing PRs to fix it for Elixir.

The community simply decided to stop trying to engage and let that code die.


It looks like it got HackerNews'd. The stack seems to be Happstack, a Haskell lib.

Perhaps if it was served by Phoenix... /obvious comment is obvious


You're correct.

It says at the bottom: powered by https://github.com/jgm/gitit

Readme states that: "Gitit is a wiki program written in Haskell. It uses Happstack for the web server and pandoc for markup processing."


I usually just blindly upvote anything with Erlang/Elixir in the title, but this one I read in its entirety over some Pho

Excellent writeup!


Elixir seems cool and I love its tooling, but the more I use Datomic, the less I want to spend my time dealing with SQL and its baggage unless I absolutely have to.


what would prevent you from using datomic with elixir? https://github.com/edubkendo/datomex

there are other datalog-y tools as well, ie https://github.com/naomijub/translixir


I lost my enthusiasm for building a Jenga tower of niche community libraries.


That is literally Clojure development.


You can get pretty far with Cognitect's libraries and Java interop.


Lisp user here. I couldn't see what he was talking about re: "Elixir is a Lisp" and the "why" was pretty hand-wavy, but the author does seem pretty stoked about whatever he/she saw so good for them.


It's got good macros, and both regular code and macros operate on the same core data structures.

It has the interactive features for inspecting and modifying a running system that Lisps often have.

It has reflection APIs.

Its integers are arbitrary-precision, like Lisp's.

etc.

Seems pretty Lispy to me.
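
Concretely, on the "same core datastructures" point: `quote` hands you the AST as ordinary tuples, atoms, and lists, which macros then manipulate like any other data (the metadata contents vary by Elixir version, hence the `_meta` wildcard):

```elixir
# The AST for `1 + 2` is a plain three-element tuple:
# {operator, metadata, arguments}.
ast = quote do: 1 + 2
{:+, _meta, [1, 2]} = ast

# Because it's just data, you can inspect it, transform it, and
# evaluate the result.
{result, _bindings} = Code.eval_quoted(ast)
IO.puts(result)  # prints 3
```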


Hygienic macros do not make a Lisp, but... they get you very close to the spirit of the thing. I think that's why the author was excited (and it's a good reason to be).



