Hacker News
Why I program in Erlang (2012) (evanmiller.org)
325 points by ianbutler on Nov 1, 2022 | 136 comments



I discovered Evan Miller around the same time I started using Erlang, circa 2009/2010. I used Chicago Boss, his Erlang framework, in three projects.

The third project was a complete rewrite of a really badly built dating website written in PHP that would crash at least once a day and was unbearably slow, built by 5 people over a full year. We rewrote everything in 3 months, with two developers, while adding features that were impossible to do otherwise. One lightbulb moment was when we realized we could use ejabberd and the XMPP protocol for much more than messages and chats, including handling our news feed. Once we launched the rewrite, the website became extremely fast and stable. Since then, the BEAM has always been my go-to platform, even if now I use Elixir.

There is no platform/language that makes me feel more productive and confident about the code I write. The fact that I can introspect the code (the REPL is way more than a REPL), the crazy stability of the runtime, and the simplicity of the language make it a no-brainer.

LiveView is a game changer. We built our entire e-commerce stack on it, it runs flawlessly.

My only issue with the Elixir/Erlang ecosystem is its lack of business-oriented libraries and solutions:

- not many payment gateway integrations

- no CMS / e-commerce platforms

- almost no ready-to-use open source apps

Yes, it's not as fast in raw performance as many other programming languages, but in practice it's fast as hell thanks to concurrency, and performance doesn't degrade under load.

And so many exciting things are being developed, from LiveView-based native apps to ML and IoT.


> The third project was a complete rewrite of a really badly built dating website written in PHP that would crash at least once a day and was unbearably slow, built by 5 people over a full year. We rewrote everything in 3 months, with two developers, while adding features that were impossible to do otherwise. One lightbulb moment was when we realized we could use ejabberd and the XMPP protocol for much more than messages and chats, including handling our news feed. Once we launched the rewrite, the website became extremely fast and stable. Since then, the BEAM has always been my go-to platform, even if now I use Elixir.

This matches my experience with one of the few opportunities I had to use Elixir in "production". I took over as a replacement "technical co-founder" (in actuality: a part-time contractor) for a startup, and inherited a Node codebase that would take an hour or so to batch process a bunch of student data every night. The Elixir replacement I wrote did it in minutes (even before actually making any effort to parallelize it), and it thus turned into an hourly job and eventually near-realtime. I probably could've made similar gains while sticking with Node, but it would've been a lot harder, and the rewrite ended up creating a lot more opportunities for feature development.

Unfortunately that startup failed to raise funds, and it got put on permanent hiatus once the founder's runway ran out. I also nearly became an exception to the "nobody is ever fired for picking IBM" rule when I migrated the hosting from Heroku to Bluemix (the latter deciding to eat our Postgres container for lunch, prompting a mad scramble to migrate yet again to AWS). Fun times, fond memories.


Hey that’s awesome. I’m also learning elixir and I deployed ejabberd for chat. I switched to phoenix channels for chat to have a lot more control. What are your thoughts on that? Any suggestions or articles or code repos you can link me for inspiration?

Fwiw I was using Firestore for realtime before, since it was easier. Now I like Phoenix channels (I'm still struggling with the language, truth be told, but it makes sense after reviewing the code like eight times).

My app (a social app, still WIP) currently uses Flutter for the mobile apps, a Node.js backend for the REST APIs, and a Phoenix server for the chats. I'd be interested in having a discussion if that's okay.


Hi @mradek. This ejabberd system was around 10 years ago, before WebSockets. I haven't built a chat system since, but Channels would be perfect for it, especially in a multi-platform context. If not, I'd go simple and just use LiveView on the front and PubSub on the back.

The ejabberd system was great at the time but lacked flexibility. But I think we saved at least 3 months of dev thanks to it.

I see you are using 3 languages; that's a lot of things to coordinate and keep in your head. May I ask why you use Node for the REST API and not Elixir? That would simplify things a lot.

Sure you can email me at jm at producture dot com


Thank you, I will def reach out.

I am using 3 languages because I originally started this project with a Node.js + pg backend and Flutter mobile apps. I've been experimenting with a lot of stuff as well, at one point switching to Go because I had some CPU-heavy work, but I find myself going back to Node.js because I'm most productive in it and I feel that its performance (I/O-wise, since I'm mostly doing DB or API calls) to dev time ratio is pretty good, especially considering the number of well-made dependencies and SDKs available.

I am still a noob at Elixir (only a few days of total experience), so maybe some day I might migrate the REST API over. However, Phoenix channels are amazing, and for that piece of the overall system I'm happy to have stumbled upon them. I found the Phoenix channel API a lot easier to work with, both from the server perspective and on the Flutter client side. I still need to figure out the proper way to auth (right now I just auth on socket connect against my Node.js API, then continue).

My backend is super modular, feature wise, so I can swap out any piece for something better easily.

I've been working on it since May :D


Zotonic is an Erlang CMS. https://zotonic.com

We have payment gateway solutions (Stripe, Mollie, Buckaroo). See: https://github.com/zotonic/zotonic_mod_payment

It has a full-blown MQTT communication bus; browsers connect as MQTT clients to the CMS via a WebSocket.


I was really blown away by how well Erlang/Chicago Boss handled throughput when I first gave it a spin. A webpage I converted over handled multiple orders of magnitude more incoming requests before failing than the PHP version it replaced.

CB would get frustrating when you wanted to extend certain functionality, however, which Phoenix isn't quite as bad about.


We've built our own CMS in Elixir, with the interface in LiveView, plus an admin. It builds pages as a single JSON document made up of a list of blocks and components, which map to Elixir functions and can render themselves.

It's one of those things we want to release when we get time, as we think the primitives will be useful for people.


Slightly off topic, but what happened to Evan Miller? He hasn't posted on his blog since 2014, and his YouTube channel is also now defunct.



After programming in elixir/erlang for ~8 years (for fun and professionally), I have to agree. I've had to dig into the "why" of BEAM a number of times, and I can't remember ever being disappointed. The decisions that they make might be frustrating at the time ("why does passing a shared sub-binary through a process prevent it from being GC'd") but I always come away from the docs with the understanding of how the design decision was the right one in the context of the larger system, which is very pleasant. They just took a handful of requirements around concurrency and fault tolerance, and all decisions logically followed from those. BEAM is not good at everything, but it is good at the things it wants to be good at, which is refreshing.


What kind of product would warrant the cost/benefit of using erlang? Where did you use it? Just curious


Things I would use Erlang for:

- embedded systems (network switches, control systems)

- middleware (databases, message brokers)

- generic web services

Why I would use it:

- it has a lot of operational support tools to remotely monitor and debug a running production application built into the runtime.

- it's not necessarily the fastest language, but it's pretty fast for many things. The thing is, if you program in idiomatic Erlang from the start, your application will naturally scale well without your having to do anything, and it will even handle overload gracefully.

- Erlang and Elixir are both well designed and expressive languages with few flaws and intelligently made tradeoffs. It's an enjoyable experience.


RabbitMQ is a good example of middleware


Erlang is really good for programming highly scalable and fault-tolerant distributed systems. It was designed by Ericsson (a large telecom equipment manufacturer from Sweden) for running on telco equipment with extremely stringent uptime and availability requirements.


For Elixir, I really like it for webapps and services.

- Ecto (data mapping) is fantastic; really glad they separated the SQL part

- Oban (job processing) is fantastic

- LiveView is really good (a frontend framework that uses WebSockets to build realtime apps without writing JS, though it uses JS under the hood)


Add to that the sensible date and time primitives baked into the language as well as the super powerful and ergonomic cldr library for i18n.


I'm working on an IoT cloud to which you can connect millions of devices (to a single server), and it's very easily extensible to different device protocols (we write a driver for each device to expose all of them in a unified way). Erlang is perfect here because it can easily support lots of connections with low latency, and you don't need to restart anything when you make changes, which makes development much easier.


> After programming in elixir/erlang for ~8 years

Are Elixir and Erlang very similar? Do they belong to the same family?


Elixir compiles to bytecode for the Erlang VM (BEAM). I would put it in the same relationship category as Clojure and Java or Kotlin and Java. Different language but there's a lot of synergy in the libraries/ecosystem/etc. If something goes wrong in your Elixir application it would be a really good idea to understand Erlang and BEAM.


I can't speak to Kotlin, but Clojure : Java :: Elixir : Erlang doesn't seem like a great analogy. Clojure and Java have very different philosophies, while Elixir has virtually identical semantics to Erlang.


Kotlin : Java :: Elixir : Erlang

Clojure : Java :: LFE : Erlang


Mayyyybe. LFE is still much closer to Erlang than Clojure is to Java.


Yeah, I learned LFE a few years back after years of (sadly only) hobby Erlang programming. I don't think anything in it surprised me. It felt very much like: What if Erlang's syntax were replaced with sexprs and we used Lisp-y style macros? Clojure's relationship to Java is nowhere near the same.

Knowing Erlang plus Scheme and CL, LFE felt like exactly what I expected, with a strong Erlang vibe. Knowing Java plus Scheme and CL, Clojure felt nothing like Java except that it could access Java classes and objects.


Fundamentally, the BEAM is very opinionated, with a limited range of possible language designs that would work on it: no mutability, for example.

I love the opinionated, constrained nature of Erlang and the BEAM, vs the kitchen sink approach of Java and the JVM.


Maybe Scala : Java :: Elixir : Erlang


I'm not too sure. Scala really has a crazy number of language features, which makes its scope way bigger. I'd say Kotlin : Java is closer :)


Actually, Scala isn't that big of a language, especially not compared to Kotlin. Scala just prefers one more complex but fully general feature over adding many less complex features that cover the same ground, which is more the Kotlin approach.


At its core, Elixir is effectively a wrapper around Erlang, with small tweaks to the core library, Ruby-ish syntax, piping with `|>`, Mix project management, Hex package management, ExUnit for testing, and several other quality-of-life improvements. Elixir can easily call Erlang code, which is not easy the other way around. The biggest difference is that Elixir has a vibrant community and big projects like Phoenix, LiveView, and Livebook.


I would even say Elixir IS Erlang, and so is LFE.


what are the things it is good at?


Amazing for:

- Web -> Phoenix ( https://www.phoenixframework.org/ )

- Parallel data processing (like scrapers, ETLs, etc...) -> GenStage / Flow / Broadway ( https://hexdocs.pm/gen_stage/GenStage.html )

- IOT -> Nerves ( https://www.nerves-project.org/ )

Soon:

- Mobile apps -> Elixir Desktop ( https://github.com/elixir-desktop/desktop )

- ML -> Nx ( https://github.com/elixir-nx )


Presumably, concurrency and fault-tolerance?


concurrency, general purpose computing, excellent developer ergonomics and tooling. Great community.


> general purpose computing

As a huge Erlang fan, I’m not certain I’d say “general purpose computing” is a strength of it though.


Python and Ruby are general-purpose programming languages. Is there anything they do extremely well that Erlang/Elixir can't?

From the following:

https://www.erlang.org/faq/introduction.html

    1 What is Erlang
    1.1  In a nutshell, what is Erlang?
    Erlang is a general-purpose programming language and runtime environment. 
    Erlang has built-in support for concurrency, distribution and fault tolerance.


Server-side computing is probably my take. It does handle more than that (e.g. Nerves), but server-side computing is the big win.


another example of Erlang for concurrency -> MQTT server (i.e., https://github.com/emqx/emqtt)


> ... [I]n my experience it is rarely necessary to refactor Erlang code in the same way the object-oriented code needs refactoring from time to time.

That... depends. If you work with only highly disciplined people who want to do the right thing always, then yeah, I agree. But if you work with people who have deadlines or need to get stuff done _now_, you could see functions that take unnecessary arguments, or excessively large records, and those sorts of things are hard to work with.

I think this person is an excellent engineer who loves to code and is really good at it, but hasn't worked with Erlang in a professional capacity. Which...

> I have spent a large chunk of my free time programming in Erlang

ding ding ding


I was thinking the same. At this point it's hard for me to believe that a programming language can provide that much productivity. You always find counterarguments from people who have worked in very large teams that use the language in question. The counterarguments also tend to over-correct: just because one company has a mess written in <lang> doesn't mean <lang> is bad.

I've been writing a lot of plain JS in my free time lately and it's been awesome. Just like the article, I've written all kinds of things in it and it's been a breeze. I don't think this is because JS is great - it's just because I have a lot of practice and I don't have to worry about real business requirements, deadlines, other people, or other people's code. If these side projects of mine became team projects where each member has varying skills, interests, and experiences, I'm sure it'd become a mess - especially if I quit!


For some applications it is quite hard to overstate how much easier Erlang (or in my case Elixir) does make things. For a web backend, the concurrency approach means that you can handle each request as synchronous, "blocking" code (the easiest to reason about). In practice the GenServer system means that you're probably doing a lot of sending messages to a lot of other processes[1] and waiting for responses, but it all looks and feels like function calls and returns.

Similarly I use Nerves for a hardware control application. I have an operation that's doing a multi-step bit of communication where the actual comms is handled via an IC on an SPI bus. You build your layers of driver, abstraction and then eventually application code over the top, and I end up with a really clear top level function that basically looks like:

    configure_radio()
    send_request(flags, data)
    wait_for_done_flag()
    read_response()

All of this works, doing SPI transactions and waiting for responses, and it all runs on the same VM that's also monitoring some registers, serving a live web interface for introspection, handling GraphQL requests, etc.

And then if there's an error case that I haven't handled the code that communicates with the IC will all restart and reconfigure the IC, so that transaction will fail, but the next one will likely work fine.

[1] Processes here using the BEAM terminology - a lightweight userspace scheduled thread of execution. Similar to green-threads, fibres or goroutines.


Refactoring is (or should be) caused by changing requirements, not by the class becoming "too big". Changing requirements come from many places, but tend to be more common in line-of-business applications than free-time programming IME.

That is to say: I agree with you, and I think there's some correlation between "software at the mercy of changing business requirements" and "software written in popular class-oriented languages".


The original definition of refactoring was changing code to improve its quality without changing its behavior. Going by that definition, anything you do in response to changing requirements is, by definition, not refactoring. I guess it's just come to mean "changing code" these days, but I think this is a regression in terms of expressivity.


I was being brief above. I agree with you, but allow me to elaborate: I see refactoring as one step in changing code to accommodate new requirements. The refactoring is caused by the changing requirement, but isn't enough by itself to fulfil the new requirement.

> improve the quality without changing the behavior

IMO, quality cannot be judged without reference to requirements, except in extreme cases. Within a broad range of "code that doesn't make your eyes bleed", comparing one approach to another is difficult to do without reference to how it specifically fulfils the requirements. When requirements change, they may reveal that your existing code doesn't fulfil them well, or doesn't easily generalise, can't be reused, etc.

So when you need to change the code, it may first be necessary to do some refactoring. But refactoring without a prompt, without new knowledge entering via changed requirements, is just fiddling with aesthetics.


> without new knowledge entering via changed requirements, is just fiddling with aesthetics.

or simply reducing technical debt


The requirement being "we need to maintain this thing!"


which is not necessarily a requirement that pops up just before reducing technical debt.

Most likely the requirement was there from the beginning (a serious project) or was introduced later (like a prototype that had to go to production). But refactoring can occur multiple times, at any time.


Yes, that is my point too. It is a continuous requirement. You could have a boss knock on your cube every morning and say "new requirements! If any packages need upgrading for security today, or if there are any customer bug reports today, work on them!" but it would get boring.


> is just fiddling with aesthetics.

Commonly known as 'making your code more maintainable' so that the following programmers who work on that code do not find your address, look you up and murder you for your crimes against humanity.


Every maintenance task is a requirement, in the light of which you should evaluate the quality of existing code! Maintenance isn't just something that hangs around in the air: it is made up of specific goals and activities.


Yes and no. Removing an unused parameter, for example, seems like something that's wise to do for the sake of code maintainability even if one has no specific future tasks/changes in mind. Likewise things like right-length variable names and proper separation of orthogonal concerns. The way I see it, any kind of future maintenance will involve understanding the current code, so unless you expect the code to be abandoned soon, refactoring to improve comprehensibility is legitimate. (Otherwise why even keep the source code at all? If you take the position that there's no point in maintainability without specific maintenance goals, why not keep only the compiled binaries?)


I contend there's something quite different between e.g. removing an unused argument or renaming a variable (both of which should be entirely automated, and hardly count as refactoring) and separation of concerns, which concerns the purpose, architecture, and operation of the code.

I heartily endorse tidy code, good naming, etc. I would put that, though, in a separate category from refactoring.

> Otherwise why even keep the source code at all? If you take the position that there's no point in maintainability without specific maintenance goals, why not keep only the compiled binaries?

"Access to source code" is a maintenance goal, it just usually goes without saying and is rarely an issue. It also doesn't involve making any changes to the code, so I'd say it's also in a separate category to refactoring.


> renaming a variable (both of which should be entirely automated, and hardly count as refactoring)

Renaming is absolutely refactoring, since refactoring is a process: not just individual acts, but often a series of refactorings to achieve some goal. For instance, you may be trying to make two (or more) pieces of code look the same so you can replace them with a single implementation. A stupid kind of thing I saw and addressed recently (pretend these are more useful; they're simple examples for demo):

  func some_fun(f *os.File) string {
      var contents []byte
      contents = make([]byte, 100)
      count, _ := f.Read(contents)
      return string(contents[0:count])
  }

  func same_fun_different_name(conn net.Conn) string {
      buffer := make([]byte, 100)
      n, _ := conn.Read(buffer)
      return string(buffer[0:n])
  }

These ought to be one function. They do the same thing but have different names, different names for internal variables, and different type signatures. But the types are actually unimportant: they both conform to the `io.Reader` interface. Yet the code doesn't quite look the same, so if it were longer we might have a hard time believing the two functions are actually doing the same thing. So step 1 of the refactor is to rename and reorganize the function internals so they match. Let's decide the second function's form is preferable (we flipped a coin):

  func some_fun(f *os.File) string {
      buffer := make([]byte, 100)
      n, _ := f.Read(buffer)
      return string(buffer[0:n])
  }

Repeat as much as necessary, and then you end up with one piece of code that covers both cases (and many more):

  func better_named_fun(reader io.Reader) string {
      buffer := make([]byte, 100)
      n, _ := reader.Read(buffer)
      return string(buffer[0:n])
  }

All thanks to refactoring by renaming.


> I contend there's something quite different between e.g. removing an unused argument or renaming a variable (both of which should be entirely automated, and hardly count as refactoring) and separation of concerns, which concerns the purpose, architecture and operation of the code.

> I heartily endorse tidy code, good naming, etc. I would put that, though, in a separate category from refactoring.

I don't see the distinction. In either case you're trying to change the code into some equivalent code that's easier to comprehend; separation of concerns may involve more subjective judgement than applying consistent formatting to your source code (though I've seen plenty of subjective arguments about the right way to format code), but they're both the same kind of work.

> "Access to source code" is a maintenance goal, it just usually goes without saying and is rarely an issue.

I've never seen it considered a "goal" for any usual definition of goal. Rather keeping access to the source code is something you do as a means to an end; you want to make sure you can easily achieve future maintenance goals, even if you don't yet know what those goals are. And I'd take the view that refactoring to make your code more easily comprehensible is valuable in the same way and for the same reasons.


Not in any disagreement with anything you said.


There's a narrow definition where you may refactor code to allow meeting new requirements, but it doesn't change the behavior for the existing calls.

Like, if you go from one type of sort to needing a bunch of different sorts, you might need to make the sorting pluggable, but that shouldn't change the observable behavior of the existing sort type, assuming it gets to stay. That would be a refactoring, plus whatever sorts you needed to plug in.


> for each desired change, make the change easy (warning: this may be hard), then make the easy change

https://twitter.com/kentbeck/status/250733358307500032


> Refactoring is (or should be) caused by changing requirements

This can be a compelling reason to refactor, but my experience is that refactoring is often called for without/before any requirements change. Typical cases:

- Existing requirements, and interactions between them, are better understood over time, presenting opportunities to serve them better

- Current design/APIs are brittle and adapt poorly to maintenance tasks

- Resistance to “premature” abstraction/DRY produces large scale redundancies, and consequently increases volume of bugs and time spent versus value of work

- Of course the opposite is true, with improper abstractions becoming unnecessarily entrenched and unwieldy

- Ecosystem/stdlib improvements make certain areas of custom code obsolete, or more of a liability than they’re worth


I'd argue that in all of those cases, the ultimate objective is (or should be!) to serve the requirements. Even maintenance tasks are in service of requirements - if not customer requirements, then internal or environmental ones. Performance can be a requirement too. If requirements aren't changing, the code shouldn't need to change. And if the code doesn't need to change, it doesn't need refactoring.


The objective is to serve the requirements. The code, inasmuch as it “works”, doesn’t need to change. But the cases I enumerated are ones which frequently enable me to better serve requirements than making no changes. If a given bug fix takes N hours because the underlying code is hard to understand and maintain, it only takes a certain multiple of N before it’s prudent to invest some of those hours into making future fixes possible in N minutes instead. At a certain point, precluding changes on principle is directly opposed to serving requirements.


I completely agree with you, and I'm not sure how we're managing to talk past each other. I guess in trying to strongly emphasize a position against "pre-emptive refactoring", maybe I'm coming across as being against ever refactoring or changing the code at all?


From my point of view, the comparison to object-oriented code is about how the abstractions hold up over time and over natural changes in features.

OO requires the programmer to make assumptions that create a maintenance burden. Elixir/Erlang organizes its code with modules, which leaves it less burdened by the constraints of OO. Think of the module system like a tree: you import functions that are wrapped in a module and just call them. Concurrency is baked in so cleanly that you only ever write single-threaded code inside a process.

Reducing the argument down to not having deadlines and not being used for "real" purposes is a bit reductionist and misses the point.


I use Elixir for work, and have led the adoption of it and therefore the teaching of other engineers.

I have learned that in order to get people to produce good Elixir code, the important thing is to get them to think about the data structures first and then the functions: how your information is represented on the way in and out, what your state looks like etc.

Even something as simple as parsing a request: get them to start from "OK, I have a binary as my input, and it's going to produce a map/tuple/whatever". Once they think like that (and have some understanding of which data structures work well where), the functions you need fall out quite easily, they tend to have roughly the right set of arguments, and modules tend to separate quite cleanly.


Refactoring without a strongly typed language is a pain.


That can be the case, but I find static typing to be more beneficial for refactoring than strong typing.


You forgot to add the tech debt contributed by people who either can't code well or don't care. Those two often go hand-in-hand


I don't know that there are many people in industry who can't code or who don't care. I think there are lots of people who may not share in the peculiarities of my opinions, or who need every minute to deliver requirements, and who feel that "working" is sufficient.

Is that wrong? I'm hesitant to say that it is. We're optimizing different things. Their approach is better in some situations - startups need to ship features fast, and if you don't have to touch the code again, it doesn't need to look good, it just needs to work. My method is optimized for sustaining engineering by adding up-front engineering time, which startup engineers may not have.

I think the most appropriate thing is to realize what situation you're in, and make value judgements there. If speed is required, optimize for speed. If sustainment is required, optimize for sustainment.


"The pessimist complains about the wind; the optimist expects it to change; the realist adjusts the sails."


I am surprised this isn't here already, I've used it twice already to (re)learn Erlang over a few years, and now that I'm reminded about it I'm going to read it again (I should pay this time!):

https://learnyousomeerlang.com/


My hot take: anyone who's new to the BEAM/ERTS/OTP ecosystem should just go learn Elixir instead. You get to learn Erlang for free, and you get to use a nicer build system, package manager, etc.


Am I wrong, or does it feel like Elixir has lost momentum over the last couple of years? I'd like to spend more time becoming proficient with the language and platform, but it seems increasingly hard to justify professionally over something like Go, which usually does concurrency well enough when required. Are people having success finding Elixir jobs in 2022?


I think you are incorrect.

- Go is more popular, but it's really for a separate problem space.

- The Elixir community is very active on Slack and elixirforum, and there are more jobs now than there have ever been.

The Elixir community has been focused on a few key areas:

- The Nerves project, an open-source platform and infrastructure to build, deploy, and securely manage fleets of IoT devices at speed and scale.

- https://www.nerves-project.org/

- The Phoenix Framework, Ash, etc., which have a strong emphasis on web development and generating APIs.

- https://www.phoenixframework.org/

- https://www.ash-hq.org/

- Data science / machine learning / data infrastructure

- https://github.com/elixir-nx

- https://livebook.dev/

- https://elixir-broadway.org/


How the Nerves website has absolutely zero information about what it is, and only holds very vauge points of "Scalable", "Adaptable", "Secure".

Its not until you get to their documentation you can figure out what it is: "Nerves defines a new way to build embedded systems using Elixir. It is specifically designed for embedded systems, not desktop or server systems." Why is this not front and center?

Also to find a single code example, you need to look it up online, because its only in their github repo? https://github.com/nerves-project/nerves_examples

Why do so many websites nowadays just NOT tell you what they are attempting to sell you? It's so weird and frustrating.


Seeing that it's open source... it's not really trying to sell you anything, now is it? :)

Also, the page says one thing:

    Nerves is the open-source platform and infrastructure you need to build, 
    deploy, and securely manage your fleet of IoT devices at speed and scale.
There are two CTAs on the site:

- 1) Get Started

- 2) What is Nerves?

Both then proceed to tell you exactly what Nerves is and funnel you to code examples, guides, and case studies.


> does it feel like Elixir has lost momentum over the last couple of years

Elixir is essentially feature complete now. So what you might be feeling is the slower pace of language development due to this.

https://elixir-lang.org/blog/2019/06/24/elixir-v1-9-0-releas...


As someone in the ecosystem, I've found a noticeable uptick in job postings the past few years. I haven't applied for such jobs so I can't personally attest to it.

After gaining some recent experience in Go, I'd still choose Elixir for any web project. Phoenix/Ecto/Absinthe/LiveView are all extremely productive libraries for web projects. Tooling/infrastructure work may be another discussion.


I've been using Elixir since almost the beginning and my impression is that Elixir is bigger and better than ever. When I started using Elixir, there was only one local start-up using it in my area. Now, I can walk to at least seven companies with over 100 employees that build their products in Elixir. There are also many smaller companies and start-ups using it.

Maybe Elixir isn't as ubiquitous in other areas, but where I live, I'll be able to retire as an Elixir engineer, which is awesome since it is hands-down the best language I've ever used.

For comparison, I've coded in Erlang, Java, C#, C, C++, Scala, Clojure, JavaScript, Python, Ruby, Perl, F#, Node, Go, Lisp, Elm, Scheme, PHP, Dart, Swift, Objective C, Groovy, Pascal, D, Cobol, Assembly, and CoffeeScript. I think that's all of them anyway.

If you're interested in Elixir, learn it. I'll hire you if you're good and I have any openings.


Since getting an Elixir job a year or two back and thus putting it on my resume, it has been surprising about how many messages I get for Elixir jobs. They're all over the place.


> I almost always leave with the impression that the designers did the “right thing”. I suppose this is in contrast to Java, which does the pedantic thing, Perl, which does the kludgy thing, Ruby, which has two independent implementations of the wrong thing, and C, which doesn't do anything.

This might be the best line ever I read on an HN article in recent years.


I think it’s a little harsh on Ruby. But, yeah, I agree.


I don't know anything about Ruby, but I'm curious, why so?


I feel like the quality of ruby libraries has drastically improved over the last 10 years (article was written in 2012). Lots of older libraries, Open source devs would just develop their own DSLs that solve their problem. With the advent of rails, and rubocop, and well known conventions, good modern libraries have bubbled to the top to solve specific problems and the use of those libraries now are the "convention" for solving that set of problems.


The fact that C doesn't do anything is actually the right thing for what it was designed for :) It is just an assembler with macros.


ruby, ouch



This introduced me to Erlang; not much later I found myself watching all the Joe Armstrong videos I could find and reading his thesis [0]. It is so well written and unique in its retrospective.

[0] https://erlang.org/download/armstrong_thesis_2003.pdf


Do you have any favorite talks of his you can recommend?


I personally like "The Mess We're In" https://youtu.be/lKXe3HUG2l4


Definitely! I also like "How we program multicores" [0]; it made "if you want the same guarantees Erlang gives you, you'll be in the same performance ballpark in any language, yes, C++ included" click for me.

Joe could explain the basic ideas and where they came from so concisely and humbly [1]; you can probably just binge watch this whole list, one other gem being "The forgotten ideas in computer science" [2]...

[0] https://youtu.be/bo5WL5IQAd0 [1] https://youtu.be/i9Kf12NMPWE [2] https://youtu.be/-I_jE0l7sYQ?list=PLvL2NEhYV4ZsIjT55t-kxylCU...


My broad understanding of Erlang is this:

A single Erlang node runs as a dedicated OS process. It consists of one or more Erlang processes, which are essentially Carl Hewitt's actors: very lightweight, sharing no memory, and hundreds of thousands of them can be multiplexed onto the handful of cores of a typical server. The idea is that if one Erlang "process" dies, the whole system doesn't come to a halt. There are also no stray pointers, because all communication is done through deep-copied messages, so no pointer ever refers to the memory of another process.
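A minimal sketch of that model in plain Elixir (the module and message names are made up): spawn a lightweight process, send it a message, and await a reply. The message contents are copied into the receiver's heap, never shared.

```elixir
defmodule Echo do
  def loop do
    receive do
      {:ping, from} ->
        # the {:ping, pid} tuple was deep-copied into this process's heap
        send(from, :pong)
        loop()
      :stop ->
        :ok
    end
  end
end

pid = spawn(Echo, :loop, [])
send(pid, {:ping, self()})

receive do
  :pong -> IO.puts("got pong")
end

send(pid, :stop)
```

If `Echo` crashes, only that one process dies; a supervisor (in a real system) would restart it without taking anything else down.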

Where it gets murky for me is how it interacts with databases. There's Mnesia which... I don't understand. Is it embedded? I know it can run on top of stuff like LevelDB and RocksDB, and those engines don't like you opening the same database multiple times in the same OS process. But how do you stop that from happening?

Also how do you test the whole thing? You could test individual processes I guess, but testing the interaction between them all seems fairly impossible.


You can just use your favorite SQL database :)

With the BEAM and things like ETS you can often avoid reaching for things like Redis, though.
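For what it's worth, a tiny sketch of the ETS pattern being described; the table name and key are invented:

```elixir
# Create an in-memory table and use it like a local cache.
table = :ets.new(:cache, [:set, :public])
:ets.insert(table, {:user_42, %{name: "Ada"}})

value =
  case :ets.lookup(table, :user_42) do
    [{_key, v}] -> v
    [] -> nil
  end
```

Since the table lives inside the node, reads are plain memory lookups with no network hop, which is what makes it a Redis substitute for simple cases.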


> There's also no stray pointers, because all communication is done through deep-copied messages, so no pointer ever refers to the memory of another process.

Doesn't this bog down from all the context switching and IO latency from message passing? Or does erlang have some special way of doing it


It can bog down depending on your communication patterns. Context switching isn't too bad: it's 'green threads', so there's not the OS overhead of switching threads when you switch between Erlang processes; it's not zero overhead of course. Message passing latency is mostly driven by memory latency and lock acquisition; in a big system, message queue locks tend to be uncontested, as there's enough going on that it's unusual for two processes to message the same third process at around the same time. But you can't really get away from that: regardless of your language, if you've got multiple threads of execution working on the same data, you've got contention and that'll increase latency; even lockless algorithms have contention on the atomic values, and you have memory communication between processors.

All that said, BEAM isn't chosen because of its high performance, it's chosen because it enables a simple programming model for distributed systems while being fast enough. It's gotten faster over time too; of course, so has everything else.


It's all green threads, so you don't have real context switches and the messages don't have to be passed between (real) processes. Of course there are downsides to that kind of approach: good luck calling a C library function that you have to pass a callback into, anything else where you need precise control over your OS-level threading is difficult (and while BEAM has a good reputation, there's always the risk of e.g. priority inversions happening), and if the VM scheduler doesn't handle your workload well then you're limited to whatever control they give you.


> Where it gets murky for me is how it interacts with databases. There's mnesia which... I don't understand. Is it embedded?

Mnesia is an application that, in my experience, runs on some of your Erlang nodes; it provides APIs and also a supervision tree that owns the data. Mnesia is a layer on top of key/value stores; there are some out-of-the-box stores it can use, plus bindings for LevelDB and RocksDB, I guess. When you do a Mnesia read/write/etc., that becomes (more or less) a message sent to an mnesia process, so I'd imagine the LevelDB and RocksDB APIs are only called by a specific mnesia process and there should be no attempt to use the same database from multiple Erlang processes, so you're good from a concurrency point of view.

You can use Mnesia from remote Erlang nodes, it's just sending messages, so easy peasy. However, in my use, we would typically put state management services up on the same nodes storing the data in Mnesia, so we'd send an application message to a process running on the node, which would turn that into mnesia reads/writes as appropriate, and send back a response if appropriate.

If you wanted to use an external database like some sort of SQL, you'd need to find a client and then you could interface with that like in any other language. You'd almost certainly want to put some sort of intermediate connection pool together though, cause most databases aren't thrilled with tons of connections (although there was an article today about going to 1M, so maybe). Our needs fit well with key/value though, so I never looked into that.
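To make that flow concrete, here's a rough sketch of the Mnesia API as called from Elixir; the `:person` table and record are hypothetical, and this uses the default in-RAM storage rather than a LevelDB/RocksDB backend:

```elixir
:mnesia.start()
:mnesia.create_table(:person, attributes: [:id, :name])

# Reads and writes run inside transactions and become messages
# to mnesia's own processes under the hood.
:mnesia.transaction(fn ->
  :mnesia.write({:person, 1, "Ada"})
end)

{:atomic, [{:person, 1, "Ada"}]} =
  :mnesia.transaction(fn -> :mnesia.read({:person, 1}) end)
```

Because the actual storage engine is only ever touched from inside mnesia's supervision tree, application processes never open the underlying database themselves.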

> Also how do you test the whole thing? You could test individual processes I guess, but testing the interaction between them all seems fairly impossible.

Everybody has a test environment; some people even have a test environment that isn't production. :)

Testing distributed systems isn't simple. You can test the individual processes, but the interaction between all of them can give rise to emergent behavior which is difficult to test. However, an important thing to consider is that the importance of testing is related to the cost of failures; if you lower the cost of failures enough, comprehensive testing is much less important. Isolation of processes means that in many cases the rest of the system goes on when one piece fails; of course, if that one piece is in the critical path, that's not true. Hot loading means you can update code without losing established state, which can significantly reduce the update cycle time compared to a system where you need to drain users and restart servers, let alone the longer cycles of mobile development or traditional boxed software. If you were writing Erlang for boxed software, you'd need to spend a lot more time on testing than in a messaging server.


That's the one the 20-odd guys from WhatsApp used to run live communications for 100M+ users before being bought by Facebook, right?


yep.


There are 3 different levels of concurrency:

1. No concurrency

2. Concurrency on the same machine

3. Concurrency between machines

Of course there can be more distinctions, but those are very helpful for almost every case that I had to deal with.

Erlang's BEAM really nails #3. The BEAM is just so cool; I'm surprised there are no similar products out there (if you know any, please tell me!).

Unfortunately, for #2 I really don't like either Erlang or Elixir. To nail #2, I found pure functional programming languages to be the best by far. They allow extremely precise control over concurrency on the same machine (be it single-threaded or multi-threaded), better than any other paradigm I've used (including actors, for example). My preferred language here is Scala, but others are doing really well too.

I wish I could run Scala on the BEAM, since Akka (actors on the JVM) is just not nearly as good as the BEAM.


I am really interested in learning more about the libraries and abstractions you would use for #2 in Scala (or other FP languages). Could you provide some references so I can dig deeper? Thanks!


If you want to just learn it and have no prior JVM knowledge, then Haskell is probably the best language for learning the concepts. You first want to get an idea of how to explicitly describe and control effects (using the IO type in Haskell). The next step is to handle errors (with IO and Either types), and then to handle effectful streams.

The combination of those 3 allows you to do things like "run those two processes at the same time, where one process does something every minute and the other when a request comes in. Then merge the two together, only execute one action per 30 seconds even if there are more events from the two combined processes. But if an action fails, retry it 3 times while waiting a few seconds between each try. If the 3rd try fails then, depending on the error, stall the process that runs every minute for 10 minutes and only accept events from the request-based process..."

And so on. I found that in languages that don't support those explicit (pure functional) concepts, it is almost impossibly hard to get the logic right when multiple levels of concurrency and error-handling come in.

If you have prior JVM knowledge or want to build something productive then I suggest to start with Scala and ZIO (https://zio.dev/) using the same concepts as described for Haskell.

Then afterwards, have a look into STM (https://zio.dev/reference/stm/ for Scala or https://hackage.haskell.org/package/stm for Haskell). Those essentially allow you to do things that databases often do for you (fine-grained locking), just more powerful, composable, and in-memory without database IO.

With those things you are well equipped to deal with problems of type #2.


> If you want to just learn it [...] then probably Haskell is the best language to learn the concepts

I do believe that the creator of Elixir (parent comment's poster) already knows the concepts. I might be wrong, though.


I am familiar with Scala and Haskell, but I was not familiar with ZIO. Thanks for sharing!


Oh sorry, I guess I misread your post. But yeah, Scala alone doesn't make the difference compared to Elixir; it's ZIO or alternatively cats-effect. Scala as a language just enables those libraries to exist and be used ergonomically. I don't see a reason why something similar couldn't happen for the BEAM, but there hasn't been a language that supports those concepts yet.


Discussed at the time:

Why I Program in Erlang - https://news.ycombinator.com/item?id=4715823 - Oct 2012 (93 comments)


It's interesting reading the comments referencing the then-fledgling Elixir from back then. Definitely no indication it would end up being a ton of people's introduction to OTP and the BEAM.


I always think it’s nuts Elixir was a small project Valim was working on that I tried to get some open source points for helping on in high school, and fast forward several years and I own books on it now. Wish I had something more meaningful still left in the codebase. I initially found Erlang/OTP a year or two before, printing out the pragmatic prog book in its entirety. It’s amazing how far the project has come.


BEAM seems to be designed to be very comfortable on NUMA hardware, which as cpu scaling continues to grind to a halt is likely to become much more attractive.


Elixir wasn't there when the article was written. I'm not using it, but I've read Elixir code and it looks charming.


Elixir is basically Erlang with a strong focus on developer experience. It carries through the legacy of careful design in my opinion.


I'm writing some (mostly) embedded software with Elixir and it's...nice. Takes a bit of getting used to, but no huge downsides.


I really enjoyed this article.

I am interested in parallelism and multithreading, asynchrony, and things progressing independently (concurrency).

I wrote a multithreaded Java Actor implementation. It can send messages between threads at 19-100 million messages a second depending on the variation. There are variations that generate messages in parallel and in different threads, in advance or as the program goes. If messages are created in advance it can add 1 billion integers a second. This is due to the fact that each thread can add up a different range of the integers. For comparison a single thread on my computer can add 1-1000000000 in 2 seconds. This means it doubles the performance of adding.

https://GitHub.com/samsquire/multiversion-concurrency-contro...

I also wrote a multithreaded interpreter that uses this actor implementation to support a "receive" and "send" instruction. It can also send jump instructions to other threads with a "sendcode" and "receivecode" instruction for those threads to change their execution.


Great writer & engineer. Worth a re-read every couple of years before heading back to your blub codebase.


>> Erlang does not use a contiguous chunk of memory to represent a sequence of bytes. Instead, it uses something called an “I/O list” — a nested list of non-contiguous chunks of memory.

Is this CPU cache-friendly?


An `iolist()` or `iodata()` structure is mainly designed for I/O.

The idea is rather simple but powerful.

Let's say you want to send `"Hello " + name`, where `name` is a string, over the network. Traditionally, in C, you would allocate a buffer large enough, copy the "Hello " literal into it, and then copy the `name` string, before calling `write(fd, buffer, length)`.

If you wanted to avoid allocating the buffer and doing the copy, you would use `writev(fd, iovec, count)`, and this is exactly what the `iolist()` structure allows for. When you use iolists, the Erlang runtime system (ERTS) makes efficient `writev` calls instead of having to allocate temporary buffers just to send data over the network (something Erlang is notoriously good at).
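In Elixir terms (same runtime), the `"Hello " + name` case might look like this; note that no bytes are copied until something actually needs a flat binary:

```elixir
name = "Joe"
greeting = ["Hello ", name]   # an iolist: two chunks, zero copying

# :gen_tcp.send/2, File.write/2, IO.write/2, etc. accept iodata
# directly, letting the runtime do writev-style scattered writes.
IO.iodata_to_binary(greeting) # flatten only if you really need to
```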


Definitely maybe. If your I/O list is a list of integers (the traditional Erlang string type), probably not.

If your I/O list is a bunch of binaries you got from here and there and you don't need to inspect them, just pass them along to another destination, then maybe yes. When BEAM writes an I/O list to an FD, it's going to use scatter/gather I/O and at no time does BEAM need to form a contiguous chunk of memory with all the data; what the OS does with that is up to the OS though.


> almost certainly will never win any medals for speed ... The language is slow, awkward, and ugly

I suspect the answer is probably moot.

But I also think that the answer would depend on what the bytes are being used for, how large the chunks are, etc. CPU cache utilisation is influenced by many factors, and can't be deduced from examining the data structure alone.


Probably isn't too bad. Chunks probably fill a cache line, so it's just a question of whether BEAM inserts prefetches for the next chunk, and/or the CPU prefetcher realizes we are chasing this set of pointers.

Note: I am more familiar with C++, so C++ digression here.

C++ has std::deque, which is a similar non-contiguous chunked container. Comparing it to std::vector (the contiguous container), it is really close for things vector is good at, and better at things vector is bad at, like random insert and removal.

https://baptiste-wicht.com/posts/2012/12/cpp-benchmark-vecto...


It punts that question to how you end up using it. If it lets you avoid blowing out your cache by copying huge arrays all the time, then it's more cache-friendly than using flat arrays. In most use cases you'll end up using chunks that are "big enough". It's definitely possible (unless the VM is doing something very clever) to get yourself into a pathological case where your tree degenerates into a linked list and your performance becomes awful - but that's something you can fix in how you construct your things rather than an inherent problem with the data structure.
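As a small illustration of the "construct it well" point (assuming Elixir here): building output by nesting keeps each append O(1), and you flatten once at the very end:

```elixir
# Each step wraps the accumulator in a new two-element list,
# so no bytes are copied while building.
rendered =
  Enum.reduce(1..4, [], fn i, acc ->
    [acc, Integer.to_string(i)]
  end)

IO.iodata_to_binary(rendered)   # => "1234"
```

This is essentially what template engines on the BEAM do: emit a deeply nested iodata tree and hand it straight to the socket.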


Elixir has a vector library Nx that’s designed for these use cases.


Another recent Elixir/BEAM convert here. I have touched on a few languages in recent years (Scala, Haskell, Rust) and Elixir is the first that has really made me sit up and take notice. Coming from pretty strict OOP I have found it quite hard, but ultimately rewarding. I'd also like to mention Gleam [1], a type-safe language running on the BEAM. It's equally friendly and enjoyable to use, and the community has been very welcoming. Also recommend the Thinking Elixir podcast [2], which is a joy to listen to (I am about 25 episodes in so far).

[1] https://github.com/gleam-lang/gleam [2] https://podcast.thinkingelixir.com/


In the usual case, Erlang does not use a contiguous chunk of memory to represent a sequence of bytes. Instead, it uses something called an “I/O list” — a nested list of non-contiguous chunks of memory. The result is that concatenating two strings (I/O lists) takes O(1) time in Erlang, compared to O(N) time in other languages. This is why template rendering is slow in Ruby, Python, etc., but very fast in Erlang.

This sounds like it may break cache coherence? Template rendering in Erlang may be faster than Ruby & Python but I suspect using non-contiguous blocks of memory isn't the reason.


" If I need something done, I have the excuse to do it myself, and I often make discoveries that I would not have made otherwise. It sounds dumb but it is true"

OK this isn't a discussion about the usefulness of the programming language anymore, just the author's idea of fun. Not having libraries sounds like a horrible thing productivity wise.


Last time I tried to learn Erlang I stopped immediately for 2 reasons:

- could not find a platform/runtime to host it on

- basic open source libraries are not available

It’s nice and all to be attracted to cool technology, but we also have to be aware of everything else that goes with it: developer experience, OSS libraries, community support, ease of use, and so on.


Have you checked out Elixir?

- Can be deployed to Heroku or Fly.io without much effort

- Well designed tools like mix

- There is a thriving community

- "basic open source libraries" highly depends on the context, but yes there are not as many as for more popular languages (yet)


Earlier this year we seriously considered Elixir and Phoenix for a large scale web platform (like in planning the project around this tech stack for a couple of months and even starting the implementation) but eventually decided to go with C#, ASP.NET Core and Microsoft Orleans instead.

There were many aspects considered, but if I remember correctly, the barrier to entry was way too high and new developers need to learn too many things at the same time. In this regard, the lack of a mature business-oriented ecosystem of libraries does not help. My overall impression interacting with the community was that there were either veteran programmers who had worked with functional languages for 10+ years and worked in teams of veterans (and therefore assumed you already knew plenty of things), or folks using it in toy projects.

Despite trying really hard, we didn't get the confidence that we would be productive and successful in this ecosystem as a company, so we went back home to daddy Microsoft. In my view, with the introduction of Phoenix LiveView and such, they are trying to market this tech as the easiest and most scalable ecosystem for the web, but the community fails to realize the extent to which people coming from an OOP background (especially Java, C#, or even JavaScript, which describes most back-end developers nowadays) are completely lost in this world and have everything to (re)learn.

Another aspect is that even if this platform is very scalable from a technical perspective, functional languages in general are not that great for modeling complex business domains with evolving requirements: since pure functions are the key primitive, you end up needing to update or check plenty of functions whenever you introduce or modify a feature. And when you take into account the fact that Elixir is not statically typed, you end up having to write plenty of tests to refactor with peace of mind (while a statically typed language would have provided 99% of these checks for free).

I would only use it for the infrastructure layer of a system, not to manage actual business rules and business entities. But then, this means that you need to build a separate backend and an API. This is the reason we called it quits and decided to use a stack allowing us to do everything, and boost our productivity by removing the need to build an API. We were really seduced by the messaging capabilities of Elixir, but it just didn't work for us. Not just at a technical level but at a business level.

My takeaway is that if you have a team already proficient in another backend language, whatever you may gain by building your product with Elixir may very well be lost by building your BUSINESS and your TEAM with Elixir. Even if you are building out a new team, there is a broader market of OOP developers available, and generally speaking, people have a much easier time learning programming through OOP concepts than through the mathematical abstractions backing functional languages like Elixir. Learning OOP with the abstract car and concrete cars tutorials is a breeze. From that point, learning a codebase properly modeled in an object-oriented manner using DDD principles is a breeze. Learning TypeScript or even Go was a breeze thanks to my C# background. However, learning Elixir was quite a time investment, and I am quite senior, having built a whole ERP system on my own and built and led multiple teams.

Elixir has true merits, but it is definitely not a cure-all solution, and too often people overstate how easy to use it is. It may be easy if you are already proficient with Elixir and functional programming, but it definitely isn't if you are new to this world. I'd treat it as an optimization and think carefully about whether or not one really wants to use this stack. Truly outstanding from a technical perspective, but paradoxically not necessarily the best platform to build a business on (which took us a long time to accept, but at some point we called it quits).

Believe what you see with your own eyes; do not buy what people tell you or the hype around it. If you feel relief after modeling your problem space with it, go for it; otherwise, if you feel it is not working for you, whether at a mental level or at an implementation level, follow your gut and move on. Do not waste as much time as we did. Not trying to take anything from the Elixir community, but this had to be said to counterbalance rosy comments that honestly sometimes seem to have no basis in reality. I doubt most of the people singing these praises of Elixir have actually tried to go all in with it and build a business on it.


I fully understand that Elixir might not have been the right choice for you and your team, but ...

> I doubt most of the people singing most of these praises on Elixir have actually tried to go all in with it and build a business on it.

Lots have and some quite successfully

* divvy - https://getdivvy.com/blog/why-divvy-uses-elixir/

* discord - https://discord.com/blog/how-discord-scaled-elixir-to-5-000-...

* frame.io - https://medium.com/frame-io-engineering/elixir-open-source-f...

Both Divvy and Frame.io were acquired btw -

* https://techcrunch.com/2021/08/19/adobe-buying-frame-io-in-1...

* https://techcrunch.com/2021/05/06/why-did-bill-com-pay-2-5b-...


Nice summary. I think it all boils down to the fact that 99% of projects just don't REALLY need what Erlang has to offer: they don't REALLY need five nines of uptime, hotpatching in production (actually, the requirement is mostly the exact opposite!), or individual communicating nodes. And when a project really does need that, you can get pretty far with microservices today, which, arguably, implemented half of Common Lisp, khm, I mean Erlang, and are supported by all the cloud providers. And anyway, if Stack Overflow can get by with a couple of .NET and SQL servers, most anyone can. Maybe not Google, but are you Google? :)

Erlang is just an engineering marvel, but in the real world, engineering is always a compromise.


I think the other business risk here is: can you find experienced Erlang developers to hire?


Anyone knows if WhatsApp is running Erlang or Elixir these days?


Erlang. They recently open sourced an Erlang type checker https://github.com/WhatsApp/eqwalizer and tree-sitter grammar https://github.com/WhatsApp/tree-sitter-erlang.



Needs a (2012).


Need fulfilled. Thanks!


I was going to say, I was surprised to see an article like this without at least some mention of Elixir.


[flagged]


As an engineer, you have a responsibility to care. Bad languages and bad design come with a whole host of issues, and are generally nightmares to maintain.


Good comment, so true. I do care what I and my team are programming in; I just don't care what the author of the article does.



