I've been using Elixir here for a couple of months, for an app backend and even for shell-script-type projects. Great language, no surprises, and a very responsive community -- really, my biggest wish is for more people to try it, as I think it's still relatively niche. I had no previous Erlang experience. While you don't have to write Erlang code itself, you'll over time become familiar with Erlang/OTP architecture concepts, such as GenServer, ETS, distributed nodes, and so on.
Was very happy to see this recent post from Pinterest [1] as a sign that some of the more "mainstream" teams are starting to look at it.
Also, a repost [2] of some helpful tips if you're just getting started.
How ironic, I was wondering just last night when 1.2 would be released.
I'm a newcomer to Elixir, and have really been enjoying it. As a Python->Rubyist, it's been really interesting to finally hit a functional language, and some of Elixir's most basic features just seem crazy compared to where I've come from. Some neat examples:
* Pattern matching. In other words, you don't assign things to variables, you match things. Elixir/Erlang is just doing algebra behind the scenes. I'm sure this is a gross simplification, but it's enabled me to write some really condensed code that still makes a bunch of sense.
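  For instance, plain `=` is doing a match here, not assignment:

    {status, value} = {:ok, 42}    # status is bound to :ok, value to 42
    [head | tail] = [1, 2, 3]      # head is 1, tail is [2, 3]
    {:ok, _} = {:error, :nope}     # no match -- raises MatchError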
* Streams. I know Node developers would laugh at this being a new concept, but I hit Streams when I was doing Node, and I didn't get it. Streams in Elixir feel much more self-evident, and feel much easier to read.
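  For instance, nothing in a Stream pipeline actually runs until something like Enum.take asks for results:

    1..100_000
    |> Stream.map(&(&1 * 3))
    |> Stream.filter(&(rem(&1, 2) == 0))
    |> Enum.take(5)
    #=> [6, 12, 18, 24, 30]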
* The pipe operator (|>). This effectively lets you simplify code by just passing results from one thing to the next. For example (taken from the excellent "Programming Elixir" by Dave Thomas):
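    (1..10) |> Enum.map(&(&1 * &1)) |> Enum.filter(&(&1 < 40))
    #=> [1, 4, 9, 16, 25, 36]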
So there, it takes the 1..10 range, maps the squares of each element to a list, then filters the list down to just the elements that are less than 40.
It's a surprisingly enjoyable language to program in, and best of all, coming from Ruby, the performance gains are automatic, especially once you grok some of Elixir's crazier (but easy to understand) powers.
-----
The other thing I really like about it is how friendly the community seems. I feel personally predisposed toward extremely friendly communities (Ember.js was the first community that made me feel like I had a home), and the care with which Jose Valim and the core team treat people, and the general "give back everything you can" attitude of the community, is really just inspiring.
My newest side project was something I dropped because I thought that the hardware necessary would make it not worth the effort of building it, but now I have complete confidence in it.
I encourage you to take a shot at it if you're looking for a fast, functional language with a clear syntax and easy-mode concurrency.
Ah, that wasn't my intention! Just, the Node community is probably the closest to the Ruby/Python crowd in terms of overlap, and what I assume might be reading a post from a developer like myself.
It's also worth mentioning that the Node community took a long time to get streams right, and "Streams 2.0" is still poorly documented.
For example, a major feature of the redesign is backpressure support, but it's not at all obvious how it's supposed to work, and there are no mechanisms available in the standard library to tweak it.
Streams also still have odd issues you would not expect in a mature release. (Error propagation, for example, is quite broken in practice; if you string together a series of pipe() operations, emitted errors won't propagate up the chain properly.)
I haven't looked very closely at Elixir's streams, but the fact that they are based on lazy function evaluation makes them inherently better than what Node.js offers.
Are list comprehensions still part of ES7? It's strange that Babel removed the list comprehension transformer recently, but I haven't been following closely.
The Elixir pipe syntax is reminiscent of Clojure's -> and ->> threading macros.
This is a shorthand for writing anonymous functions. The &() activates the feature for the expression inside, which can then use &1, &2, etc. to refer to the positional parameters.
For example `&(&1 + 1)` is equivalent to `fn x -> x + 1 end` .
It's the super terse anonymous function syntax. You could use the slightly more verbose version with named instead of positional params or put a named function in there instead.
We call it the "capture" operator. What it does is it captures the nth parameter of the anonymous function. Like some have already said, the number just signifies which parameter you're making a reference to in the function body. It's terse and comes in handy for ad-hoc stuff. I prefer verbosity and often write Elixir code with the standard syntax and find myself using the capture operator a lot more in the REPL.
I hear Elixir is great, but what you described I understand and still enjoy in Scala, so I don't see what makes it unique. On top of this, it runs on the JVM, so you get to use the entire history of Java libraries whenever you need it.
Your example:
    (1 to 10).map((x) => x*x).filter(_ < 40)
Of course, Scala is an extremely complex language, but I just take the features I feel are useful and/or relevant to what I'm trying to accomplish.
Another up and coming JVM language with similar capabilities to Scala is Kotlin. If it can take off on Android, I think it will become quite popular.
With Elixir, you can use Erlang libraries (or just plain Erlang) right inside of it, very similar to what you describe with Scala!
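For instance, any Erlang module in the standard library is just an atom-prefixed call away:

    :math.pi()                                        #=> 3.141592653589793
    :crypto.hash(:sha256, "hello") |> Base.encode16()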
I've never used Scala personally, and my Java programming was all school-related, so sadly I just don't have the relevant knowledge to explain the differences.
The functional aspects are not really that different. I think the key one is that Erlang has tail call optimization baked in, whereas Scala achieves something close, but not exactly there, by working around the JVM's limitations. Scala's implementation works really well when you have a function calling itself (and, with trampolining, two functions mutually calling each other), and that basically supports recursion. But arbitrary chains of tail calls (function A -> function B -> C -> D, etc.) risk blowing the stack, which makes some very simple, easy-to-understand control patterns unavailable (see Scala actors' 'become' below).
From what experience I have (played with Scala, done real dev work in Erlang) -
The lack of inheritance in Erlang/Elixir is a difference. It's one I like (and from what I've seen, the more a person uses Scala the less inheritance they end up using), but others may not.
Actors behave similarly between the platforms, but have a few differences. Scala actors (by this I mean Akka) struck me as reactive, whereas in Erlang they struck me as proactive. What I mean is that with Scala, at least when I looked at it, it seemed like you create an actor system, and then actors in it, and they do nothing until you send a message. In Erlang, you just create a new process and at any time it can pause to receive a message. That is, the 'actor' is just a thing that does work and has a mailbox, rather than this reactive interconnection of things. Fundamentally there may not be much difference, but I found the abstraction in Erlang easy to initially grok, while playing with Scala led me to some irritation and difficulty in structuring my program. That may have been just due to my own expectations rather than anything innate, but just pointing out, they are a bit different.
Scala actors also have the idea of 'become' when you want to change an actor's behavior (and in general the behavior of an actor feels more rigid to me). In Erlang, an actor is just a process running whatever code. Between that and tail call optimization, the idea of 'become' doesn't exist; if at the end of the function that is executing in the actor's process I want to perform the same action I just recurse; if I want to perform a different action I call a different function. That makes for a very simple mental model to work from; I start an actor and it immediately starts executing (though that execution might just be "wait for a message"), and it runs pretty predictably until either it gets "wait for a message", hits the end of a function (in which case it terminates cleanly), or dies. If you want it to run the same logic more than once you recurse, if you want it to run different logic you call a different function. So the 'actor' part is nothing special, no hand waving or magic, it just is a single process with a mailbox you can check, and I find that delightful to reason about; in Scala actors felt like special ~things~, that I have to configure and then set off.
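A rough Elixir sketch of that shape (module and message names are made up):

    # an "actor" is just a process running a function; to keep the same
    # behaviour you recurse, to change behaviour you call a different function
    defmodule Worker do
      def idle do
        receive do
          {:work, job} -> busy(job)
          :stop        -> :ok          # hit the end of the function, terminate cleanly
        end
      end

      defp busy(job) do
        IO.puts("processing #{inspect(job)}")
        idle()                         # tail call back into the same loop
      end
    end

    pid = spawn(&Worker.idle/0)
    send(pid, {:work, :resize_image})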
Performance is a difference; while the JVM is more performant in sheer number crunching, Erlang has a memory model that better fits actors: each process has its own small heap and is collected independently, so there is no stop-the-world garbage collection. In general, barring something like Azul (which I have no experience with), you'll see a smaller deviation of latencies in Erlang (useful for web servers and the like, where every call taking ~5ms is better than having some calls return in 1ms and others in 120ms).
Scala borrowed a lot of the distribution and fault tolerance mechanisms from Erlang, but I don't like them as much. They feel a bit more bolted on, and the multi-paradigm mixing means a greater degree of rigor is required to ensure you do things properly. That said, Scala seemed to provide a different level of abstractions for supervision strategies, that I didn't really delve into.
The availability of all the JVM libraries is a double-edged sword for Scala; while there probably is a JVM library for whatever you want to do, it also probably behaves in ways that won't play nice with your actors, possibly forcing you to deal with multiple concurrency, distribution, and fault tolerance models simultaneously. In Erlang (in which, I might add, I've found libraries for basically everything I wanted to do that wasn't something extremely esoteric, like talking to a piece of hardware that is only used in (X) business domain... for which there were also no Java bindings, only C), unless a library is calling out to external code (NIFs and the like), it will at least be informed by the concurrency model you're using (since with Erlang it's actors or nothing), though there may be assumptions and expectations the library makes that you will need to understand or modify.

This generalizes to the language as a whole, too; in Scala it's very easy to make tradeoffs that will end up hurting you. In fact, it's practically encouraged, as the language is often sold as a way to slowly move into a more functional, safely concurrent world. The problem I have with that is that without being forced to move, a lot of devs won't, and mixing paradigms gets you complexity that is usually unnecessary, and oftentimes leads to you not seeing the benefits you'd have gotten had you limited yourself to one.
Another fault tolerance difference (that won't affect you 99% of the time) is that Erlang has better process isolation. An uncaught exception in one process won't affect another one, unless they've been linked together or otherwise been entwined such that they should cause the other to crash. Scala tries to do the same, and largely succeeds, but it doesn't have the same technical underpinnings due to the nature of the JVM.
The profiling/tracing ability of the Erlang VM is better than the JVM (yes, really). The ability to attach a remote shell to poke and prod your running instance is also flat out amazing; I don't recall if Scala had anything similar. I always found Erlang apps to have a fairly small footprint on my box, whereas my experience with the JVM (though mostly using Java) saw them tending to take a fair amount of resources.
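The remote-shell bit is roughly this (node name and cookie here are placeholders; use whatever your running app was started with):

    iex --sname console --cookie secret --remsh app@myhost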
I'd stress the fact that 90% of network-interacting Java code is blocking and is therefore not suitable for use with Scala's concurrency model. Worse, it will work fine in testing and under small workloads, but under load you quickly run out of threads in your execution context and find yourself with a broken production system.
Erlang differs for two main reasons. First, BEAM has a scheduler that will prevent blocked processes from tying up OS threads. So even if a library only supports synchronous calls, it will still work without interfering with other parts of your application. Second, everything is culturally designed around OTP (the standard library for concurrent applications). Libraries tend to support async modes in ways that jive well with the rest of your codebase. There aren't really competing standards like Akka vs Finagle in Scala. Again, even if the code does not support async it will still work. Async is just an optimization (not just for performance, but the wide swath of runtime inspection tools work better on async designs).
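A sketch of what that looks like in practice (the URLs are placeholders; :httpc is the synchronous HTTP client that ships with OTP): each blocking call just gets its own lightweight process, and the schedulers juggle them.

    :inets.start()

    ['http://example.com/a', 'http://example.com/b']
    |> Enum.map(fn url -> Task.async(fn -> :httpc.request(url) end) end)
    |> Enum.map(&Task.await/1)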
I'm actually in the process of writing a scraper using Akka. It's quite complex at first glance but I'm hoping I'll be able to use the main features without digging too deep.
I appreciate the in-depth comparison between the VMs. I guess I need to dive into Erlang eventually.
I write both Scala and Erlang/Elixir professionally.

I like the idea of Scala's type system (the implementation leaves much to be desired, but...), and it's definitely nicer writing process-heavy tasks in Scala with its wider access to libraries and the performance benefits of types and the JVM, but Erlang/Elixir are vastly, vastly superior for writing services and long-running tasks that are mostly I/O bound.

Concurrency and parallelism are the core of Erlang and everything is oriented around that. There are blocking APIs in Erlang/Elixir, but they are mostly implemented as wrappers around asynchronous APIs. Akka is a noble attempt, but it's a very poor facsimile of Erlang's actor model.
Just want to chime in reiterating everyone's thoughts already here that this language is fun, interesting, and worth your time investment to investigate.
The biggest hurdles I had coming from OO Ruby were 1) how to handle state, since you can no longer just hang information off any arbitrary object's attributes, 2) pattern matching (but now that I grok it, I love it; it's so useful and leads to more concise code), 3) lack of inheritance (although oddly, I don't seem to miss it, it just leads to a somewhat different code design), 4) OTP semantics (which, after you climb the learning curve, make a lot of sense from a resiliency standpoint).
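On point 1, the usual answer is to keep state inside a process. A bare-bones sketch (the Counter module name is made up):

    defmodule Counter do
      use GenServer

      # client API
      def start_link(initial \\ 0), do: GenServer.start_link(__MODULE__, initial)
      def increment(pid), do: GenServer.cast(pid, :increment)
      def value(pid), do: GenServer.call(pid, :value)

      # server callbacks -- the state lives inside the process, not on an object
      def init(initial), do: {:ok, initial}
      def handle_cast(:increment, count), do: {:noreply, count + 1}
      def handle_call(:value, _from, count), do: {:reply, count, count}
    end

    {:ok, pid} = Counter.start_link()
    Counter.increment(pid)
    Counter.value(pid)  #=> 1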
There are a number of neat little details not yet mentioned or emphasized here such as full Unicode support, full-fledged macros that give you full AST access (in a non-homoiconic language, that is a rarity!), custom sigils (http://elixir-lang.org/getting-started/sigils.html), the ability to easily call into any Erlang library, the fantastic :observer.start() utility for visually observing tons of details about a running pid hierarchy, etc.
One possible hurdle unique to languages that feature message passing between independent processes (pids, or process IDs) as a core feature: once you have a pid hierarchy, it seems to me that inevitably one of the processes will build up a backlog of messages, requiring you to either apply back-pressure techniques (basically, slowing down upstream synchronous senders by slowing down replies) or pursue some other strategy (code refactoring, etc.), since mailboxes are unbounded and a pid's inbox that keeps growing under load eats memory rather than discarding messages.
> the fantastic :observer.start() utility for visually observing tons of details about a running pid hierarchy, etc.
observer is a rather nice tool. It is, however, not an Elixir feature, but something that ships with Erlang. [0] As you mentioned, you can call any Erlang code in Elixir (and vice-versa!). IIRC, you do the former by prefixing the Erlang module name with a ":". So ":observer.start()" would be written in Erlang as "observer:start()" and do exactly the same thing. :)
[0] Not that you claimed that observer was an Elixir invention, mind. The whole point of this comment is to point out how easy Erlang <--> Elixir interop is. :)
I'd be curious to hear some criticism, negative experiences, downsides from people with deeper experience. This thread is 100% positivity and praise, which is highly unusual for HN (New Year's afterglow??)
To be clear, my occasional dabbling in Elixir has yet to reveal any major shortcomings so this isn't an elephant in the room kind of situation, just a genuine request from people whose thoughtful opinions I generally appreciate.
Elixir usually receives this much praise, so it's not unusual. Elixir (and Phoenix) are really great. Phoenix is probably my favorite backend framework I've ever used - and I've used a lot. And the community is wonderful.
Now, before I sound too much like Trump, there are some downsides, as always. It is a dynamic language, which can be a downside for some. (Type-checking is possible though.) It's not that fast with pure number-crunching, so it's best for distributed, networked applications. The syntax isn't always quite as nice as Ruby. And the language is relatively complex (more than Go, much less than Ruby/PHP/Scala).
From my experience the positivity is not unwarranted, Elixir is a great, young language with an awesome community (as most new languages have). The creator, Jose Valim, comes across as a very nice person and I feel that a lot of the positive attitude in the community stems from this. As to the language itself, it feels well designed and fun to use (the pipe operator is really cool). Plus it runs on the Erlang VM which is an incredible piece of engineering. Overall a great next language to pick up and experiment with, plus it has reached a good level of maturity so you won't see your code break from one release to another.
We are rewriting an internal Rails application in Elixir as a test project. I like what I see so far. Elixir/Phoenix promises to be like Rails but 10x faster, with 1000x less memory use and better concurrency. Our preliminary data is that it does deliver that.
But there are definitely problems.
The only one that really concerns me is Ecto and its integration in Phoenix. It makes simple things hard and hard things impossible.
More generally, I don't get the feeling that Phoenix was "extracted from a production web app" like Rails was. With Rails you knew there was at least one app, Basecamp, that worked on top of it. With Phoenix I am not so sure. This is a very preliminary opinion, but first impressions matter.
This specifically applies to Ecto and its Phoenix integration. The rest of Phoenix seems perfectly nice, and fixes a lot of Rails' warts.
The rest of the downsides are not a big deal, and time will fix them:
There is no installed base to speak of. You'll be the first one to run into many problems. 3rd-party libraries are not there / not mature. It's missing a lot of basic scripting-language functionality (e.g. wrappers over libc functions). Some code comes out verbose and hides the intent (though most of it is surprisingly nice, often as good as or better than Ruby).
I would love to hear more about the Ecto/Phoenix integration and what feels hard and what feels impossible. Feel free to shoot me an e-mail or ping me on IRC.
I don't think the point of Phoenix is to be like Rails. It always struck me as being aligned more toward something like Sinatra.
One thing Elixir definitely needs is a killer app or framework that really plays to the strengths of its Erlang underpinnings. A web framework is probably not that app/framework.
Those already exist with extreme maturity in other ecosystems. Scaling the web framework part is pretty easy since, unless you royally screwed up your design, you should be able to just add more stateless web heads to service more requests and you almost always should end up ultimately bottlenecked at the persistence layer, not in the request router/processor.
I just don't know what that killer app/framework will be.
Phoenix is much closer to Rails in terms of its goals than it is to Sinatra. It's a full-featured framework with asset handling, a recommended file layout, template discovery and compilation, generators, etc.
As far as I can tell, most projects one might use Sinatra for are done in Elixir by just working with Plug directly (Plug being the Elixir analogue to Ruby's Rack). I'm sure there are many lightweight web frameworks for Elixir out there, but I haven't seen any with Sinatra's level of ubiquity.
I think a lot of it is Elixir's niche, too. Unlike most of the languages talked about on HN, the BEAM makes no bones about what considerations it's made, and what sorts of problems it is and is not intended for.
While there are a few just general warts to be found, they tend to either be obviously surface level things ("I don't like ~this~ bit of syntax"), or so deep you are unlikely to hit them (the issue with large binaries being stored off heap, and reference counted, and references only being collected when an individual process heap collects, leading to certain edge cases where large binaries remain uncollected even when they should be collected, leading to a memory leak that can eventually OOM you, for instance. See https://blog.heroku.com/archives/2013/11/7/logplex-down-the-... ).
But aside from that, basically everything is intentionally considered and sensible from a "fault tolerance" perspective. And that makes it hard to criticize the language/platform; for a general purpose language, anything that doesn't jibe with your use case you can complain about. For a biased language like this, intending to solve a specific kind of problem, any decisions the language makes that don't conform to your problem domain are issues with you trying to use it for your problem domain, rather than issues with the language.
Example: The BEAM isn't fast at pure number crunching; rather than complaining about how slow it is, anyone looking at a problem that requires a lot of number crunching will, rightfully, say "This is not the tool for the job".
So for those problems where it -is- the right tool for the job, the cohesion between tool and problem is excellent, because the language doesn't try to be all things to all people.
You could do a lot of fast number crunching if you wanted to. It just gets tricky to implement without screwing with the primary scheduler pool.
I've written many NIFs for Erlang that call out to high-performance C routines when "number crunching" is most important. I've also done it to side-step memory allocator churn for implementing binary protocols that support zero-copy semantics like Capnp by using the NIF as a kind of escape hatch where I can fiddle bits in a buffer directly.
In fact, somewhat lazily since it's low priority, I'm trying to create a decent Rust wrapper for the NIF interface so that I can attempt to get more compile time safety out of the dangerous stuff I do outside BEAM's safety net.
Other than being dangerous, since a segfault will bring down the whole VM, the other big issue is these calls block the scheduler (basically breaking the VM's preemptive abilities and creating the same limitations Akka on the JVM has when you block all the threads in an ActorSystem). However, now that R18 has dirty-scheduler support it's much easier to avoid this problem by quarantining that stuff on threads that don't muck with executing normal bytecode.
You're not using Erlang/Elixir for number crunching; you're using it for orchestration of your C code.
That's my point; in a niche language, you only try to use it to solve the problems it claims or you have reason to believe it to be good at, thus, its limitations (both declared and any you run into outside of the claims it makes) are dismissed with "you're using the wrong tool", a fact the developer usually realizes themselves, and does not treat as a deficiency of the language.
If you had reason to believe Erlang was good at number crunching, and wrote code to crunch numbers in it, and then found it wasn't, you'd complain about how slow it is. Because you never had that expectation, because Erlang flat out says it's not good at that, you knew to instead write NIFs. The thought "this language should be faster" never entered your mind, because that kind of performance is not the goal of the language. Instead you used it where its reliability and scalability come into play, things it is good at, and offloaded the number crunching to a language better suited for it.
That's a fair point. Though the thought, "this language should be faster" has crossed my mind many times during the pain of all that other mucking about I had to do :-)
I'm timidly hopeful that projects like ErLLVM and BEAMJIT will eventually produce enough improvement to BEAM performance for computational workloads that using FFI escape hatches only becomes necessary in the most extreme fringe of circumstances.
The fact that the NIF interface is so damned straight forward to work with compared to FFI implementations in other systems does ease the pain a bit of having to step out to consume it more often than I might its analogue in other languages.
I write both Erlang and Elixir. One negative thing I'll say about Elixir is that it sometimes feels like a Ruby-shaped DSL obfuscating the Erlang I would otherwise be writing more clearly.
I don't like optional syntax. Like dropping parens for argument lists, etc. Elixir adapted this quality of having varieties of optional syntax from Ruby-land, and I think it makes things harder to read and writing is sometimes ambiguous. So for my part I just avoid it and write out the explicit form every time.
I also think the weird variable rebinding doesn't really solve an interesting problem, but it also sometimes ends up making things more confusing when reading code in projects written by people who leverage it all the time. I happen to like explicitly naming reused constructs with different names each time they're bound: 1) it makes it explicit which version of a thing you're working with (e.g. Time1, Time2, etc.), and 2) it makes writing the code a lot more similar to how one might sketch up a more formal construction (e.g. T, T', etc.). So here again I just write out the explicit form like I would have in Erlang. To me the very existence of the "pin operator" is kind of an indicator that rebinding was a weird design choice to solve what seems like a non-problem.
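For the unfamiliar, the behaviour in question:

    x = 1
    x = 2      # rebinding: perfectly legal in Elixir; x is now 2
    ^x = 2     # pin operator: match against the *current* value of x -- succeeds
    ^x = 3     # ** (MatchError) no match of right hand side value: 3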
Lastly, there's a fair bit of boilerplate in Erlang when using the OTP patterns, and in Elixir there are often metaprogramming constructs that attempt to provide some sugar or shortcuts to mitigate that. But I find this is really hard for newcomers trying to understand how something like, say, Supervisors work, because in Erlang everything that's going on is explicit and well documented, while in Elixir the connections and relationships between concepts, callbacks, etc. aren't quite as clearly spelled out, and the docs don't really help much (in fact they make it more confusing, because at the same time they introduce Supervisors they also introduce GenServers, giving the false impression that those concepts are tightly coupled). I often point learners at the Erlang docs and give some instruction around OTP concepts in Erlang so that they have an easier time mapping them in Elixir.
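For comparison, the sugared Elixir side looks roughly like this as of 1.2 (module names are made up; worker/2 and supervise/2 are pulled in by `use Supervisor`, which is exactly the hidden wiring I mean):

    defmodule MyApp.Supervisor do
      use Supervisor

      def start_link, do: Supervisor.start_link(__MODULE__, :ok)

      def init(:ok) do
        children = [
          worker(MyApp.Worker, [])   # MyApp.Worker: any module with a start_link/0
        ]
        supervise(children, strategy: :one_for_one)
      end
    end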
That said there are a ton of things I really like about Elixir. Starting with Mix. Oh my god how I love Mix. Having the language come bundled with an easy to use build, dependency, and release management tool is unbelievably nice. I also really like the fact that a lot of effort went into organizing the standard library so it's coherent. For the most part things are pretty discoverable. Functions that take state or context to modify pretty much always have it passed in in the same location in the argument list. Whereas in Erlang it's different depending on which module you're in, and sometimes not even consistent inside that same module. Also it's nearly impossible to know where to go look for a particular piece of functionality in Erlang due to the way the organization of the standard library accreted over many many years.
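For anyone who hasn't seen it, the dependency side of that is just a function in mix.exs (the package names here are only examples); `mix deps.get` fetches them and `mix compile` builds everything:

    defp deps do
      [{:cowboy, "~> 1.0"},
       {:poison, "~> 1.5"}]
    end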
I've really enjoyed adding Elixir to my stable of tools, and there are a couple of projects at work that will definitely be made significantly richer by the fact that they're built in Elixir.
Oh, and I also really miss that variables in Erlang all begin with a capital letter. If Erlang had Elixir's atom syntax, or Elixir had Erlang's variable syntax, I'd be in code heaven.
Then functions, variables, and atoms would all be trivially easy to identify at a glance.
Wow, already home brewed too! (OSX users can upgrade with `brew update && brew upgrade elixir` and it just magically works -- because awesome people have made it so)
What a great way to ring in the new year!
Happy New Year everyone!
This is the best New Year's gift I could hope for! :) I've been using and slowly switching toward Elixir for the last few months. It is fun, refreshing, and enjoyable to work with, and the community is very welcoming and a pleasure to be part of.
There are two really great additions to Elixir 1.2: multi-alias, and matching variables in map keys.
Previously I found myself wanting to alias a lot of things, like:
    alias Foo.User
    alias Foo.Email
    alias Foo.Location
Now with multi-aliasing I can do it all in one line:
    alias Foo.{User, Email, Location}
That's just a cool syntactic sugar kind of thing that maybe saves a few lines at the top of the file. But the map key matching is great, and something that I've frequently missed up until now.
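The map key matching bit, concretely, is being able to pin a variable as a key in a match:

    key = :email
    %{^key => value} = %{name: "jose", email: "jose@example.com"}
    value  #=> "jose@example.com"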
In today's Who's Hiring thread, there is only one mention of "elixir". I quite like that, as Elixir still has the aura of something new for its own sake, along with the type of people that attracts.
My company will be hiring soon and is using Elixir for backend services. Knowing Elixir is just a plus and any competent engineer can pick it up in a week.
However, finding Elixir jobs is harder if you're the seeker... I agree.
Almost. You definitely have a nice REPL; however, I find the workflow of Clojure way better.
The problem, I think, lies in the fact that Clojure groups functions inside very flexible namespaces, while Elixir uses more rigid modules.
However in my limited experience the way you code Elixir feels very different from the way you code Clojure.
It's difficult to explain, but I would say that I follow my gut more while I code Clojure and am more reflective while I code Elixir; code in Clojure supports concurrency and parallelism, while in Elixir concurrency is the norm... Not sure if I was able to explain it...
> If you are unsure if a post is readable, why not just rewrite it?
One can grapple with several ways to express something, and be unsure which one is the best. Sometimes just publishing is better than being forever stuck in an indecision loop. :)
The questions that arise from the confusion can tell you how to better structure your words.
Nope, I feel that concurrency is the norm in Elixir, while it is merely supported in Clojure. Let me explain better.
In Clojure you have an idea of the order in which each function is called on what data, i.e. you read your data from IO, you clean it, you analyze it, and finally you return the result to IO.
This code is sequential; granted, you can ask the IO for the first batch of data and, while you wait for the second batch, start to analyze the data you already got (concurrency), and/or spawn a lot of map-reduce jobs to use all your processors to analyze the data (parallelism).
In Elixir you define a lot of independent processes (you can see a process as data plus functions that act on that data) and each process runs concurrently with the others; this creates a lot of new problems but is also very powerful.
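A toy sketch of what those independent processes look like (the :analyze message shape is made up):

    pid = spawn(fn ->
      receive do
        {:analyze, data, from} -> send(from, {:result, Enum.sum(data)})
      end
    end)

    send(pid, {:analyze, [1, 2, 3], self()})

    receive do
      {:result, total} -> IO.puts("got #{total}")
    end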
> If you are unsure if a post is readable, why not just rewrite it?
Actually, I was in the car when I wrote the message; this is the first chance I've had to get back on my computer.
You are the only one who felt the need to comment on the post asking for more clarification; it means that either people don't care or that people did understand what I was saying. I am just glad that you asked so I can clarify myself and explain better ;)
[1] https://engineering.pinterest.com/blog/introducing-new-open-...
[2] https://news.ycombinator.com/item?id=10278870