
You'll feel instantly at home in Phoenix, since you know more or less what MVC, migrations, ... are about.

Then you'll dig deeper into the actual syntax (a week for Rubyists before it clicks?). Don't give up, it's awesome.

Then you'll be productive right away. And then you'll discover what OTP actually means in practice and be blown away. (Point of no Return)

And then you fully "get" what channels, distributed systems, umbrella-refactorings and all the latest talks are about.

Phoenix as a whole is so advanced compared with everything else we currently have that it's not even funny. I highly recommend the official book; it's one of the best web dev books I've ever read.




> And then you fully "get" what channels, distributed systems, umbrella-refactorings and all the latest talks are about.

I used Erlang on my last project, and it was an awesome fit for what we were doing. It's a really great system.

That said... "don't drink too much kool-aid" - Rails is fine for lots of things, and continues to be fine. Not everyone needs a fault-tolerant, distributed system, or web sockets or some of the other stuff where Erlang is a win.


I agree with the "don't drink too much kool-aid" but you can also completely develop and deploy Phoenix without worrying about distributed systems, umbrella-refactorings, and so on. At least, I have. :)

This has been the whole appeal behind Phoenix: it truly is as productive as Rails/Django but much more performant. Better throughput, faster boot times, faster tests, etc. Fault-tolerance, channels and presence are the cherry on top.

I have seen Chris's keynote about the presence implementation, and they seem to have used really advanced techniques that I cannot quite follow. Still, this week we are going to deploy presence into production, and all we really care about is that it just works (and without Redis or any other dependency).


> it truly is as productive as Rails/Django but much more performant

I'd be a little bit skeptical of both things: Rails has a huge, mature ecosystem, and Erlang is often faster than Ruby, but is not a "fast" language.

http://benchmarksgame.alioth.debian.org/u64q/compare.php?lan...

Where Erlang really shines is that it runs in a single Unix process, yet can handle many things going on at once thanks to its scheduler (and without ugly code, because it's the scheduler doing the work, not the programmer writing spaghetti). This means that generally, it's going to handle more connections better and degrade more gracefully. This is a huge win for web sockets, where you have a connection sitting open without tying up a huge Rails process. Although even there, I guess the Rails guys are working on some new stuff that should help alleviate that problem.
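To make that concrete, here's a minimal sketch (not tied to Phoenix; the process count and sleep are arbitrary stand-ins for idle connections) of what "many things at once in one OS process" looks like on the BEAM:

```elixir
# Spawn 100,000 lightweight BEAM processes inside a single OS process.
# Each one sleeps (simulating an idle open connection), then reports back.
parent = self()

pids =
  for i <- 1..100_000 do
    spawn(fn ->
      Process.sleep(100)       # pretend this is an idle websocket
      send(parent, {:done, i}) # report back to the spawner
    end)
  end

# Wait for all of them; the scheduler multiplexes these processes
# onto a handful of OS threads for us.
for _ <- pids do
  receive do
    {:done, _i} -> :ok
  end
end

IO.puts("handled #{length(pids)} concurrent processes")
```

The code stays straight-line: there are no callbacks or manual event loops, because the scheduler, not the programmer, deals with the interleaving.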

Erlang is cool stuff, and well worth a look. I certainly enjoy working with it. But be wary of throwing out Rails because "new! shiny!"


Jason's coworker here.

The Erlang benefits you have mentioned above also fully apply to regular web requests and applications. One of Ruby's biggest flaws (and consequently Rails') is poor support for concurrency. Phoenix performs well on all the aspects Jason mentioned because running tests, serving data, etc. are all done concurrently.

I was also skeptical at first. I've heard talks and benchmarks reporting Phoenix is 10x to 20x faster than Rails (like this one: https://gist.github.com/omnibs/e5e72b31e6bd25caf39a). After porting our authentication service to Phoenix (our first), we saw similar gains with less CPU and memory usage (as well as better response time averages and 99th percentiles). Our deployment infrastructure for this service is now one-fourth of what it was originally.

Other than that, Rails definitely has a huge ecosystem, and that should be taken into account by those planning to move to Phoenix. Honestly, it has not impacted us in any significant way, but YMMV.


Very true, but what I particularly like is the raw speed: having a JSON API with an average response time in microseconds (i.e., sub-millisecond) is super cool and goes a long way toward giving a web app that native-app feel.

To be able to get that without sacrificing productivity is huge.


How are you accomplishing that? Most databases take at least that long to get data back to your application.


Not necessarily. If the data is in the database's cache, it can be pretty fast. I just tried `explain analyze select email from users where id = 10` on my local Postgres, and while the first query took 20ms, the second one only took 0.065ms.


Not every API needs to hit a database for every call. This is especially true for Elixir/Phoenix because if you don't need your data to be persistent over a server failure, you can just store it in an OTP process/ETS table and keep it in memory.
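A minimal sketch of that ETS approach (the table name, key, and session map are made up for illustration):

```elixir
# An in-memory key/value table that lives inside the VM, so reads
# never leave the node. Data is lost if the node goes down, which is
# fine for caches, sessions, and other reconstructible state.
table = :ets.new(:session_cache, [:set, :public, read_concurrency: true])

:ets.insert(table, {"user:10", %{email: "alice@example.com"}})

# Lookup is a direct memory read, so no database round trip at all.
[{_key, session}] = :ets.lookup(table, "user:10")
IO.puts(session.email)
```

For state that should survive process crashes but not node restarts, the same idea is usually wrapped in a supervised GenServer or Agent that owns the table.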


It's also worth pointing out that the newest Ecto (released this week) has features that let you run multiple database calls at once: each one checks out its own connection from the pool, and the results are assembled at the end. So even rather heavy calls effectively overlap instead of adding up.
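The overlapping can be sketched with plain `Task.async/1` and `Task.await/1`; here the 50ms sleeps stand in for hypothetical database calls (real code would invoke `Repo` functions, each using its own pooled connection):

```elixir
# Three "queries" of ~50ms each, run concurrently.
# Sequentially they would take ~150ms; concurrently, roughly ~50ms,
# because the waits overlap instead of adding up.
fake_query = fn name ->
  Process.sleep(50) # stands in for a round trip to the database
  name
end

{micros, results} =
  :timer.tc(fn ->
    [:users, :posts, :comments]
    |> Enum.map(fn name -> Task.async(fn -> fake_query.(name) end) end)
    |> Enum.map(&Task.await/1)
  end)

IO.puts("#{length(results)} results in #{div(micros, 1000)}ms")
```

Since each task is its own BEAM process, a failure in one query crashes only that task, not the request as a whole (unless you choose to let it).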



