
GUI is not that hard; there are a couple of examples out there. After all, it is just dealing with some binaries.

For GPGPU, video/audio codecs and all that ... you want single-threaded speed. And there Erlang says "I'm OK with that" and lets you use Native Implemented Functions (NIFs), which let you drop down into Rust, C and co. Or you can use what is called a Port and interface with a program outside of Erlang in a safe way. Works quite well.
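For illustration, a minimal sketch of the Port side; the external program "./simulate" is hypothetical and is assumed to speak 4-byte-length-prefixed Erlang external term format:

    -module(glue).
    -export([run_external/1]).

    %% Run a hypothetical external binary as a Port. If it crashes,
    %% the VM is unaffected; we just receive an exit_status message.
    run_external(Params) ->
        Port = open_port({spawn_executable, "./simulate"},
                         [binary, {packet, 4}, exit_status]),
        port_command(Port, term_to_binary(Params)),
        receive
            {Port, {data, Bin}}         -> binary_to_term(Bin);
            {Port, {exit_status, Code}} -> {error, {exited, Code}}
        end.

Even if "./simulate" segfaults, the Erlang node keeps running and the caller just sees the {exit_status, Code} message.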

Erlang is then used as a "concurrent and distributed glue" between your programs. There are a couple of scientific simulations in the wild that use Erlang like that.




Is there some reason Erlang had to be slow for single-threaded things? Mixed-language programming isn't desirable for many of us. I suspect Erlang could be fast with just a few changes, but it never will be.


Some.

1) It was never seen as a problem, because the Erlang team believes (and I agree with them) that you are best off letting specialists deal with their specialty area.

2) There has never been an efficient JIT on the BEAM. This is something the OTP team looks at regularly, but it would be a massive effort just to approach the efficiency of other languages that you can already interact with easily.

3) The use cases are small. Let's be honest: the number of problems that are truly CPU-bound in day-to-day work is limited.


My last project saturated a cluster of computers for days at a time. Not everyone writes CRUD screens for their day-to-day work, honest.

Parallelism is about performance, and it seems contradictory to do parallelism well on top of a slow language. Yes, Erlang can call out to other languages, but so can shell scripts.


Erlang was built as a concurrent language for fault tolerance, not performance. The best example of a trade-off I can think of is the preemptive scheduling: Erlang switches between processes very often, which has a performance cost but ensures the system stays responsive even if some tasks are doing intensive work (or are even stuck in an infinite loop). Every function call, allocation, etc. increments a counter that is used to decide when a process is scheduled out, so that's significant overhead you pay to get a more predictable system.
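A toy sketch of what that buys you: even a pathological busy loop in one process cannot starve the rest of the node, because plain-Erlang recursion itself consumes reductions and gets preempted (module and function names here are just for illustration):

    -module(preempt_demo).
    -export([demo/0, spin/0]).

    %% A pure-Erlang busy loop: every recursive call costs reductions,
    %% so the scheduler can still preempt the process.
    spin() -> spin().

    demo() ->
        spawn(fun ?MODULE:spin/0),       % this process spins forever
        timer:sleep(100),
        io:format("other processes still run~n").

(A NIF that loops without yielding would not give you this guarantee, which is one reason long-running NIFs need care.)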

Some other examples are live code loading, which means you can upgrade a system without restarting or interrupting it; the extensive tracing and statistics; etc. All these little things make it very nice to build and maintain a long-running service, but they add up to make the VM slower. If you're doing batch jobs and have no realtime requirement, then the trade-off may not be so attractive.
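The building block behind live code loading is small: a loop that recurses through a fully qualified Module:Function call picks up a newly loaded version of the module on its next iteration. A minimal sketch (the "counter" module is just for illustration):

    -module(counter).
    -export([loop/1]).

    %% The fully qualified ?MODULE:loop/1 call is what enables hot code
    %% loading: after code:load_file(counter), the next message is
    %% handled by the newly loaded version of the module.
    loop(N) ->
        receive
            bump        -> ?MODULE:loop(N + 1);
            {get, From} -> From ! {count, N}, ?MODULE:loop(N)
        end.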



