The two major things I stumbled on when I first started using Pony were the lack of blocking constructs in the language and dealing with capabilities.
I've had exposure to memory-safe languages that use types for safety before (ATS, Rust), so Pony's system of capabilities wasn't too difficult to understand once I was able to map it to things I was familiar with. Before that, I struggled to understand why things weren't compiling. Now that things have somewhat clicked, I'm pretty productive with the language.
Dealing with writing non-blocking code was a matter of thinking in terms of callbacks and promises, something familiar to a lot of Node programmers, I suspect.
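As a rough sketch (the actor and behavior names here are my own invention), the callback style looks like this: instead of blocking for a result, you hand the worker a reference to call back on when it's done.

```
actor Worker
  be compute(x: U64, replyto: Main tag) =>
    // do the work, then deliver the answer as an asynchronous message
    replyto.result(x * 2)

actor Main
  new create(env: Env) =>
    let worker = Worker
    worker.compute(21, this) // returns immediately; nothing blocks

  be result(answer: U64) =>
    None // the answer arrives here whenever the worker gets to it
```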
Once I got past those two things, I found the language very nice to write in and the code easy to reason about. Sometimes I hit walls struggling to work out why something won't compile, and it usually turns out the compiler is showing me that what I was trying to do would be unsafe due to data races and the like.
The language is still young, though, so it's definitely something to approach with the understanding that things may break.
In addition to being a member of the core team, I work at a company that uses Pony, and working with it every day has been a pleasure. There are plenty of rough edges to work out, but the compiler has stopped me from introducing subtle data races into our code base on more than one occasion (more like at least once a week, if not more).
We know the value of constraining what references can do from Rust (`mut` annotations, etc.). Now imagine you could annotate those references even further, describing how they specifically behave with respect to concurrency. To me, that is where Pony shines. We'll see more of this in other languages.
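As a rough illustration (the type here is my own invention), the capability is written into the type itself, so the compiler knows exactly how each reference may be used:

```
class Counter
  var _n: U64 = 0

  fun ref inc() =>
    // requires a mutable (ref) receiver to write the field
    _n = _n + 1

  fun value(): U64 =>
    // default box receiver: read-only, callable through ref, val or box
    _n
```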
Most message passing on the BEAM is done by copying. Pony can share memory directly because the compiler enforces safe sharing via deny capabilities. You can only send references with three capabilities to another actor (see the sketch after this list):
iso -> mutable memory to which you hold the sole reference
val -> an immutable reference; you can read but you can't write
tag -> an opaque reference; good for sending messages to actors or doing identity comparisons
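A minimal sketch of those three in action (all names are mine): the iso buffer is consumed so the sender gives up its only reference, the val string is shared freely because nobody can write to it, and the actors themselves are only ever held as tags.

```
actor Logger
  be log(line: String val) => // val: immutable, safe to share
    None

actor Builder
  be fill(buf: Array[U8] iso, out: Logger tag) => // iso: sole mutable reference
    buf.push(42)
    out.log(buf.size().string())

actor Main
  new create(env: Env) =>
    let logger = Logger // actors are referenced as tag
    let buf: Array[U8] iso = recover iso Array[U8] end
    let builder = Builder
    builder.fill(consume buf, logger) // consume: give up our reference
```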
The type system certainly helps as well. It allows us to give LLVM hints on optimizations it can make. The type system also helps because we can do "dangerous but fast" things safely because we can prove they are safe in this instance.
Compiling to native code via LLVM is another area for performance wins.
That said, Erlang has over 20 years of rock-solid production usage behind it, and that in itself is quite a selling point. At this point I'd suggest Pony to Erlang/Elixir users if they need more performance; otherwise, I'd stick with Erlang for now.
> The type system also helps because we can do "dangerous but fast"
> things safely because we can prove they are safe in this instance.
This is an area of some static type systems that I'm really interested in; it feels counter-intuitive at first. You'd imagine that flexibility is what lets you do the things you need to do to go fast, but in many cases it's actually the restrictions that do. Cool stuff.
Absolutely. In modern JavaScript engines, for example, dynamic typing makes something as basic as |foo.bar = 7| extremely complicated internally. In SpiderMonkey, |foo| could be a native object with |bar| in some varying location on a fixed or dynamic slot, or it could involve a proxy, a setter, an unboxed object or unboxed expando object, a DOM or cross-compartment wrapper... To make the whole thing efficient, a particular property access could go through engine code, specialized JIT-generated native code after enough type information has been collected, or one of three inline cache systems, which generate multiple native code stubs |switch|ed on checks based on previous (slower) executions. A given get or set could even pass through more than one of the above, if too many checks fail and bailouts are required. And |bar| could be located directly on |foo| or on some object in |foo|'s prototype chain, requiring on-the-fly verification of additional invariants to ensure correctness.
Static typing would mean the engine can know for sure what |foo| is and where to look for |bar|, allowing faster, guaranteed-correct code to be emitted ahead of time. Dynamic typing makes it harder to offer speed, correctness, security, and good memory usage all at once.
If you think that's interesting, you might want to check out ATS¹ and Mercury². ATS is wicked fast and doesn't even do some of the optimizations it's theoretically capable of (I think its alias analysis is fairly primitive). It compiles to C, but can use type information to remove bounds checks in many cases. Linear types mean memory and concurrency safety with no runtime overhead. (You're on the Rust team, right? So I suppose you're familiar with linear types—ATS's are much more powerful than Rust's affine types though.)
Mercury has uniqueness types, so it can remain referentially transparent while compiling to code that mutates. The compiler has fairly advanced automatic parallelization and can in some cases do compile-time garbage collection (i.e. it knows at compile time when an object will become inaccessible).
The great part about ATS that I wish Rust had is that you can define linear types for C libraries, and in general the type system is strong enough that you don't need unsafe{} sections.
You can do exactly the same thing in Rust, just not in the same statement as importing the functions (which are just that, importing the functions). I regard this as one of the most powerful parts of Rust: wrapping unsafe code/APIs into safe interfaces without cost.
Also, I think saying that ATS has no unsafe{} sections is misleading: it isn't explicitly marked in the source, but the compiler still cannot check that the "ownership" annotations in the imports are correct, or that, say, the preconditions of the functions (which may lead to undefined behaviour when violated) are satisfied. In other words, all of that code is implicitly surrounded by an `unsafe` block.
(The linearity is essentially handled by destructors: the common case is that the clean-up is just that, clean-up, and so destructors work well. It is definitely more annoying to 100%-type-check APIs that have more interesting clean-up/closing procedures, but those are rarer.)
I think people tend to think that C lets you go fast because of the tricks it lets you get away with and how “close to the metal” you are. Which is partially correct, but C is also an obnoxiously hard language to optimize because of the flexibility.
Pony is compiled to native code, and to highly optimized code at that.
BEAM is just interpreted. Even if most of the time the code has to wait, it has to wait much less. That's why Pony can beat C++ with OpenMP on comparable tasks.
Its garbage collector is superior, having to do much less work than the BEAM's.
The data workload for each thread is much smaller: objects are tiny, and messages are mostly passed by reference (shared) rather than copied.
Erlang does more. It already supports distributed actors, so there's a little overhead from that as well.
One nice thing about the Pony GC is that it can collect actors that are waiting for messages but will never receive one. In some other languages I've used, I end up with processes idling in a receive on a channel, living forever but never able to exit because nothing is ever going to put a message in that channel.
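A rough sketch of the situation (names are mine): once the only reference to an actor is dropped, nobody can ever send it another message, so the runtime is free to collect it rather than leaving it parked on an empty mailbox forever.

```
actor Idler
  be poke() =>
    None // would handle a message if one ever arrived

actor Main
  new create(env: Env) =>
    let idler = Idler
    idler.poke()
    // `idler` goes out of scope here; with no references left, no one
    // can ever message it again, so the runtime can collect it instead
    // of letting it wait forever.
```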
Pony represents new thinking for concurrent design. I would not use Pony in production yet, but it is worth a look if you want to explore novel approaches for concurrency.
There's a nice fit between Fintech concerns and Pony; that said, Pony is quite usable across a variety of domains. Perhaps we gave the impression it's Fintech-specific; hopefully folks outside of Fintech will be interested in Pony as well.
> One of the first things we did was to introduce a Code of Conduct to put in place rules for the community we sought to grow. A welcoming, thriving, civil community is what will make Pony a success. The more welcoming that community, the more people we will have driving Pony forward.
I honestly find a Code of Conduct unwelcoming, as it indicates that it's very likely that what I write will be attacked due to my identity, and that I may at any time be written out of the community in a modern version of damnatio memoriæ, even if I maintain a strict wall between my personal and professional lives (witness the people who attempted to shut down LambdaConf because they disagree with Curtis Yarvin).
A Code of Conduct longer than 'be professional' is a signal that the community which professes it is hostile. They are, of course, free to be hostile; I am free to choose not to join them.
I think it is important to note that LambdaConf 2016 was successful. In the organizers' own words, "It just means there is a space in this world for an indie conference that rigorously ensures professional conduct at the event, but leaves other matters at the door."
People who protested against LambdaConf 2016 organized an alternative conference, MoonConf, at the same place at the same time. The two conferences had a cordial relationship. Nothing people worried about actually happened.
Don't be an asshole to sexual or racial minorities.
Be a decent human being. Don't harass people.
If someone's identity is predicated on being an asshole, it will drive non-assholes out. I'm glad that assholes will self-select themselves out of such situations.
But perhaps I am prejudiced. I prefer the company of non-assholes.
I have no idea why they put this CoC policy there, but no discussion so far has gone in a hostile direction, even amid serious technical disagreement. Rather the opposite.
I am really interested in Pony; there are some exciting things about it. I'm hoping that bit syntax support from Erlang gets implemented, possibly with support for recurring structures. Although I wouldn't say that's strictly necessary if you can do some nice recursion over binaries.
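For what it's worth, here's a minimal sketch (all names are mine) of the plain-recursion approach over a byte array, without any special bit syntax:

```
primitive ByteSum
  fun sum(data: Array[U8] val, i: USize = 0, acc: U64 = 0): U64 =>
    // Array.apply is partial (the index may be out of range), so the
    // try/else doubles as the base case of the recursion.
    try
      sum(data, i + 1, acc + data(i)?.u64())
    else
      acc
    end

actor Main
  new create(env: Env) =>
    let bytes: Array[U8] val = recover val [as U8: 1; 2; 3] end
    env.out.print(ByteSum.sum(bytes).string())
```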