Crystal: Fast as C, Slick as Ruby (codeship.com)
544 points by yranadive on Aug 4, 2016 | 429 comments



From this post, Crystal appears to have some of the things many people have been lusting after in Rust: sophisticated metaprogramming, fewer sigils, a bigger standard library, fibers/coroutines/whatever-they're-called-now.

But it still has a GC :(. Rust has completely spoiled me with making it easy to minimize dynamic memory allocation and copies, and to know (almost always) deterministically when something will go away.

EDIT: I should also say that if you want to bash on Rust's lack of these things, 3 out of the 4 items I cited have solutions being actively worked on (either at planning, RFC, or implementation phase). I don't think Rust's sigils are going away any time soon, but I have no idea how you'd do that and preserve semantics anyway.


Having worked in environments where a GC was an absolute "no go", I'm always amazed that so many people have problems with a GC. Yes, there are types of software where using a GC'd language would probably be a bad thing. If you're talking about huge projects with heavy performance constraints (OS kernels, AAA games, and browsers come to mind), I would probably try to avoid it.

But most likely - you simply do not need a language without a GC. If you look at the sheer number of applications written in interpreted languages, anything compiled straight to machine code is a win, even with a GC. The interpreter and runtime overhead is so much bigger that a GC does not really matter in them, unless you're talking about highly tuned precompiled bytecode that is JIT'ed like Java and .NET, or natively compiled languages like Crystal and Go. So yes, when compiling to native code, the GC can become the "next" bottleneck - but only after you just removed/avoided the biggest one. And that 'next' bottleneck is something most applications will never encounter. I initially thought of mentioning database engines in the above list of "huge projects with heavy performance constraints", but then I realized a good number of specialized databases actually use runtimes with a GC. The Hadoop stack (especially Cassandra), Elasticsearch? Java. Prometheus and InfluxDB? Go.

Just face it: there is a need for something intermediate to fill the gap of a script-like, native compiled, low-overhead, modern language, and a GC is part of this. The popularity and "I want to be cool so I hate it" trend of Go proves this, but the devops space is getting new useful cool toys at a breakneck speed, pretty much exclusively written in Go.

So I really don't get the whole GC hate. If you don't want a GC, there are already many options out there, with Rust being the latest cool kid in town. But in reality there are huge opportunities and fields of applications for languages like Crystal and Go. And most likely you could use such a language - you just don't think you do because you have an "oh no, a GC!" knee-jerk reaction.


> But most likely - you simply do not need a language without a GC.

Absolutely. That doesn't mean I can't want predictable performance or deterministic destruction. I also think it's a shame that we waste so much electricity and rare earth minerals on keeping ourselves from screwing up (i.e. on the overhead of managed runtimes and GCs). Before, I'd have argued that it was just necessary. Having spent a bunch of time with Rust, I don't think so any more, and I'm really excited to see non-GC languages build on Rust's ideas in the future.

> The Hadoop stack (especially Cassandra), Elasticsearch? Java. Prometheus and InfluxDB? Go.

Cassandra has a drop-in-ish C++ replacement (Scylla, IIRC?) which supposedly blows the Java implementation away in performance. A magic JIT (and HotSpot is really magic) doesn't make everything better all of a sudden.

In a somewhat recent panel (https://www.infoq.com/presentations/c-rust-go), the CEO of InfluxDB basically admitted that if Rust had been more stable when they started they would have been able to use it instead of Go and would have had to do far fewer shenanigans to avoid the GC penalty.

> Just face it: there is a need for something intermediate to fill the gap of a script-like, native compiled, low-overhead, modern language, and a GC is part of this.

Indeed. I'm not in denial of this. I made an offhand remark about my personal preferences and what I'd like to see from future languages. I still write a ton of Python for things where speed really doesn't matter.

> "oh no, a GC!" knee-jerk reaction

I don't think having a refreshing experience without a GC counts as a "knee-jerk reaction." I've thoroughly enjoyed not having to tune that aspect of performance, and I remarked on it. I think Crystal shows great promise, and certainly has the potential to offer easier ergonomics than Rust.


> That doesn't mean I can't want predictable performance or deterministic destruction.

Exactly. Just to add to your point, with Rust there is no longer a reason to settle for GC pauses. It does require more thought while writing the code, but what you gain is a consistently predictable runtime. If memory allocation is slow, you can create your own allocator or slab and use it for the hot paths to optimize the cost away.
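
A minimal sketch of the mild version of that idea, just keeping the allocator out of a hot loop by reusing one buffer (the function and its numbers here are made up for illustration):

    fn count_long_words(lines: &[&str]) -> usize {
        // One reusable buffer for the per-line work, allocated once up front.
        let mut words: Vec<&str> = Vec::with_capacity(64);
        let mut count = 0;
        for line in lines {
            // clear() keeps the capacity, so as long as a line stays within the
            // reserved space the loop body never touches the allocator.
            words.clear();
            words.extend(line.split_whitespace());
            count += words.iter().filter(|w| w.len() > 7).count();
        }
        count
    }

When the hot path genuinely needs pooled allocation, crates like slab or typed-arena take the same idea further.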

As a longtime Java geek who never understood the argument against GC, this has been a mind altering experience. I was a big C++ person before, but after one too many memory leaks and segfaults, I could never imagine not wanting a GC. Then Rust came along and taught me better.


Two reasons:

1) Rust appears to be significantly less productive than a true GCd language. I see a lot of people talking about "fighting the borrow checker" with Rust and I see a lot of articles describing basic patterns that would be simple in any other language, but are complex in Rust.

2) If you want to invoke code that assumes a GC you need to have one.

You can do manual memory allocation in Java by the way and a few high performance libraries do. It's just not common.

It's interesting to note that the Chrome guys have gone in the direction of deploying a GC into C++ whereas the Mozilla guys have gone in the direction of moving manual memory management into the type system. I've got nothing against Rust but I'm a Chrome user, personally.


> fighting the borrow checker

My experience is that this is an initial hurdle to clear. As an example, I've almost exclusively worked in GC'd languages for a while, and after learning Rust for a few months I very very rarely have borrow check errors.
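
For anyone wondering what those errors tend to look like, here is a tiny sketch of the classic early stumble and its fix (illustrative code, not from any real project):

    fn main() {
        let mut scores = vec![1, 2, 3];

        // Rejected by the borrow checker: `first` immutably borrows `scores`,
        // and push() needs a mutable borrow while that borrow is still live.
        // let first = &scores[0];
        // scores.push(4);
        // println!("{}", first);

        // Fix: copy the value out (or scope the borrow) before mutating.
        let first = scores[0];
        scores.push(4);
        println!("{}", first);
    }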

The fact that it occasionally requires a complex pattern to do right should get better with time (non-lexical lifetimes would help), and there's also discussion around GC integration, so that you could interact with a scripting language's GC when writing a plugin for it, or you could farm out GC'd objects when you need to have cycles (e.g. in graph algorithms).

> the Chrome guys have gone in the direction of deploying a GC into C++

Interesting. I'm curious how much of the browser relies on it. I'm also curious whether it's an attempt to paper over C++ with a little memory safety, or whether it actually offers performance improvements. My original point was not that GC is bad, per se, but that I quite like being able to avoid it when it's reliable to do so, which is not the case in C++, IMO.


Mozilla has also put significant effort into improving their C++ GC in Firefox, e.g. switching from a non-generational one to a generational GC[0] and then to a compacting one[1]. Just like Google can both improve Chrome and work on Go, Mozilla both improves Firefox and works on Rust.

[0]: https://hacks.mozilla.org/2014/09/generational-garbage-colle...

[1]: https://hacks.mozilla.org/2015/07/compacting-garbage-collect...


Those are all about the JavaScript GC, right? Not using the GC for actual pure C++ objects.


I wasn't giving examples of Mozilla doing exactly what Chrome is doing, just counterexamples to your implication that working on Rust means no work on improving memory management in Firefox.

That said, it's not even like a single-GC approach is incompatible with Rust: https://blog.mozilla.org/research/2014/08/26/javascript-serv...


> I see a lot of people talking about "fighting the borrow checker" with Rust

It's also usually described as "at first, I fought the borrow checker, but then I internalized its rules and it's now second nature." You're not wrong that it's a hump to get over, but once you do, it's not a big deal.

> that would be simple in any other language

Any other _GC'd_ language. You still fight the same kinds of complexity when you don't have GC.


The C++ advantage will not be as large after Java 10 comes out and finally has the value types and reified generics the language should have had from the beginning.

Also, I have yet to see any large-scale production deployment of those Hadoop alternatives.

But it might still be like 5 years from now, so who knows how it will evolve.


Re: deployments, I expect that would take some time for a transition to occur. The first post date on the ScyllaDB blog is from February 2015 (http://www.scylladb.com/2015/02/20/seastar/), and it looks like it wasn't until September 2015 that they specifically started publishing benchmarks of the database itself as opposed to the network I/O library they built for it (http://www.scylladb.com/2015/09/22/watching_scylla_serve_1m/).

I look forward to those changes coming to Java, and I think that stack-based value types could do a lot for the language. That said, the Scylla folks seem to have gotten a lot of their performance gains from CPU/thread affinity and async I/O (http://www.scylladb.com/2016/03/18/generalist-engineer-cassa...). NIO is pretty great in Java-land, IIRC, but CPU/thread affinity is, I imagine, hard to pull off with a garbage collector.

Another thing I'm curious about w.r.t. value types in Java -- hasn't C# had those for a while? If so, and if your claim that value types will provide large performance benefits is true, why isn't C# always blowing Java away in benchmarks? Perhaps it is and I'm just not seeing them. Perhaps Java's escape analysis is already pretty good and solves the 60/70/80% case? Perhaps I'm not well versed enough in the subject to understand the interactions here.


Regarding C#, Microsoft hasn't invested much in their JIT/AOT compilers' optimization algorithms.

NGEN was just good enough for allowing quick application startup.

Also they didn't invest too much in optimizations in the old JIT.

Especially since .NET always had good interop with native code via C++/CLI, P/Invoke and RCW.

There were some improvements like multicore JIT in .NET 4.0 and PGO support in .NET 4.5, but not much in terms of optimization algorithms.

Hence .NET 4.6 got a revamped JIT called RyuJIT, with SIMD support and lots of nice optimizations.

But this is only for the desktop.

.NET for the Windows Store is AOT compiled with the same backend that Visual C++ uses. In the Windows 8 and 8.1 days they came up with MDIL from Singularity/Midori, but with 10 they improved the workflow into what is nowadays known as .NET Native.

With the ongoing refactorings they plan to make C2 (Visual C++ backend) a kind of LLVM for their languages, similar to the Phoenix MSR project they did a few years ago.

If you watch the Build 2015 and 2016 talks, most of them are making use of C# with the new WinRT (COM based) APIs, leaving C++ just for the DX related ones.

So they are quite serious about taking their learnings from project Midori and improving overall .NET performance.


From what I've seen, stack based value types are not necessarily the big performance win they're touted to be. The rule of thumb I've noticed is that, if the struct is much bigger than the size of a pointer, you start seeing a pattern where it's quicker to allocate in the first place but slower to pass around.

I think this is because, on a platform like Java or .NET that uses generational garbage collection, the heap starts to behave like a stack in a lot of ways. Allocations are fast, since you just put objects at the top of the heap. And then, since they're at the top of the heap, they tend to stay in the cache where access is fast, so pointer chasing doesn't end up being such a big deal. On the other hand, if you use a struct, every time you pass or return it you end up creating a shallow copy of the data structure instead of just passing a single pointer.

(Disclaimer: preceding comment is very speculative.)


Kind of true, but you can minimize copies if the language supports ref types, which were already common in languages like Modula-3, D or even Eiffel.

Also a reason why C# 7 is getting them as return types in addition to ref/out parameters.


> value types and reified generics

I think you mean specialized generics (i.e. no autoboxing of primitives when used in generics)? Reified generics implies carrying around all generic type information at runtime, which will not be the case and also has nothing to do with performance. Non-value generics will still be erased I thought.


Have you seen the status update?

https://www.youtube.com/watch?v=Tc9vs_HFHVo&list=PLX8CzqL3Ar...

They will change the constant pool to have some kind of template information that gets specialized (what they call type species) into a specific set of types.

The plan is that even if Java cannot fully take advantage of all the possibilities due to backwards compatibility with existing libraries in binary format, the JVM will support it for other languages not tied to Java semantics and backwards compatibility.


Java 10? I'm still waiting for Jigsaw (originally slated for 1.8, is it coming in 1.9???)



And Jigsaw has landed, build 116 if I recall (too lazy to look it up)


> That doesn't mean I can't want predictable performance or deterministic destruction.

You're assuming your compiler or operating system won't cause memory to be freed at different times.

> I also think it's a shame that we waste so much electricity and rare earth minerals on keeping ourselves from screwing up.

Wasting man hours on manufactured problems is far worse than wasting coal.


Memory isn't the only resource managed by deterministic destruction.

Manufactured problems? When what's now coastline is underwater, I'll be glad to see if you remain as smug.


GCs are complex and require lots of end-user tuning -- just look at the performance articles on Java.

Beyond that, however, there are many uses for ownership beyond controlling memory resources. Closing a TCP connection, releasing an OpenGL texture... there are lots of applications of having life cycles built into the code rather than the runtime.
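
In Rust terms that looks roughly like the following sketch (TextureHandle is a made-up stand-in for something like an OpenGL texture id):

    struct TextureHandle(u32);

    impl Drop for TextureHandle {
        fn drop(&mut self) {
            // In a real renderer this would be a glDeleteTextures call; the point
            // is that it runs at a known place in the code, not "eventually".
            println!("releasing texture {}", self.0);
        }
    }

    fn main() {
        let tex = TextureHandle(7);
        println!("drawing with texture {}", tex.0);
    } // `tex` goes out of scope here, so the release runs right now, deterministically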

EDIT: fixed typo


> just look at the performance articles on Java.

Just look at the hand-tuning performance articles on C... People talk about it because you can do it, not because you have to do it.


And you can do it because someone found that it was necessary to do.

It's crazy I can't tell Java and NodeJS "use the memory you need". Instead I have to specify max memory sizes (and then watch as they inevitably consume all of it).


Of course they will consume all of it. That's how GCs typically work. They won't invoke a collection until there's no space left to allocate. Just because they "use all of it" doesn't mean all of that memory is actually live. It just hasn't done a collection yet.


> Of course they will consume all of it. That's how GCs typically work.

Perfect agreement.


The disagreement probably lies in your characterization of the GC behavior as "crazy" when it merely doesn't suit your tastes.


> It's crazy I can't tell Java and NodeJS "use the memory you need".

Define the 'memory you need'? You know the computer doesn't have a crystal ball to know what latency vs. memory usage trade-off you want...


"Until the OS refuses to give you more"

Just like every compiled, non-GC program gets. It's annoying af to fiddle with interpreter/VM "maximum heap sizes".


Alas, "until the OS refuses to give you more" is also not a meaningful signal anymore.

https://en.wikipedia.org/wiki/Memory_overcommitment


Tracing garbage collectors have to have some metric for when to trigger a stop-the-world collection. IIRC, many GCs track heap usage and trigger stop-the-world when it gets above a threshold. GC'd language runtimes which don't have a tracing/compacting collector (e.g. CPython with refcounting) don't need to configure a heap size because the behavior doesn't vary as you use up your heap.


That's still no good reason to throw all kinds of "Out Of Memory" errors when you're hitting 1GB out of the 16GB of physical memory on my system.


That's what most GCs set the default max heap size to, assuming you don't have a swap file.


I wonder, could we just tell each JVM instance that it may use all of the memory on the system, and then let the OS kill the first VM that allocates more than the system has to offer? Would this get us the same semantics as those of a native application? Or does the JVM preallocate all of the memory that it is allowed to use?


The JVM allocates a portion at start and takes what it needs whenever it needs it, up to the max, but never releases memory back to the OS. So at a given moment a 6GB VM may be a 5GB process that is internally using just 3GB.


The standard JVMs do give memory back. The standard settings are not very friendly to do so but it does work.


As of one of the most recent Java releases, I believe the G1 GC does this on Windows. G1 is not the default but can be selected with a single command line flag.


A native application on what OS?


Most of the articles I see on C performance tuning are generalizable to any programming language, GC or no. Stuff like maintaining cache coherency, avoiding false sharing for threaded code, etc.

I don't write Java, but my impression is that the articles being talked about are much more Java-specific than the C ones are C-specific.


Cache locality, not coherency.


Sometimes GCs require a lot of tuning, but I'd bet that 90% of software written in GC'd languages works just fine, without touching the GC settings.

In my limited experience writing performance-critical Python code the improvements always came from choosing better libraries (eg for faster serializations) or improving our own code. The GC never showed up in profiling as an issue for us.


Python is so slow its pseudo-GC is never the bottleneck anyway.


This is all true for some languages, but I think it's far from universal. It seems to me that in many of the most frequently used and taught modern languages, the nearest most devs will come to being concerned with GC is an awareness of why object allocation should be minimized. For their purposes, that is a much better use of time than giving any consideration to GC tuning.


Go's GC has a single setting that can be tuned: https://golang.org/pkg/runtime/debug/#SetGCPercent

And yet, it performs great, including predictable STW latencies, all with a relatively simple and straightforward algorithm. With that in mind, the Java GC's manifold tuning knobs, which yield minimal performance improvements, sound more like something purposely built so that people can build their livelihood on providing consulting services than like something built for the best possible performance for everyone.


"GCs are complex and require lots of end-user tuning --"

And they tend to be very memory hungry. Often, the memory overhead is the difference between running a program and having a bunch of browser tabs open.


The GCs used in Go or Crystal tend not to use more than 10% more than the peak memory of a C implementation. The perception that GC == hundreds of MBs of memory usage comes from Java, where the GC aggressively preallocates, and which has the memory baggage of a whole VM too. I rarely see Crystal or Go programs use excessive memory.


I can't talk about Crystal and Go because I lack experience with them. But the described issue is not only in Java (e.g. I have seen it in Node too).

Since memory deallocation is not deterministic, there has to be a tradeoff between lazy scheduling (which increases memory consumption) and frequent scheduling (which has a performance overhead).

You can do fine tuning between those variables, but that means that a highly performant system with a low memory footprint is a very challenging thing to make using tracing garbage collection (the kind in Java and Node).


Java needs a sophisticated GC particularly because its OO model requires that it allocate an extraordinary number of small objects, especially as things are boxed and unboxed. Ruby, one of the few true "everything is an object" languages, also suffers from an explosion of tiny objects. Node.js/V8 seems to suffer from this to a lesser extent, although it also ends up being a very memory-hungry language.

Go has been able to perform well with a simple GC because it doesn't suffer from this problem.


"GCs are complex and require lots of end-user tuning -- just look at the performance articles on Java."

These articles are bullshit. Most settings are either obsolete or forcing the default. The rest is just useless.

I spent months doing performance tuning of applications stacks which were using Java (for app, database or both). Most of the settings are useless and barely change +-1% in performance.

The JVM has had good defaults for a while. The only thing one MUST configure is the -Xms and -Xmx options to set the initial and maximum heap size of the Java process (both settings to the same value).


> If you're talking about huge projects with heavy performance constraints (OS kernels, AAA games, and browsers come to mind)

Actually two of your three examples are no longer correct: game engines often use a core GC'd heap because that's how Unreal Engine has worked since v3, and Chrome has switched to using garbage collection in the core Blink renderer as well. The GC project is called Oilpan.

The benefits of GC are so huge that it's used even for very latency- and resource-sensitive apps like browsers and AAA games.


In the era of cloud computing, memory usage and performance are as important as they have ever been. If you can rent a smaller instance to do the same job, that is real money savings.


> Just face it: there is a need for something intermediate to fill the gap of a script-like, native compiled, low-overhead, modern language, and a GC is part of this. The popularity and "I want to be cool so I hate it" trend of Go proves this, but the devops space is getting new useful cool toys at a breakneck speed, pretty much exclusively written in Go.

Could the answer be lbstanza when it gets there? Lbstanza.org


> the devops space is getting new useful cool toys at a breakneck speed, pretty much exclusively written in Go.

This is why I came to peace with Go's way of life.

Way better to push for less code being written in C than to argue about language design decisions.


Manual or deterministic memory management might be a must-have for certain usage domains, but for any domain in which one would be using ruby, this seems unlikely, and presumably one could FFI into C when this is the case. There are hardly any languages commonly used in industry which don't have GC (essentially just C/C++). And many of these garbage-collected languages are capable of blazingly fast code with a small memory footprint.

Regardless, for a language which is meant to operate in the same domain as ruby and be as easy and declarative, not having a GC would be a puzzling decision.

As a side note, I'm curious what areas you are programming in where the presence of a GC is such a downside. Having written almost exclusively in garbage-collected languages over the last few years, it's something I almost never think about (and happy not to). Of course I don't deny that stricter memory control is sometimes necessary.


Crystal seems to be targeted at a domain where ruby is not fast enough. That includes domains where GC is a problem.

A tracing GC means that you either have to deal with potentially long GC pauses or you need a lot of extra free memory at all times to give the GC time to catch up before running out of memory [1].

Go says it can achieve 10ms max pause time using 20% of your CPU cores provided you give it 100% extra memory. In other words, memory utilisation must be kept below 50%.

Cloud/VPS prices scale roughly linearly with memory usage. So using a tracing GC doubles your hardware costs. Whether or not that is cheap depends entirely on what share of your costs is hardware cost and how much productivity gain you expect from using a tracing GC.

I would be very interested in learning how much CPU and memory overhead Swift's reference counting has, because in terms of productivity Swift is certainly competitive compared to languages using a tracing GC.

[1] Azul can do pauseless, but I don't know exactly what tradeoffs their approach makes. Their price is too high for me to even care.


Note though that a lot of the problems with GC in Crystal can be worked around by replacing classes with structs. The latter are passed by value and allocated on the stack. There is also access to pointers and manual allocation to optimize a hotspot if that should be needed (though that will end up with roughly the same lack of memory safety as in C).


For the JVM, the Shenandoah GC [1] can do so as well (or at least deliver very low, consistent pauses), and it is available via EA builds or the OpenJDK in Fedora 24 [2].

This is with pointer-happy Java code, not with special effort to have pointer-less data.

[1] http://openjdk.java.net/jeps/189 [2] https://fedoraproject.org/wiki/Changes/Shenandoah


> The key to performing concurrent evacuation is having the Java Threads and the GC threads agree on the location of objects. This is accomplished in Shenandoah by the use of a Brooks forwarding pointer. All reads by the Java Threads indirect through this forwarding pointer. All writes to objects in targeted regions must first copy the object and then write to the object in its new location.

I'm a bit surprised that indirection is efficient enough to be worth the trouble (since you need reads and writes to branch for the indirected-object case), but I can't argue with results.


Bear in mind that's a conceptual description. The read/write barriers are implemented as compiler node graphs and thus can be optimised.


If you are on a server, do you need a 10ms max pause time? For most applications running Go on a remote machine, 25ms should be in the realm of acceptable.



True, but I don't know how much that would buy us in terms of memory utilisation and CPU usage.


People want a non-GC language because everyone already has a GC language that they are comfortable with.

So basically a C/C++ replacement is the only niche that is left to fill. It would be even better if a new language could replace even GC languages, so I could write fast low-level libraries or websites in a single language, without sacrificing productivity. That would be the Holy Grail I guess.


On the other hand, Go has an excellent garbage collector, and really seems to fill the niche for low-level libraries and programs.

It's also quite a joy to program in, and I'm a JavaScript developer, so I'm coming from the other side of the spectrum.


I'd argue that Go is much closer to Javascript than it is to C/C++.


I think Niklaus Wirth would disagree.


Have you ever used a language with a decent type system though? I couldn't stand to use Javascript or Go at this point.


Of course. I've used C pretty extensively, Java enough to hate it, and C#.

I consider myself a javascript developer because that's what I've done my best work in and that's what I enjoy the most.


I wouldn't really consider any of those decent (certainly not C or Java), unless C# got sum types when I wasn't looking.


He said decent type system, not static one.


How can someone enjoy JavaScript? Even closures are not done right, and now they've added classes. Just what we need, sigh.


I agree, classes are silly in JavaScript. It just masks the prototype and creates ambiguity; using the prototype effectively is part of being a good JavaScript developer.

I understand the hate and everything but honestly I think it presents a fun and refreshing way of solving problems.

Also npm is pretty awesome, aside from how massive the node_modules folder gets.


C, Java, and C# have decent type systems, especially compared to JavaScript.


They wouldn't say C is "weakly typed" if its type system was decent :)


No sum types, no higher-kinded types, huge verbosity in declaring a new type. I'm talking something like OCaml/F#/Scala/Haskell.


The type systems of Java and C# are OK. C is not even in the same league.


One of the largest areas of concern is for real-time systems (systems which fail if they do not respond within some small time threshold). Most GC involves stopping the world to perform the GC which can pause your program's execution for some number of milliseconds. If GC pauses exceed your real-time requirements, you're out of luck.

Some languages, like erlang, do slightly better by garbage collecting erlang processes individually, so other erlang processes can continue running during GC.


For hard realtime systems it actually doesn't matter anymore. These are mostly implemented with a "don't allocate at all" strategy, since every allocation is non-deterministic. Therefore things are mostly statically allocated and bounded, and maybe there are some object pools around. You can do this in Go in the same way as in C. The only question there is whether the GC will still run in the respective language if no allocations happen through the user (e.g. because the runtime could do allocations in the background for its housekeeping).
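
A rough Rust sketch of that "allocate everything up front" style (purely illustrative; a real system would size and type the pool for its actual workload):

    const POOL_SIZE: usize = 8;

    // All storage lives inside the struct itself: no heap use in the steady state,
    // so there is nothing for an allocator (or a GC) to do while the system runs.
    struct Pool {
        slots: [Option<u64>; POOL_SIZE],
    }

    impl Pool {
        fn new() -> Self {
            Pool { slots: [None; POOL_SIZE] }
        }

        // Returns the slot index, or None when the pool is exhausted; the caller
        // must handle that case explicitly instead of falling back to allocation.
        fn acquire(&mut self, value: u64) -> Option<usize> {
            let idx = self.slots.iter().position(|s| s.is_none())?;
            self.slots[idx] = Some(value);
            Some(idx)
        }

        fn release(&mut self, idx: usize) {
            self.slots[idx] = None;
        }
    }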


Yeah, the idea that malloc/free "don't pause" can only be based on not understanding how mallocs actually work. Advanced mallocs are often even multi-threaded and do some work on background threads.

Hard-realtime is always "allocate everything up front". It has to be. Allocation of dynamic sizes is not a problem you can make fully deterministic.


This is known, but it's always possible to overcome these situations, and it's part of the language maturing. Golang has already had its run at optimizing its GC for real-time systems. Twitch uses Golang for their IRC chat, and they've taken the Golang GC on a journey which you can read about here: https://blog.twitch.tv/gos-march-to-low-latency-gc-a6fa96f06...

Crystal will at some point also be forced to optimize its GC for these cases, although it currently uses an out-of-the-box GC, the Boehm-Demers-Weiser conservative garbage collector (http://www.hboehm.info/gc/), which they have acknowledged they need to replace sooner or later.


The parent comment is referring to hard real-time systems (where not responding within a certain timeframe would lead to catastrophe). We're talking things like pacemakers, anti-lock brakes, industrial control systems.

Regardless of how good GC is you would never use it in a hard real-time system because it is non-deterministic. IRC chat is only soft real-time.


Those kinds of systems won't get compilers for something other than C/C++ or maybe Ada for a long time. Usually you're stuck with a compiler from the chip vendor that kinda-sorta supports C.


Indeed, and you'll probably also need a special-purpose RT OS stack (or have to write your own/go without).

EDIT: I'd also add that this is such a niche[1] area of programming that expecting any mainstream language to meaningfully support it is... optimistic, and that mainstream languages shouldn't try to support it. (Soft real-time may be reasonable, but I believe that can be achieved with GC as demonstrated by the Azul JVM.)

[1] Niche, but obviously important, but perhaps not lucrative enough for anything to displace C or perhaps Ada -- given that these industries tend to be extremely conservative. (I wonder if ATS is used, though. Can't claim it's pretty, but proof seems like it would be a good thing for these systems?)


Special versions of Java also get into this niche. Check the PTC Java compilers or Websphere Real Time one from IBM.

Before Aonix got bought by PTC they had a few contracts for weapons control for missiles and battle cruisers.


In those situations you wouldn't even use malloc and simply allocate everything statically.


Just as a point of interest, I believe there are special forms of garbage collector that are suitable for hard real time systems.

The principle is to regularly use a bounded amount of time for collecting - in line with the latency requirements for the whole system. I think the relevant term is 'tick tock', as in tick - compute, tock - collect.


The thing about hard real-time systems is that they must be predictable, which is quite a wide term. Predictable memory utilization, predictable computation cost, predictable response time. In an attempt to at least fit these requirements, the GC must be "passive", on-demand, i.e. callable from code. Even with bounded collect times, the number of collections must be predictable/controlled to predict the computational cost/time of code paths. And that becomes not much different from manual memory management.


> Predictable memory utilization, predictable computation cost, predictable response time.

All of these can be compromised in non-GC'd languages through heap fragmentation (i.e. even when correctly allocating and deallocating memory, you can still end up with a fragmented heap). The only way to avoid this is to avoid all dynamic allocation (which is indeed done in a lot of systems) or to exclusively use memory pools instead of a traditional heap.


If there is no need for predicting memory utilization, then doesn't real-time GC fit the bill? Consider: you know whatever you want to run executes in time T. A real-time GC makes sure it always executes in time 2T, for any sequence of operations.


Ah, I see what you mean. Thank you for clarifying!


Any program running on any non-realtime OS can stop for any number of milliseconds. There are a billion reasons for such interruptions like other processes wanting to run, cleanup phases internal to the OS, memory paging...

If your program stops for 50 ms, do you really care if it was because of a GC cycle or something else? If you really do care, then you are not allowed to target Linux, Windows or OS X all of which are decidedly not real time operating systems.


To answer your last question, hard real-time embedded systems are everywhere in robotics, aerospace, telecommunications, automotive, medical devices...

The real-time capabilities are not always done in pure SW, there are some FPGAs, but when you do rely on SW, you often cannot afford to spend even a few milliseconds in GC. In some cases, that would mean killing or maiming someone.

And you are often tied to the HW vendor toolchain for a specific DSP, MCU, etc. that only supports C or C++. This is a domain that is moving very slowly; currently my most optimistic timetable would be vendor support for a Rust toolchain in 10 or 15 years, but I don't foresee any GC language coming to replace the critical parts written today in C or C++.


Even soft real time systems like games or real time networking solutions suffer from non-deterministic GC pauses. Audio processing is another example.

You can get by in a GC'd system if you're careful not to allocate while being in the "hot path", but it's much more difficult than manual memory management (you need to know the internals of the GC algorithm) and interference from other threads may spoil your hard work.

Minecraft is a prime example of annoying (Java) GC pauses causing annoying interruptions. Another one is Kerbal Space Program's choppy audio (from C#/Mono GC). Although these games made millions or billions of dollars regardless, so you might argue it's a non-issue.

> currently my most optimistic timetable would be vendor support for a Rust toolchain in 10 or 15 years

Not sure how much you'd need changes for the Rust compiler to be able to use it on MCUs and DSPs, but LLVM is more and more common and it might be (almost) enough to have the LLVM backend ported to the target arch. LLVM is moving fast, so for some targets it might be viable much sooner than your estimate.


It was not clear from my post, but what I would like to have is a RTOS with an implementation of the Rust std modules, to be able to develop an application on top of it. As far as I know, no one is working on that.


If you write programs with a GC, there is always a level which you can't get your hands on to change. If you want to create a piece of a puzzle that is tiny, does one thing, and does it right, like a shared library, a command, a linkable object, or any tiny standalone binary, the GC is always a thorn.

Anything where memory or interactivity needs to be tightly controlled is problematic with a GC. Not only that, but a GC doesn't scale as well with lots of threads. Ultimately you need thread-local allocation, since you will eventually be bottlenecked by the fact that typical allocation (with malloc, VirtualAlloc, mmap, etc.) is protected by a mutex, and deallocation suffers the same fate.


Except that garbage collected languages generally use per-thread nurseries, so the fast path is a small number of instructions. Having a GC also makes lock-free programming easier.


Obj-C? Or do you count ARC as a GC?


Whether reference counting is GC or not is arguing about semantics.

But correctly implemented reference counting is essentially pause-free. It's consistently "slow", which is better for some cases than unpredictably "fast".


IMO it's not just semantics, rather, GC is too general of a term if it includes ARC. In terms of performance analysis the two are vastly different. One has basically an unbounded worst case but a good average case, the other is the opposite.


Reference counting also has an unbounded worst case: what if you drop the last reference to a very large graph of objects? Then you free the whole thing, which can take an arbitrarily large amount of time.


The main difference there is: you can control the timing of when to pay that penalty. With a full-blown GC, if you run into performance issues because of it, you basically have to rearchitect the whole app (with something like memory pools, which you pay for by needing more RAM than strictly necessary and with vastly more complex code). With ARC I can track down a slow memory operation to a single line of code and deal with it there (e.g. by moving the complex object into a singleton). IMO a full GC is just the wrong level of abstraction for anything that's timing relevant, which includes all UI threads.


> IMO a full GC is just the wrong level of abstraction for anything that's timing relevant, which includes all UI threads.

Hard real-time GC systems exist. In these systems, you can prove that pauses last no longer than a certain number of milliseconds. They're definitely applicable to programs with UI.

Can you prove that dropping a reference doesn't free an arbitrarily large number of objects? You can probably convince yourself in specific cases for specific programs that you don't see arbitrarily large refcount-release times, but any change you make to the code might invalidate this analysis.

A hard real-time GC stays hard real time.


I keep hearing about these, but which of the popular GC languages (read: lots of library support) have real time GC? Aren't we talking about industrial RT applications rather than GUIs?


You don't have to free the whole thing right away; you (the implementation) can free O(1) things and put the rest on a list to process later.
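
The same trick is available at the application level too: hand the doomed value to another thread so the hot path only pays for a channel send. A sketch using nothing but std (the "reaper" name and sizes are invented):

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        let (tx, rx) = mpsc::channel::<Vec<Vec<u8>>>();

        // Reaper thread: the expensive cascade of frees happens over here.
        let reaper = thread::spawn(move || {
            for doomed in rx {
                drop(doomed);
            }
        });

        let huge: Vec<Vec<u8>> = (0..100_000).map(|_| vec![0u8; 64]).collect();

        // Dropping the last reference on the hot path is now just a send.
        tx.send(huge).expect("reaper thread is alive");

        drop(tx); // close the channel so the reaper loop ends
        reaper.join().unwrap();
    }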


Any good CS book about GC includes RC as a GC algorithm.

For example, the classical "The Garbage Collection Handbook"

http://gchandbook.org/


Reference counting is GC. Simple, but GC.


I agree that to compete with Ruby ergonomics one probably needs a GC. I think part of what I'm getting at is that there are other ways to approach these problems, and aping Ruby isn't necessarily one I prefer. Not to knock Crystal, it seems very cool.

Re: application domains, I've recently been doing some work in CPU/memory constrained applications (not embedded, running big >500GB jobs on HPC clusters), and a GC is unfortunately a non-starter for this kind of data processing.

I have also been watching with great anticipation the work being done on "big data" processing with Rust (https://github.com/frankmcsherry/timely-dataflow) and how that might obviate the need for a GC with the various JVM RAM-hogs which dominate that field.

There are also many areas where people work (many of whom provide the tools that programmers of GC'd languages use for their jobs) which can't admit a garbage collector.

For example, I currently deploy Django code (running on an interpreter that needs to implement, not run on top of, a GC) to a machine with a Linux kernel, running nginx, backed by another machine running PostgreSQL, with caching in Redis. None of those very important tools could reasonably offer the performance needed if they were written in a garbage-collected language.

For another example, I'm typing this (quite lengthy) response in a low-latency application (a browser) which would also be difficult to implement in a garbage-collected language.


Er, browsers are often implemented in GC languages.

Firefox does the entire UI in XUL and Javascript: all GCd.

Chrome uses a GCd heap for much of its C++ code, and of course the web pages themselves are fully GCd.


for any domain using ruby?

you mean like webservers, where GC has been the #1 cause of operational problems for essentially forever?


About GC: it would be nice if there were some kind of standard-ish implementation framework for a GC in an LLVM language.

LLVM has been enabling fantastic new programming languages, and while it has support for a GC, I have not found a GC library that would be easy to embed in a new compiler/runtime environment.

Now there are dozens of LLVM-based languages (or language prototypes) that have different, incompatible implementations of GC with varying degrees of quality. If there was a relatively simple but efficient GC available, it would be much easier to implement a new language on LLVM.

At one point there was a project called HLVM, but it was targeted at implementing JVM and .NET -style virtual machines. This is not what I'm looking for, and I think the project is dead now.

If anyone knows about a GC implementation for LLVM, I'd really like to take a look. If it's a part of a programming language project but would be relatively easy to rip out of the rest of the compiler/runtime, it's not a problem.

That said, I prefer languages without GC.


It is very hard to have a general purpose GC library, because the best GC algorithms require a tight cooperation between compiler, GC and language semantics.

For me, the only viable alternative to GC are substructural type systems like in Rust's case.


You're definitely right and it's not an easy task.

However, I think there's a sweet spot where you could implement a fairly nice boilerplate/framework that would be an 80% solution to the problem which would be a vast improvement over the current state.

The missing 20% would be language specifics and that would be either solved by forking the boilerplate code or writing some kind of callbacks for discovering references given a root object.

edit: Additionally, there's no simple example of using a GC with LLVM. It would be very helpful if there were, for example, a GC'd version of the Kaleidoscope language used in the LLVM tutorials. Even a trivial Lisp-style cons/car/cdr object system coupled with the simplest possible mark'n'sweep GC would be good.
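
To give a sense of how small "simplest possible" can be, here is a toy cons-cell heap with a mark-and-sweep pass sketched in Rust; it is not wired into LLVM's GC hooks, it just shows the shape of the thing:

    #[derive(Clone, Copy)]
    enum Value {
        Int(i64),
        Cons(usize), // index of a cell in the heap below
        Nil,
    }

    struct Cell {
        car: Value,
        cdr: Value,
        marked: bool,
    }

    struct Heap {
        cells: Vec<Option<Cell>>, // None means the slot is free
    }

    impl Heap {
        fn cons(&mut self, car: Value, cdr: Value) -> Value {
            let cell = Cell { car, cdr, marked: false };
            // Reuse a free slot if one exists, otherwise grow the arena.
            if let Some(i) = self.cells.iter().position(|c| c.is_none()) {
                self.cells[i] = Some(cell);
                Value::Cons(i)
            } else {
                self.cells.push(Some(cell));
                Value::Cons(self.cells.len() - 1)
            }
        }

        fn mark(&mut self, v: Value) {
            if let Value::Cons(i) = v {
                let (car, cdr) = match &mut self.cells[i] {
                    Some(cell) if !cell.marked => {
                        cell.marked = true;
                        (cell.car, cell.cdr)
                    }
                    _ => return, // already marked, or a freed slot
                };
                self.mark(car);
                self.mark(cdr);
            }
        }

        // Mark everything reachable from the roots, then free the rest.
        fn collect(&mut self, roots: &[Value]) {
            for &r in roots {
                self.mark(r);
            }
            for slot in &mut self.cells {
                match slot {
                    Some(c) if c.marked => c.marked = false, // survivor: reset for next cycle
                    _ => *slot = None,                       // garbage: slot becomes reusable
                }
            }
        }
    }

    fn main() {
        let mut heap = Heap { cells: Vec::new() };
        let tail = heap.cons(Value::Int(2), Value::Nil);
        let list = heap.cons(Value::Int(1), tail);
        heap.collect(&[list]); // both cells are reachable and survive
        heap.collect(&[]);     // no roots left: both slots are freed
    }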


You should check out projects like the Mu Micro Virtual Machine and Eclipse OMR

http://microvm.github.io/

http://projects.eclipse.org/projects/technology.omr

The guys behind The Mu Micro Virtual Machine were the ones that did the L4 OS formal verification.

Eclipse OMR is based on IBM's J9, they are making it language agnostic to support PHP, Python, Ruby and whatever else one can think of.

However they aren't LLVM based and the caveats of a generic GC do still apply.

I also agree with you, the problem is how much those 20% actually are in terms of effort.

EDIT: typo where => were, on => one


The Boehm collector (http://www.hboehm.info/gc/) appears to offer what you're describing. In fact, it looks like it's what Crystal added in 2013 to implement its original garbage collection:

https://crystal-lang.org/2013/12/05/garbage-collector.html


I'm familiar with it but it's not at all what I'm looking for. It's a library to "retrofit" GC into a C or other native program.

I'm looking for something to plug into LLVM's GC mechanisms to be used for new languages.


"About GC: it would be nice if there would be some kind of standard-ish implementation framework for a GC in an LLVM language."

Not quite LLVM, but take a look at the Eclipse OMR project.

OMR intends to provide a set of reusable components, like a GC, a port library and, given more effort, a JIT, to be reused in existing language runtimes or to build a whole new language out of them.

https://github.com/eclipse/omr


I guess the question is, by the time you added high quality GC to LLVM, how different would it be to the JVM or .NET?

Bear in mind that .NET can do AOT compilation and the JVM is getting it (and some other non-OpenJDK JVMs already have it).


What do you mean by "sophisticated metaprogramming"? Rust has pretty sophisticated (sometimes I wish it was less) macro / compiler-plugin support.


Perhaps that's not the right way to phrase it. I guess I'm mostly thinking of compiler plug-ins which are very unstable right now. Which means that most users probably never write procedural macros in their own code. (I certainly don't)


There's also syntex (https://crates.io/crates/syntex), which basically provides compiler plugins for stable rust. It does so via code generation though.


I have no problem with GC, but I want to see reasonably complicated benchmarks that actually show that it's "fast as C". Because I don't believe that at all.


Are you saying the GC is as fast as C? I bet it's not. That being said, the programs I've built in Crystal "feel" very fast. Here are a few random performance tests, if you're asking about overall performance:

https://groups.google.com/forum/?fromgroups#!topic/crystal-l...

https://crystal-lang.org/2016/07/15/fibonacci-benchmark.html


"but I want to see reasonably complicated benchmarks"

I very much doubt that a fib code-snippet is what he was asking about.


The biggest problem with GC, it seems, is the non-determinism that it introduces into the program's behavior. Otherwise, garbage collection, being 'lazy', is in fact a more efficient way of releasing unused memory compared to how it is usually done in C and, especially, C++, where memory is released 'eagerly' (e.g. as part of the destructor), thus wasting precious machine cycles on something that may not even be necessary at all.


> wasting precious machine cycles on something that may not even be necessary at all.

I'm not familiar with very many scenarios where one has a garbage collector but doesn't need to free some piece of memory when it's no longer used. Could you clarify what you mean here?


For most GC algorithms (but not ref-counting), the time complexity is O(N) where N is the number of surviving objects (or it could be the number of surviving edges/references, I forgot!). For manual/deterministic/eager memory management, the complexity is O(N) where N is the number of allocated/freed objects.

So if the number of survivors << the number of allocated objects, which it always is in many functional languages, then GC can be faster than manual memory management. Especially if you use a copying GC algorithm which makes allocation extremely cheap.
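
The "allocation is extremely cheap" part is easy to picture: a copying collector's nursery is essentially a bump pointer, along the lines of this toy sketch:

    // Toy bump arena: an allocation is an add plus a bounds check. A copying GC's
    // nursery works the same way, except that on overflow it evacuates the
    // survivors and resets `top` instead of returning None.
    struct BumpArena {
        buf: Vec<u8>,
        top: usize,
    }

    impl BumpArena {
        fn with_capacity(cap: usize) -> Self {
            BumpArena { buf: vec![0; cap], top: 0 }
        }

        fn alloc(&mut self, size: usize) -> Option<usize> {
            if self.top + size > self.buf.len() {
                return None; // arena full
            }
            let offset = self.top;
            self.top += size;
            Some(offset)
        }
    }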


This is true for those compute job types which are mostly 'CPU bound' and which usually create most of the objects that they need at the very beginning; these objects would not be released until the job is finished anyway. I admit that in this case it may take some thought and deliberate effort on the programmer's part to avoid creating many short-lived objects.


Nim[0] takes an interesting approach. It uses "deferred reference counting", effectively allowing GC cleanup to be spread out over a period of time. This at least helps with GC pauses.

It also seems to allow tweaking for soft realtime systems, e.g. games.

[0]http://nim-lang.org/docs/gc.html


Rust already lost most of its interesting sigils. I'm curious which sigils that are left you think should go away?


I've never understood people's abhorrence of sigils. They are useful shorthand for very specific concepts. It's like hating apostrophes in English. You can do away with them entirely, but I don't think most people would consider that an improvement.

Now, the greater density of concepts that shorthand notation allows can be abused, and too much of that often shifts the cost-benefit ratio further to the cost side for all but the most expert in the language, but that's a problem of excess, not one inherent in their use at all.


I don't personally think any of them should go away. I just note that many newcomers to the language feel they're opaque. Having written a good bit of rust now I do sometimes find the sigils impact readability (especially in macro_rules).


I think its target is more along the lines of the space where Go is: as little code as possible (like a scripting language), but still speedy. So in true scripting-language form, you don't have to worry about collecting your objects, ever. I'm not aware of any benchmarks on the cost/hit of this; I do know it uses the BDW GC, which is hopefully pretty battle-tested...


There seems to be a lot of GC hate in this thread. I wonder how many people are aware that Unreal Engine, the dominant multiplatform game engine used to create the majority of AAA games, uses a GC.


Yeah, it's weird how Rust has suddenly made me look for no-GC languages everywhere I look. It opened up a whole new desire to not accept no for an answer in that regard. I have this burning thought in the back of my head that there just has to be a simpler way to offer it than the way Rust does it, too.


Memory management is difficult, extremely difficult, to get correct in the way rust does. I don't think I've seen a leak or bad dereference in years. The only way it really manages this is by tying references into what amounts to a proof assistant. Every simpler method I can think of either sacrifices capability (e.g. no references at all; only RAII + copy-on-write) or becomes a GC with all its wonderful trade-offs.

GC isn't terrible, though. Azul has struck an amazing balance between latency and eagerness—even if you can't afford it the technology does exist. If you don't have latency, memory restrictions, or embedding requirements, rust may be overkill.


> Memory management is difficult, extremely difficult, to get correct in the way rust does

Memory management isn't hard --- you just need to pay attention to detail and not say "YOLO, let's abort on OOM" like the Rust stdlib does. Rust is an unacceptable language for anyone who cares about robustly responding to heap exhaustion.


"Memory management isn't hard --- you just need to pay attention to detail"

You're quite right. The problem is that every bit of attention you spend on that detail is attention that you're not spending on details that are actually solving your problem.

I programmed in C for decades. I do not miss malloc() and free() in the least.

(I still do use C when the situation warrants, but the situations where it is warranted are becoming rarer and rarer with each passing year).


> If you don't have latency, memory restrictions, or embedding requirements

Add power consumption to that list.


I think you meant

-> 'extremely difficult to get INcorrect in the way rust does'


Unless this is a joke, its meaning is correct, i.e. "proven correct like Rust".


I think he misunderstood the comment to mean "it's hard to get memory allocation wrong when writing a Rust program".


The D Language (http://dlang.org/) has a @nogc attribute (https://dlang.org/spec/attribute.html#nogc).

You just annotate functions that you don't want to use the GC in, and the compiler will enforce that they don't.


Since you can:

- disable the GC

- deregister threads so that they are not stopped by GC

- eventually avoid the runtime altogether

There really is no realtime system that D can't do.

The whole anti-GC thing is a giant strawman that considers all GCs stop-the-world, unavoidable, and overarching. Academia decided in favor of GC decades ago, and industry has been following suit for a good reason: the mental overhead associated with finding owners for everything.


It is not. Languages that are designed to run a GC get crippled when run without: depending on the language you might lose access to closures (if dynamically allocated), rich data structures (standard dictionaries, lists, etc), sometimes even aggregates (if boxed by default) and you have to resort to using arrays of primitive types.

As they say, you can write FORTRAN in any language, but you don't necessarily want to (this is unfair to modern fortran which I hear is actually a decent language).


You speak as if a GC is unconditionally better than alternatives and it is a solved problem but using a GC has issues as well.

On the theoretical side, not reasoning about ownership means sharing data between threads is done with copies (slower) or locking (slower and error prone); if you know about ownership you can share references to data while it can't be mutated for free.

Ownership is also important for any non-memory resource (file handles, mutexes, etc). GCs release those "whenever", maybe never, unless you close manually.

And even though manual memory management has some small non-deterministic overhead for heap coalescing (which one can usually work around with pools), most GCs I've worked with add measurable overhead. This equates to more cost per server, more load, more battery life drained, higher response times...


> On the theoretical side, not reasoning about ownership means sharing data between threads is done with copies (slower) or locking (slower and error prone); if you know about ownership you can share references to data while it can't be mutated for free.

I don't think it follows, and it's rather the reverse: if what I share has a global owner (i.e. the GC), I don't have to lock or copy by definition: once it stops being reachable it will be collected. That's why some lock-free algorithms are enabled by the GC. With ownership you would have to have a unique owner, or reference counts. GC does require write barriers or stop-the-world though, so let's say it's a draw :)

> Ownership is also important for any non-memory resource (file handles, mutexes, etc). GCs release those "whenever", maybe never, unless you close manually.

Yeah, it's a big problem that the GC even attempts, poorly, to close them. But D has scope guards and RAII built in, so for the 50% of non-memory resources you still have to think about ownership, indeed. That's more complicated than the C++ situation. But it does not prevent realtime; you may well find yourself having more time to optimize :)


> if what I share has a global owner (i.e. the GC), I don't have to lock or copy by definition

Then how do you avoid data races? Two shared references which can mutate your shared data require either a copy, a lock, immutability, or a single writer.


I use "parallel foreach", sometimes worker queues, implicit single writer... like in C++. It sounds like you think only Rust-style ownership can avoid data-races. Sure, if you want the type system to do it. For me discipline is enough and I've seen it work in teams too. Not seeing such a problem really.


Nothing specific to Rust, although Rust encodes in the type system what I usually have to keep track of mentally, which is nice.

Trivially parallel algorithms do benefit from constructs like "parallel foreach" and implicit single writers, but in general one either has to stick to those models (where the cognitive overhead is low and manageable) or, if one ventures into more complex territory, deal with higher mental complexity (ownership or locks), performance degradation (copying), or immutable data (if it fits your problem and doesn't decrease performance, win-win).

My argument is simply that GC doesn't fix everything, and the mental overhead of tracking ownership of memory (to me) isn't a huge burden, especially since I have to do it for non-memory resources and memory resources shared between threads already.

I'm not against a GC - I like languages that mix GC and non-GC side-by-side - because sometimes I do want to just forget about my memory, but only if it fits my problem domain. But I don't think GC beats non-GC hands-down for-all-cases.


But then you lose memory safety


That's being fixed.


Wouldn't that cause duplication of interfaces? Rust had that problem when it went down having no-gc and having gc, too.


D's strong support for templates makes that a non-issue.


Yes. I'm currently trying to think of a good way to add it to Myrddin, which currently makes it more or less manual. Doing it in a simple way is a tough problem.

The simplest solution is to add the moral equivalent of 'null' -- objects that transition to an idempotently destructible state, which solves a lot of complexity with the data flow and analysis (yay!) at the cost of some safety (boo), and nulls (louder boo).


My language (Lily) handles the problem by trying to avoid the gc where it can.

Lily is statically-typed, built-in classes can't be inherited from, and there's no C-like casting.

With those rules in mind, most objects can't become cyclical. It's impossible for a list of strings to loop back onto itself, for example. It helps that the value classes backing enums (like Option and Either) are immutable, which I so far suspect prevents a cycle.

That at least allows you to group classes into three groups:

These never cycle (Integer)

These may cycle (List)

These always cycle (Dynamic, linked lists?)


One of the goals I have is to keep the required runtime absolutely minimal, as well -- I'm ok with the compiler inserting some user-defined code in the appropriate places to initialize or release values, but I'd like to avoid growing the required code in https://github.com/oridb/mc/tree/master/rt unless it's absolutely necessary.

And yes, that ~60 lines per platform is really all that's needed. (And actually, I should be able to merge more of it for SysV platforms.)

So, I've thought about a GC, but I'd really prefer not to have it.


Can I just say that Lily is amazingly neat, and is exactly the language I was working on myself! Seems we have a shared delusion ;)


The claim is "fast as C", so I was surprised that the performance comparison was with Ruby, not with C. On my machine, the Ruby Fibonacci program executes in 47.5s, while a corresponding C program executes in 0.88s - that's a factor 54 difference, while the article reports a factor 35 for Crystal. That's good, but what causes the difference? This benchmark is pretty much all function call overhead, so I doubt it's representative of real performance-sensitive code.

The Crystal website itself makes a more modest claim than "fast as C" under its language goals: "Compile to efficient native code", which it clearly does.


All of these "fast as C" claims about modern, high-level Python-like languages (be they statically typed and natively compiled) are missing the point. It is mostly the minimalistic and terse programming style that C encourages that makes C programs performant. You avoid allocations wherever possible, you write your own custom allocators and memory pools for frequently allocated objects, you avoid copying stuff as much as possible. You craft your own data structures suited for the problem at hand, rather than using the standard "one size fits all" ones. Compare that to the "new this, new that" style of programming that's prevalent today.


There actually is one high-level Python-like language that really is almost "as fast as C".

http://nim-lang.org

I have been using it for years already, and it is really performant, somewhere between C and Rust. I am still wondering why so few people use it.

Benchmark: https://github.com/kostya/benchmarks

Nim vs Rust: http://arthurtw.github.io/2015/01/12/quick-comparison-nim-vs...

Performance discussion: http://forum.nim-lang.org/t/2261

Embedded Nim: https://hookrace.net/blog/nim-binary-size/

Nim on LLVM: https://github.com/arnetheduck/nlvm


> I am still wondering why so few people use it.

While Nim is my favorite language, I can understand that it has a small userbase, for these reasons:

1. No major backer like Google for Go or Mozilla for Rust

2. No killer feature like "memory safety and performance without GC" for Rust, instead a mix of all the reasonable down-to-earth features I want in a programming language

3. Some unique decisions instead of what you're used to from other languages, for example partial case sensitivity


4. The fact that operator order is changed by the amount of white space between symbols

                2+2 * 5 = 20
                2 + 2 * 5 = 12


I am a strong proponent of Nim but this is probably the worst idea I have ever encountered in language development. Honestly!

Partial case sensitivity and the special underscore case are features I can live with. Unfortunately this has actually become a stumbling block for a wider adoption of Nim.

All strange special features should be optional, not default.


What makes you think this is default? It most certainly is not and will be removed completely in the future.

Edit: here is a source: http://nim-lang.org/docs/manual.html#syntax-strong-spaces ("... if the experimental parser directive #?strongSpaces is used..."). The last time this was discussed I said that it would be removed completely, and I still believe it will be. It's simply not a priority for us right now.


> What makes you think this is default?

Some of those features are default in Nim, some (strongspaces) are not. I say that all such weird features should be optional in general so that newcomers don't get scared off.

Also, case and underscores should work like in C by default, since Nim interoperates with C seamlessly anyway. Case insensitivity and underscore-ignoring are OK if optional.


If you're consistent about how you space your infix operators, this will have no impact on your code.

If you put space around some operators and not around others, in a way that doesn't correspond to precedence, you're going to confuse anyone who reads your code, in any language.


Being confused by code is no excuse for code not doing what ya know... every other programming language has done for the past 50 years.

I get wanting to try new things, but some things are pretty well agreed upon.


damn... I actually like that a lot!


I like it for distinguishing homonym operators, but not for the precedence stuff they seem to have there. I'd like something like this though:

  let a = 10;
  a / 5
  output> 2
  let b = pwd();
  b/temp
  output> Directory<"~/temp">
  b / 2
  error> b:Directory does not implement method "divide(:number)"
  a/temp
  error> a:int does not implement method "get(:string)"


I used both nim (back when it was still nimrod) and rust for a while, before eventually settling on rust. I tried to give nim a chance, and was told that "they will grow on you" ("they" being the things you mentioned that were "unique decisions instead of what you're used to from other languages"). They never did, and though I got used to avoiding the problems I initially had with them, the language just never "felt good" to me.


wow... have never heard of that partial case sensitivity before.

I think this goes beyond syntactic sugar. Holding the hand of the developer too much?

Personally, as a Python programmer I like interfacing with C++ code like Qt via PyQt. If I see a camelCase method I know where it came from, but if I see a PEP-8 style name or method I know it's our own code, not from Qt.


Shameless plug: is it considered totally uncool in 2016 for one to be developing a memory-unsafe, manual MM, non-OO, thread-denying language that preserves most of the C semantics?

https://github.com/bbu/quaint-lang


No, definitely not!

I'd be very interested in a language that is roughly as low level as C, but has some obvious warts "fixed" while still being able to run on bare metal or with a minimal runtime system. I also don't care about a standard lib as long as I can call open(), close(), read(), write(), socket(), etc.

Native threads is another requirement for me.

Things I'd like to see in a language:

- compile to native executable

- type inference

- module system without header files

- easy to call into native C code, and export functions so they can be called from C or any other language

- first class SIMD structures (this is missing from Rust!), so that you don't have to duplicate code for sin4f and sin8f (which would be line-by-line equal, except types)

- perhaps some kind of modern polymorphism (ie. not class based OOP)

- can target GPUs via LLVM or SPIR-V

- memory safety is optional, but nice to have. I'd be mostly interested in using this kind of language for GPU kernels and tight inner loops, where you wouldn't be allocating anyways

I have a bunch of design ideas and prototypes in my drawer waiting for a lot of free time and inspiration appearing.

I like my tools sharp, even if it means there's going to be blood occasionally.


My next big endeavour with Quaint will be to create a clean module and linking system (without header files or any textual inclusions). Each source file will be transformed to a corresponding unit which contains code, data and exported type definitions. The linker would then merge these units and produce a native executable that runs your program in the self-hosted VM which will be a part of that executable. Pure native compilation or LLVM integration is too much of a hassle for me at this point.

One of the virtues of the language would also be the direct correspondence between the HLL code and the emitted VM instructions, without any optimisation passes. This makes it much easier to reason about code performance and to write code which performs consistently and predictably (albeit a bit slower).


Nim fits everything you ask, except for "can target GPUs via LLVM or SPIR-V". Even that may eventually be fixed by having OpenCL C as a compilation target.

Also, I am not sure what you mean by "first class SIMD structures", but you can definitely have a single definition for sin4f and sin8f if they are line by line equal except types, by using union types.


Nim is definitely on my short list of languages to learn, however...

The lack of GPU targeting is a deal-breaker. I'm sure the Nim compiler would be pretty easy to retarget to GPUs via SPIR-V (the new binary IR for Vulkan/OpenCL shaders and kernels) or OpenCL/CUDA C. But I don't think that would work for Nim's runtime system or existing Nim libraries (including any standard libs it has).

Also Nim's pauseless low latency automatic memory management (I guess you can call it a "GC") is very interesting but it's not what I'm after.

> Also, I am not sure what you mean by "first class SIMD structures",

I mean this:

    def multiply_and_add(a : <n x f32>, b : <n x f32>, c : <n x f32>) : <n x f32> {
        return (a*b) + c;
        // TODO: figure out how to use "madd" from FMA4 or NEON instruction set
    }
The trivial piece of code above should be "generic" so that it can be called with any width of vector.

Now the example above is very trivial but more complex examples might have challenges for correct implementation of the type checker. In particular, doing vector shuffles (ie. equivalent __builtin_shufflevector in GCC/Clang vector extensions) would need to have a strange type. Shader languages typically use a syntax like `myvector.wxzy`, which might work.

This might perhaps be possible with an ungodly mess of C++ templates and explicit template specialization for each vector type (and hoping that the compiler is aggressive enough in inlining). But I'm not really a fan of template-heavy C++.

In fact, the kind of solution I've been thinking about would be semantically similar to what I'd do with C++ templates.

> but you can definitely have a single definition for sin4f and sin8f if they are line by line equal except types, by using union types.

I'm not familiar enough with Nim's union types to be sure, but my guess is that this would not compile to efficient low level code apart from the most trivial of circumstances. This is my (not very) educated guess based on other high level languages with some concept of union types.

Anyway, Nim is a very cool language that I will check out sometime in the near future. It just isn't what I'm looking for my very specific use case.


A union type in Nim can only be used in function arguments, and it does the obvious thing: when you actually call the function, it specializes to the type you are calling with. Think of templates in C++, where the type parameter can only assume one of two (or more) values. Hence it would generate exactly what you would write by hand, but the syntax is much less messy than C++ templates.


Zig (https://github.com/andrewrk/zig), "a system programming language intended to replace C", has a lot of those features.


You might also be interested in Jai [0] which has many of those things but is not a 'real language' yet or possibly ever. Lots of interesting ideas though.

[0]: https://github.com/BSVino/JaiPrimer/blob/master/JaiPrimer.md


You've almost described ISPC verbatim. If you don't already know about it you might want to check it out. http://ispc.github.io/index.html

It doesn't compile to a native executable, but since it produces .o files you should be able to just set your entry point and go from there.


> You've almost described ISPC verbatim.

Thanks, I've read about it before, but haven't spent too much time looking at it.

However, this "single program, multiple data" isn't exactly what I'm looking for (it would solve the sin4f vs. sin8f issue mentioned above, though). I need explicit, low level access to SIMD, coupled with genericity over vector widths. This means doing almost assembly-style SIMD code with explicit shuffles, blending, etc as well as access to intrinsics where needed.

I also need portability (ispc is from Intel, it probably doesn't support ARM NEON) and targetting GPUs.

I'm very well aware that my needs are very specific. I need to do math stuff for 3d graphics and physics applications.

All I need is for a lot of free time to appear from out of nowhere and I can write a prototype compiler for this myself :)


How would you use specific instructions yet have widths abstracted?

ISPC actually has some preliminary support for targeting Nvidia PTX btw. It compiles using LLVM.


See example above in this thread. In C + GCC vector extensions, I just use normal arithmetic operations (+, -, *, /).

However, when using specific intrinsics they are for a specific width. It might take some "library code" to take advantage of some instructions like dot products, etc.


Julia has all these things


Yes. Memory unsafety doesn't work.


I love and use Nim (in production). I wrote about why I don't think it's gained wide adoption in a previous post here: https://news.ycombinator.com/item?id=11960814

The only thing I would add would be that compared to Ruby, Nim still takes you quite a while to put something together, so defaulting to Ruby isn't necessarily a great idea.


The first benchmark ("Nim vs Rust") I looked at says

> Rust regex! runs faster than Regex

which is a very old claim - Regex should now be much faster than regex! ever was. Any pre-1.0 Rust benchmarks are probably wrong (to be fair, most benchmarks are probably wrong anyway).


All the benchmarks you posted suggest that Rust is faster than Nim, not the other way around. Did you have something else in mind?


A lot of them did show the other way, then came to Rust people's attention and improvements were submitted. As well as both languages' implementations changing over time.


Just a small note for anyone reading the "Nim vs Rust" post; today, regex! is much slower than Regex::new.


> You avoid allocations wherever possible

If you don't have to write cutting-edge games or embedded software for tiny systems, why do you have to care about allocations at all? Today's systems and RAM are so fast that garbage collection doesn't really matter in most cases. Consider SBCL (compiled Common Lisp), which is almost as performant as Java and C++.

http://benchmarksgame.alioth.debian.org/u64q/lisp.html

I used to develop software in C and C++ for many years, and a garbage collector was the thing I wanted the most. GC-free programming is unnecessarily tough in most cases, unless you desperately need it for games and embedded systems.


> If you don't have to write cutting-edge games or embedded software for tiny systems, why do you have to care about allocations at all? Today's systems and RAM are so fast

What? RAM is not fast at all; latencies have barely improved in 20 years (compared to the improvement of other subsystems like the CPU, of course).


Because allocating something on the stack means adding an offset to a pointer and puts the object somewhere that is likely to be in the cache for the duration of the function, while allocating something on the heap usually takes a tree traversal and puts the object far away from the other stuff you might be using.

Also, using numbers from a benchmark game is not representative of the performance of real world applications. If you look at the code, you'll find that it's written in a style that avoids heap objects and GC wherever possible. Forcing heap allocation is what makes Java slightly slower than C in many cases.
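A rough Crystal sketch of that difference (my own example): `struct` instances are value types that live on the stack or inline in their container, while `class` instances are individually heap-allocated and tracked by the GC.

    # Heap-allocated: every .new is a GC allocation the collector must track.
    class PointClass
      getter x : Float64
      getter y : Float64

      def initialize(@x : Float64, @y : Float64)
      end
    end

    # Value type: created on the stack, copied by value, no GC involvement.
    struct PointStruct
      getter x : Float64
      getter y : Float64

      def initialize(@x : Float64, @y : Float64)
      end
    end

    def sum_xs(n)
      total = 0.0
      (1..n).each do |i|
        total += PointStruct.new(i.to_f, i.to_f).x # no heap allocation per iteration
      end
      total
    end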


You must worry about allocations anytime you write CPU-bound software which wants to maximise the output per clock cycle.

Granted, most software isn't like that, but it's certainly not only games and embedded software on tiny systems.


RAII and smart pointers go a long way towards eliminating the need for GC in c++.


True, but usually one doesn't have control over the code others write to force them to use such tools.


Nitpick: everything you say is probably correct, but such performant C programming is also the very opposite of a "minimalistic and terse style".

Which one is more minimalistic, 'new Foo' or a collection of various custom-tuned allocation methods? Which one is more terse, 'myList.Where(foo).Select(bar).Aggregate(baz)' or an explicit for loop?


It is minimalistic in the sense that the language provides a narrow set of primitives and a skilled programmer combines these primitives in the most sensible way to solve the problem at hand. Higher level stuff in most other languages is much more generic.

Indeed, it may not be minimalistic in terms of the code size.


> All of these "fast as C" claims about modern, high-level Python-like languages (be they statically typed and natively compiled) are missing the point. It is mostly the minimalistic and terse programming style that C encourages that makes C programs performant. You avoid allocations wherever possible, you write your own custom allocators and memory pools for frequently allocated objects, you avoid copying stuff as much as possible. You craft your own data structures suited for the problem at hand, rather than using the standard "one size fits all" ones. Compare that to the "new this, new that" style of programming that's prevalent today.

Exactly! I cannot agree more.

I have a small test program I port to different languages to test the length of the code and the speed of the program. Of course it only represents a single use case.

* C is first, of course.

* twice as slow, come Pascal, D and... Crystal!

* x3 to x5, come Nim, Go, C++ (and Unicon).

* x6 to x9, come Tcl, Perl, BASIC (and Awk).

* x15 to x30, come Little, Falcon, Ruby and Python.

* x60 to x90, come Pike, C#, Bash.

* x600 to x1000, come Perl6 and Julia.

This list looks byzantine, I know :-) The trends I can get out of it:

* the last 2 are languages with JIT compilation, and that's horrid for short programs.

* the "old" interpreted (or whatever you name it nowadays) languages (Tcl, Perl) are not so bad compared to compiled languages, and much faster than "modern" one (Ruby, Python). (Again, this is only valid for my specific use.)

* compiled languages should all end up in the same ballpark, shouldn't they? Well, they don't. The more they offer nice data structures, the more you use them. The more they have some kind of functional style (I mean the tendency to create new variables all the time instead of modifying existing ones), the more you allocate, create and copy loads of data. In the end, being readable and idiomatic in those languages means being lazy and inefficient, but what's the point of using those languages if you don't use what they offer? C forces you to build proper data structures for the problem rather than reach for existing generic ones; it comes naturally. What is unnatural in C is to copy the data again and again; it is simpler to modify it in place and work on the right parts of it, not to pass whole chunks around every time you need a single bit. In more evolved languages, compilation won't save you with some hypothetical magic tricks: it cannot remove the heavy, continuous data copying and moving you instructed your program to do. And that is what made the difference in speed between C on one side, and D, C++ and Go on the other.


> I have a small test program I port to different languages

Please show one or more of those programs.


I'm curious where Rust fits in on your hierarchy with their emphasis on "zero cost abstractions". Of course that's more a lofty ideal than reality but it means that at least in some cases it does much better than C++. Is it a long program? Something that could be posted in a Github Gist maybe?


I don't think your list is quite right. Crystal is probably more on the tier of Go. Julia is also much faster than that, at least up there at TCL. Of course both vary a great deal depending on what you are using for.


I bet C# is using Mono.


There's some benches between various languages, including Crystal as well, for those interested https://github.com/kostya/benchmarks

EDIT: There's also this: https://github.com/nsf/pnoise


Here's a small HTTP style benchmark across various languages (a bit limited, but still interesting): https://github.com/costajob/app-servers


> I doubt it's representative of real performance-sensitive code

Two data points (one-off timings of a few lines of code doing the same work load) just don't make for a comparison we should spend time bothering about.

Whatever you think of the benchmarks game, I don't see why we need to waste time with comparisons that don't meet that low standard:

- a few different tasks

- more than a code snippet

- a few repeat measurements

- a few different workloads


From this quote it sounds like the more CPU intensive it is, the more you can expect when comparing to Ruby.

>Remember: The cake is a lie, and so are benchmarks. You won’t have 35x increase in performance all the time, but you can expect 5x or more in complex applications, more if it’s CPU intensive.


> That's good, but what causes the difference?

Could it be startup time? That's less of an issue when the application has started up.

A fibonacci application is not a very good benchmark anyway.


Looks very nice. I hope it gains momentum.

For now, if you want a fast language with the beauty and productivity of Ruby, check out Elixir [0] and its web framework, Phoenix [1]. I've been using Phoenix for a year, and it's the first framework that I've actually liked more over time. And I've been a web developer for a decade. With its recent 1.0 release, Phoenix is gaining a lot of momentum.

If you want some idea of the performance differences between Phoenix and Rails, see [2] and [3].

[0] http://elixir-lang.org/

[1] http://www.phoenixframework.org/

[2] https://github.com/mroth/phoenix-showdown/blob/master/RESULT...

[3] http://www.phoenixframework.org/blog/the-road-to-2-million-w...


Elixir is not a fast language. Not even close. Yes, it handles concurrency and parallelism beautifully which in turn enable distributed applications to perform quite well. But the language itself is significantly slower than Crystal / Rust / Go / Swift. It's not in the same category at all.

That said, it's a great language worth recommending.


> Elixir is not a fast language.

Depends what we mean by fast. I have seen Erlang VM handle 100k requests per second on a distributed cluster. That's plenty fast. Moreover, fault tolerance means better uptime, with fewer people on-call. "Fast" can also be measured to include that: if a system does 200k requests per second but crashes at midnight and stays down for a few hours, the average "speed" can be quite low. In a laptop demo that's not visible, but in practice that's money and customers lost.

But if fast means "let's multiply some matrices", then yeah, you can probably use Rust or C for that. It all depends on the problem domain.


Phoenix is slightly faster than Gin (Go web framework) so depending on your use case it can be fast :). Obviously one of the primary selling points is OTP/BEAM VM with all the concurrency, HA, soft realtime etc. but with simpler syntax than Erlang.


It's only faster at IO. Once you start adding CPU-bound work to the equation, Phoenix's performance will begin to drop. It's similar to Node in that regard.


True, any generalisation has flaws :) Even "faster" is relative: do you care about average or worst-case performance, more about throughput or latency, and so on :)


> It's similar to Node in that regard.

No it's not, it is very different from Node. Elixir/Erlang know how to take advantage of multiple CPU cores easily. Node by default doesn't.


Well it does, but it still doesn't do it efficiently. That was the only point I was making. But in general, yes, Elixir/Erlang will handle CPU bound tasks better than node whenever they can be effectively parallelized.


> Well it does, but it still doesn't do it efficiently.

Oh interesting. Does it run callbacks on multiple threads? That's new, I haven't followed it for a couple of years.


Well what I meant to say is that both JS and Elixir/Erlang aren't very efficient at using CPU resources. Elixir/Erlang does a better job at compensating for this by being able to run many operations in parallel and across different machines, which isn't something you can easily achieve (or should even attempt) with Node.


I have seen Erlang VM handle 100k requests per second on a distributed cluster

A single JVM server can handle that load; scaling and providing fault tolerance for a server that just accepts requests is trivial these days. Also, if your requests do computationally intensive work, you are going to have a very bad time with Erlang.

It all depends on the problem domain.

Exactly, and the domain for Elixir/Erlang is way more niche and specialized than applicable domains of other languages.

I don't have anything against Elixir but part of its crowd just advertises it as the best thing for everything.


The Erlang VM (modified) has supported over 2M concurrent connections: https://blog.whatsapp.com/196/1-million-is-so-2011?

Furthermore, assuming each request is mapped to an Erlang process, each of them get their own VM -- no stop-the-world.


Correction: I meant to say each of them get their own GC, not VM.


Just to add another data point, I've seen crystal do over 100k requests per second on a single desktop core.


LFE (Lisp Flavored Erlang) is a great alternative to Elixir if you prefer Lisp over Ruby syntax. Pony lang is looking to enter the BEAM/OTP arena with its own implementation of supervisors and such. It is actor-based, OOP and is supposedly very fast in the distributed niche.


Since I haven't had use cases for BEAM/OTP, LFE is definitely what's motivating me to finally dive into the whole Erlang ecosystem.


Okay, so for the given example: on my machine Elixir does the Fibonacci calculation in 12.25s, but if you add HiPE and compile that module to native code, it gets much closer to Crystal than you might expect, at 3.34s.

Not bad really for a language that's meant to be slow at computational stuff :^)

    @compile [:native, {:hipe, [:verbose, :o3]}]


Hm, what about HiPE and ability to natively compile Erlang modules? That's pretty solid technology by now (I think?) and it promised some good results last time I checked. Honestly asking - I did quite a few things with Erlang and a little less with Elixir, but never even tried to use them for raw speed, instead delegating number crunching to other processes via ports.


Oops, you're right that Elixir is not a computationally fast language. If you're looking for fast, raw number-crunching, look elsewhere. That said, real-world performance tends to be extremely good for real-time and networked applications (read: web apps). It's common for requests to be handled in microseconds, even in development.


Is Elixir really fast for web apps though? According to the web framework benchmark [1] the performance of Elixir is pretty bad - it's consistently slower than python and ruby frameworks.

[1] https://www.techempower.com/benchmarks/#section=data-r12&hw=...


Chris McCord said the following on reddit after the results came out:

"We don't know what caused the errors and unfortunately we didn't have a chance to collaborate with them on a true run. A few months ago they added Phoenix in a preview, but it was a very poor implementation. They were testing JSON benchmarks through the :browser pipeline, complete with crsf token generation. They had a dev DB pool size of 10, where other frameworks were given of pool size of 100. And they also had heavy IO logging, where other frameworks did no logging. We sent a PR to address these issues, and I was hoping to see true results in the latest runs, but no preview was provided this time and we weren't able to work with them on the errors. tldr; these results are not representative of the framework."

From my personal experience going from Rails/Sinatra to Phoenix it feels a lot faster but I haven't done any benchmarks so take that with a grain of salt.


Erlang/Elixir is much, much faster than either Python or Ruby especially when it comes to workloads like your typical web app.

In the Techempower benchmarks the Phoenix tests had a ton of errors and there was no preview run so whoever submitted them wasn't able to fix them. Look at the error column. I assume they'll be fixed in the next run.


Error isn't why Elixir performed poorly in the web framework benchmark.

Take a look at the "Single Query" test [1]. The entry for Elli has zero error and is still slower than many Python & Ruby frameworks.

[1]: https://www.techempower.com/benchmarks/#section=data-r12&hw=...


Honestly that site's benchmarks have never been accurate for me in production. But that's how benchmarks work I guess - my bad for bringing up benchmarks! My experience with using Elixir/Phoenix for web applications has been extremely positive performance-wise however, and has surpassed or matched Go, Ruby, PHP, and Python consistently in all production scenarios. For the latter three, the difference has been order-of-magnitude.

It depends on the problem domain as always. If you're doing a lot of naïve single-threaded number-crunching, have fun. But Elixir/Phoenix haven't failed me for web applications, even in very intensive situations. It's the first time I've barely had to do any tuning beyond external factors such as network and database queries (which, by the way, Phoenix's Ecto handles very gracefully and explicitly).

This is my experience with every Elixir/Phoenix app I've worked on thus far. I apologize if I made it sound like some sort of universal truth.


It is hard to say how they measure and what they measure. But according to their "multiple queries" benchmark, which I guess is the real-world one (unless everyone expects all customers to line up and send their requests one after another),

https://www.techempower.com/benchmarks/#section=data-r12&hw=...

we should all be using Dart on the server coupled with MongoDB perhaps.


The Dart vm is pretty damn fast, way faster for any CPU bound stuff than the BEAM.


http://benchmarksgame.alioth.debian.org/u64q/compare.php?lan...

as-in FAQ 1.4 "What sort of problems is Erlang not particularly suitable for? … The most common class of 'less suitable' problems is characterised by performance being a prime requirement and constant-factors having a large effect on performance. Typical examples are image processing, signal processing, sorting large volumes of data and low-level protocol termination."


Faster than C++, Java, Go?


I'm not sure why you're grouping those four languages, as they overlap very little. Go emphasizes low latency and network-bound services (and randomly docker for some reason, I'm sure a good one); rust is a C replacement; swift is for writing iOS/Mac apps; and crystal is a newborn. I'd actually call it quite close to go: a high emphasis on concurrency and services. Single-thread performance matters much less when scaling horizontally is mandatory to smooth latency spikes, and I'm betting that on IO-bound work erlang would be competitive with go.

I've never used elixir but I assume it has a similar performance profile to erlang as it shares the vm.


> swift is for writing iOS/Mac apps

I think that's a little simplistic. It's true that it's early days for cross platform Swift but clear progress is being made:

1. https://swift.org/download/

2. https://github.com/SwiftAndroid

3. https://github.com/tinysun212/swift-windows

4. https://swiftforwindows.codeplex.com/

Better cross platform support is a goal of Swift 3.0:

https://github.com/apple/swift-evolution


I actually had assumed it was cross platform but there was no reason to use it.

I certainly wouldn't invest my code anywhere near Apple unless that was also my market. Who knows what direction it's moving, aside from in Apple's interest. I'll stick with rust and go: between the two I get everything but easy objective c interop.

Plus, their design decisions with respect to nullability are... interesting. It's gonna feel gimped by legacy needs for a long time.


I listed a few examples of emerging languages to clarify that Elixir is not in the same class when it comes to computational performance on a single machine. The performance profile is indeed Erlang-like.

Keep also in mind that my reply was within the context of a thread on Crystal. OP sort of sold Elixir as a fast language that we can use now while we wait for Crystal to mature.

My point is not that Elixir is useless. My point is that we must not oversell Elixir as a fast language. Generally speaking, it isn't. It excels at horizontal scaling, which is great, but I wouldn't call it "fast" without proper qualifiers.


Yup, this makes sense. Sorry for the confusion.


Elixir gets plugged so often in other-language threads - whether it's Julia or Ruby or, like here, Crystal - that if it wasn't FOSS I'd have decided it's being astroturfed.

I guess it's a good thing that people like it so much, but it's really starting to feel marketing-y by now.


> that if it wasn't FOSS I'd have decided it's being astroturfed.

That's a good sign!

You know why? Because it has a great community and is very friendly to newcomers. Jose, Eric and the rest of the team made that a priority and it shows. It doesn't just mean being nice on IRC; it also means putting usability first, putting more effort into how examples look, how documentation looks, and so on.

If Google invents a language and then proceeds to push and sponsor it, by paying authors to work on it, organizing marketing, hackathons, etc., then it is hard to say whether it is popular because of Google's backing or because of its own merits.


"Friendly for newcomers"? I tried to write something the other day, and had an Erlang developer friend to help me. The Elixir docs section just says "buy one of these books to get started", which is completely unacceptable, and we (mostly my friend, as I had no idea what anything is) spent an hour trying to figure out how to run a node, with Google not providing any useful answers.

That's as hostile to newcomers as it gets. Contrast this with the Rust book, that gets you from "I have no idea how anything works" to "hey I just wrote a small program!" in a few minutes.


Elixir has a very good getting started guide[0]. I don't know if you have seen this or not. Even in the learning section, it recommends going through the getting started guide first; it mentions the books under other resources. I don't know why you think it's hostile to newcomers.

I learned Elixir totally from Getting started guide and then the documentation. Then for OTP, I have read an Erlang book to understand it well. Elixir's documentation is really awesome and one of the best I have seen.

[0] - http://elixir-lang.org/getting-started/introduction.html


Jesus... I saw that page. What I didn't see is the "next" link at the very bottom, right above the footer :( There's a whole bunch of stuff I haven't read! That page should be better designed... I think I also skipped the sentence below "running scripts", the one that says "chapter 2", because I skimmed later and it was just "here's where you can ask questions".


I have no involvement in Erlang, but I just looked at that page, and as an outsider I would agree that the navigation could certainly be improved. I think a lot of the problem for me is that there is no distinction between the numbers for "Chapters" and for "Sections", and this confused me as to where I was in the progression.

Maybe using numbers for chapters, letters for sections, and a 1.A notation for the headers? At the least, adding the chapter numbers to the header, so it says "1. Introduction"? Putting the chapter numbers in the URL would help too. So would adding some highlighting in the right column index to indicate the location of the current page.

It seems like fantastic introductory material, but only if people can find it. Usually, the first thing I do when I encounter a paginated manual like that is to search for a "single page" or "print" or "PDF" link. Is there one there that I couldn't find? If not, adding one might be a simple (partial) fix.

Edit: If others have more (and better) suggestions, they can be added to the issue I filed here: https://github.com/elixir-lang/elixir-lang.github.com/issues...


Oh sorry to hear that Stavros. Sometimes Erlang knowledge doesn't translate directly to Elixir, there are just a few conventions to keep in mind.

I've used these before when playing with it:

* http://elixir-lang.org/getting-started/introduction.html

* https://howistart.org/posts/elixir/1

* Asking on #elixir-lang on freenode IRC

Here is a specific chapter on distribution and nodes:

http://elixir-lang.org/getting-started/mix-otp/distributed-t...

Their IRC channel was pretty helpful for me. And I am sure they'd want to hear about your experience, to improve their docs and tutorials too.


Thanks, the problem was that I didn't notice the rest of the pages on the guide :( I thought there was only the "installation" page.


Elixir has a good getting started guide. I was able to pick it up easily by reading it. Books are further resources for learning.


Thanks, I seem to have missed the fact that there are multiple pages on that guide. Weird, because I saw that page again just now and thought "how is two paragraphs a good guide?" before noticing the contents on the side...


Yeah, I think Dart is failing and Go is succeeding because of Google backing.


I have similar feelings about it, but I have chalked them up to the web-dev crowd where there are more numbers of Ruby/ROR devs flocking to it. It has the same uptake as Ruby/ROR did at certain times.

Case in point, LFE (Lisp Flavored Erlang) was created by one of the original designers of Erlang, Robert Virding, has great support for a small FOSS project, true macros, but the popularity of Ruby has rocketed Elixir way ahead in terms of repositories and users. Erlang Solutions has it on the site, but it is not as touted as Elixir. People go with what they know, and let's admit it, Lisp is a great language, but not as popular in the web-dev crowd sans Clojure (which I don't see as so Lispy).

From the early looks of it, having come from industry and academia, Pony lang looks poised to muscle in on Erlang/BEAM/OTP, Elixir and LFE anyway. I personally don't like the syntax, but syntax is not semantics, and you get over it.

Popularity doesn't always win the day if you do something a bit more off the main road, and there's potential to earn more researching what you love: look at kdb+/q devs and jobs, and Haskell has started increasing in uptake by fintech. Go with what you like, or as Joseph Campbell said, 'Follow your bliss' and the rest will fall into place.

But don't listen to me. I spend many waking moments fiddling with J (jsoftware.com). Not actually the most loved or known PL out there. I think the array languages J/APL/K/Q will have their day due to where software and hardware are heading: Multicores, array processing (GPU/FPGA hybrids, custom computers).


> Case in point, LFE (Lisp Flavored Erlang) was created by one of the original designers of Erlang, Robert Virding, has great support for a small FOSS project, true macros

LFE had no documentation, no tools, no learning resources for a really long time. Compare that to Elixir that focused on those aspects since day one. Furthermore, LFE has reached 1.0 only recently, almost 1.5 years after Elixir, and that has an impact on industry adoption.

LFE also was, for a long time, literally Lisp-flavored Erlang while Elixir attempted from the beginning to bring its own abstractions such as protocols, collections, Unicode support, UTF-8 strings, and they are still pushing it forward: http://elixir-lang.org/blog/2016/07/14/announcing-genstage/

So I think you are selling both languages short. There is much more happening in Elixir besides the "popularity of Ruby" and there is a lot of potential in LFE now that they are focusing on being more than a "lispy" Erlang.


Not sure why people would discuss it in Julia threads, but for Ruby and Crystal it makes sense: more than 50% of the Elixir community came from Ruby, so basically it's Ruby devs discussing newer alternatives to Ruby.


Perhaps it's time to pick a new language to drop in every thread. Something hip and cool, like HyperTalk!


I don't have a CS background, so I can't speak much to the performance of Elixir and Crystal.

As someone who has made the transition from Ruby to Elixir, I'm really, really intrigued by Crystal.

Though, upon a cursory look into the Crystal docs and community, a couple things are clear...

Elixir killed it on all the things surrounding the language.

- docs

- testing

- Slack chan, IRC, mailing list

- package manager (Hex)

- build tool (Mix)

- web framework (plug & Phoenix)

- books from major publishers (manning, pragprog, etc.)

- ElixirConf

- ancillary teaching (ElixirSips, LearnElixir, Elixir Fountain, etc.)

While the language may not be as computationally performant as some of others mentioned, all the things above lower the barrier to entry for adoption and make Elixir a more attractive language than some of the counterparts. And it's amazing that a language this young has nailed it on these fronts.


I code in Ruby for about 4 years now, and Elixir still doesn't seem easy for me to get started with.

Crystal, on the other hand, is as if I'm writing Ruby.


The biggest practical downside with Elixir relative to Crystal is lack of types. The per-thread GC is an important advantage, though.


With pattern matching [1] and guards [2] you often don't miss explicit types.

[1] http://elixir-lang.org/getting-started/pattern-matching.html

[2] http://elixir-lang.org/getting-started/case-cond-and-if.html...


Elixir has optional typing though. You can type check with Erlang's Dialyzer.


Dialyzer uses success types, which don't always let you know when you have a soundness problem. The only thing you can count on is that if it does cry out, there certainly is something you need to fix. It also lacks parametric polymorphism, which means you often lose a great deal of useful type information. The theory behind Dialyzer is impressive, but I was pretty disappointed with it in practice.


You didn't enable the right flags.


What flags?


Try using Wunderspecs, Woverspecs, and Wspecdiffs. You'll find that dialyzer catches things that are more the shape of what you'd expect a more typical type-checking system to catch.


Oh great, thanks! Unfortunate that I missed it. Still no luck on the parametric polymorphism though... :(


That's what Type Variables are explicitly for. They present the necessary semantics for bounded parametric polymorphism. In fact that's really the only reason they exist at all. The rest of the sub-type system works without them, but parametric polymorphism wouldn't work without them.

I don't know if the Erlang/Dialyzer docs cover that specifically, but I know the original paper on Dialyzer and Dialyzer type specs does.


I know you can do stuff like:

    @type pair(t, u) :: {t, u}
    @type result(t, e) :: {:ok, t} | {:error, e}
I'm more talking about type parameters on functions, and also constraining by behaviors and protocols. For example the typespec for Elixir's Stream.map/2 (http://elixir-lang.org/docs/stable/elixir/Stream.html#map/2):

    @spec map(Enumerable.t, (element -> any)) :: Enumerable.t
Note all the other similarly unspecific type definitions. In a parametrically polymorphic language, you could do something like:

    @spec map[e1: Enumerable, e2: Enumerable](e1.t, (e1.element -> e2.element)) :: e2.t
And perhaps even intersections of behaviours and protocols:

    @spec foo[a: Eq & Ord](a.t, a.t) :: bool
('scuse the Elixir syntax...)


Is Elixir actually any more mainstream/momentum-having than Crystal? I'd be more inclined to point people to e.g. OCaml.


Crystal aims to replace Ruby, but at RailsConf and the Ruby meetups I go to, Elixir/Phoenix is mentioned 100x more than Crystal is.


I don't think it aims to replace ruby (they don't even claim any kind of "compatibility" level with ruby, more like "inspired by ruby" I would think), but...I wish it would replace it :)


I don't know if Elixir is mainstream or not, but there are a lot of companies using Elixir in production (including the company I work for). You can find the list of companies at https://github.com/doomspork/elixir-companies


Looks like Play! took #2. I can attest to its efficiency. Just yesterday one of our mobile apps went nuts with repeated requests, and rps shot to 53K for a few minutes, but no one noticed, as our API max response time didn't break 15ms.

JVM+FP FTW.


Apparently it entered the TIOBE list recently, so perhaps it will gain some momentum.


I'd like to put out there that Crystal is absolutely awesome. The language itself is Crystal clear, but the language documentation and API documentation - oh my!

I had never worked with compiled languages before I tried Crystal, but I had always had a huge interest in getting into them. When I wanted to learn the compiled ecosystem I looked at languages like Go and Rust, but the learning curve for those was a bit overwhelming for a newbie. A while later I found Crystal, and thanks largely to the simple syntax of the language I learned a ton of new things about compiled languages very quickly. The absolute best part of the language is that it is written in plain Crystal, and I've been looking at its own implementations of various things a lot - something I've never done before, having worked mostly with Node, Lua and PHP.

Nowadays I can delve into Go documentation, packages are clear to me, and I just understand how things should and should not be implemented to achieve a good level of efficiency and performance. The Little Go Book makes sense, the I/O package is simple, and this is probably all thanks to the syntax of Crystal, the amazing language & standard library documentation, but most importantly the source of Crystal being written in Crystal.

I'm currently working on building a business using Go, because I absolutely need Windows target support - something which Crystal does not yet have. But the second it gets that, I'm moving back. Don't get me wrong, Go is really great and nice to work with - but Crystal is my mentor. Please note that I have not worked with Ruby before, so the whole language was new to me.

To summarize; even if you only wish to learn, Crystal is in my personal opinion the best choice to go with.


Maybe the makers of Crystal need to take a leaf out of Go's book. Despite the Go creators not being windows users (AFAIK), they support Windows as a primary target, to help adoption, I assume.

Of course the Crystal people probably don't have the same number of developers working on it as Go did even early on.

Edit: Go took a little while to support windows, not until around July 2010. See this question from November 2009: http://stackoverflow.com/questions/1717652/can-go-compiler-b...


The plan is to support Windows before 1.0, although the core devs are reluctant to "open up" to the large number of developers Windows support would bring before they are finished making breaking changes.


I don't think they're afraid of it, more like haven't gotten around to it would be my guess...


Crystal supported Windows (albeit unofficially, and it was an epic adventure to compile it, and I've never tried running any serious code on it), however all support practically ceased when it switched from libpcl to inline ASM for coroutines.


Windows support was implemented by the community, not the creators.


Not to take away from the point you're trying to make, but I just installed it on Windows 10 with the latest update that included Bash. Felt nice.


For those wondering about Windows support, it is tracked here:

https://github.com/crystal-lang/crystal/issues/26


In my experience, Golang's Windows support is actually fairly poor. I think it is largely to do with the clunky interface that is CGo. I found Rust to be less "warty" by a large margin with good library support in a lot of areas. Your mileage may vary, of course.


That's not what I've experienced so far. I use Golang on both Linux and Windows (50/50) and haven't had any problems. What are you referring to exactly?


As I said, YMMV but the biggest standout that I remember was trying to play sounds using the windows API. There weren't any packages our team could find that really did it correctly, the one that everyone pointed to leaked memory like crazy and it was hard for us to track down since it relied on bouncing back and forth between C and Go so frequently (making malloc/free lifetimes difficult to follow because C can't know what things have been GC'd by the Go runtime)


Sounds more like the ecosystem for Windows support is lagging. Or was there something in the stdlib useful for it in Go on Linux?


> Golang's Windows support is actually fairly poor.

It does exist, unlike Crystal's, so you should explain what you mean by poor. I never had a problem compiling a project with CGo on Windows.


You mention learning a lot about compiled languages, and mention afterwards being able to understand packages. How are those two related?


I meant the packages of the stdlib, not community-made packages. A great example is what I learned about Crystal's IO module and how it plays with the other modules in the stdlib: I could easily see similar behavior and methods in the Go implementation, but I wasn't able to fully understand them before I had a go at Crystal.

The way they interact with different types, fibers and allocation on the stack vs heap, etc. Makes sense?

Edit: To give you an example of how friendly the Crystal lang & API documentation is to developers unfamilliar with the language, let's look at the Iterator: https://crystal-lang.org/api/0.18.7/Iterator.html

It comes with a great "introduction" to what it is, what it does and gives an example of the advantages it has over the Enumerable. It also explains how you can implement your own Iterator.
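For instance, rolling your own Iterator is only a few lines (a rough sketch from memory, not the docs' exact example):

    class Countdown
      include Iterator(Int32)

      def initialize(@from : Int32)
      end

      # Iterator(Int32) only asks for a `next` that returns the next value or `stop`.
      def next
        return stop if @from <= 0
        value = @from
        @from -= 1
        value
      end
    end

    Countdown.new(3).to_a # => [3, 2, 1]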

We can also look at the IO module for Crystal and the io package for Go: https://crystal-lang.org/api/0.18.7/IO.html https://golang.org/pkg/io/

From a beginner's standpoint, you have to admit that the Crystal documentation is far friendlier.


The Fibonacci comparison is a poor example of performance gains because Ruby is using large number data types to ensure the correct result. This is according to the crystal language website itself.

"However, Crystal might give incorrect results, while Ruby makes sure to always give the correct result."

https://crystal-lang.org/2016/07/15/fibonacci-benchmark.html
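A minimal illustration of that difference (my own sketch, assuming Crystal's default fixed-width integers, which wrap or raise on overflow depending on the version rather than promoting to bignums like Ruby):

    def fib(n)
      a, b = 0_i64, 1_i64
      n.times do
        a, b = b, a + b
      end
      a
    end

    puts fib(92) # still fits in an Int64
    puts fib(93) # overflows: a wrong result or an exception, while Ruby quietly returns the exact Bignum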


That is true. Ruby does a lot in the background, and that is one of the reasons why it is slow.


I actually think it's a good example for just that reason. Ruby will automatically use those big number types in production applications just as in the Fibonacci program, so it's demonstrating that difference and the effect that it has.


This isn't true. Ruby uses native integers when possible and then switches to bignums when the integer gets big enough.


Yeah and that behavior of automatically switching has overhead, and should be taken into account when comparing languages.


I recently gave a quick 10 minute talk on Crystal if anyone wants a video introduction https://www.youtube.com/watch?v=oC9IknG40po


Thanks! Interesting talk!


The first thing I look for whenever I run across another "C competitor" is which language it is implemented in. Usually that's C, or C++, but this time it does look like Crystal is implemented in Crystal, which is, to me, a very good indication that this is a "real" system language.

I believe Rust is also implemented in Rust, and Go, after a few years of being implemented in C, now has a compiler written in Go.


> ... which is, to me, a very good indication that this is a "real" system language.

Could you expound upon that point a bit? What's the difference, in your mind?


The rationale is that once you have a programming language that is implemented in itself (usually that means the compiler/interpreter is written in that language), it means your language has tackled and can deal with many of the issues that "system languages" deal with: mainly interacting with the OS at a lower level, dealing with CPU/RAM, etc.

This is a large problem space that you can gloss over by using C as a layer of interaction between your language and the underlying machine, but that makes your language (a) not truly a "system language", and (b) tied to C's philosophy, API/ABI, calling conventions and so on.


>Go, after a few years of being implemented in C has now a compiler written in Go.

Since 1.5 came out, a year ago.


I've done some coding in Crystal when I have Ruby scripts I really need to run faster. I generally see about a 5x performance improvement over Ruby.

It certainly is great to be able to jump right into Crystal coming from Ruby. It isn't very hard to convert most Ruby code to Crystal -- you just have to go through and "typify" everything. A few methods have different names and of course some don't exist but most of it is there.

My one gripe with Crystal, however, and why I haven't adopted it more generally, is that much of the "Lisp-like" flexibility of Ruby is all but lost. Crystal makes up for some of this with macros, but it doesn't quite cut it. For example, you can't splat an array into a lambda in Crystal. Arguments have to be tuples, which are compile-time bound. Little things like this feel very limiting to an experienced Ruby developer.
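A small sketch of what I mean, as I understand the current semantics:

    add = ->(a : Int32, b : Int32) { a + b }

    pair = {1, 2}        # Tuple: size and types known at compile time
    add.call(*pair)      # => 3, splatting a Tuple works

    numbers = [1, 2]     # Array: size only known at runtime
    # add.call(*numbers) # won't compile -- this is the limitation above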


You may want to revisit that splat issue. Asterite made some changes regarding splats a few months back that were pretty impressive.


Cool. I'll have to see if I can get my Clik library working now. Thanks.


Crystal doesn't support a REPL yet because inferred static typing complicates incremental compilation.

Blog: https://crystal-lang.org/2014/12/06/another-language.html Github Issue: https://github.com/crystal-lang/crystal/issues/681


`crystal play` is a pretty good replacement for a REPL.


Scala has type inference with strong static typing, and a useful REPL. A Crystal REPL should be doable, unless Crystal's creators made some bad choices in the design of the type system that Scala's creators did not.


My intuition is also that a Crystal REPL is technically possible. But FTR, the two languages implement subtly different type restrictions.

Scala has type inference, Crystal has optional typing. In Scala, there are certain situations when the type is discernible by the compiler, and can be omitted. For example

  val x = 1 + 2 + 3
the compiler infers that x is an Int. However, omitting type information in Scala is the exception, not the rule. Methods and functions, for example, must have type annotations.

In practice, Crystal also infers types. But in Crystal you can omit almost any type annotation, including in method and function definitions. This probably poses a different challenge for the compiler authors. The Type Restrictions section provides some more examples: https://crystal-lang.org/docs/syntax_and_semantics/type_rest...
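
A small illustration of that difference (just a sketch): in Crystal even the method signature can go untyped, and the compiler instantiates the method per call site.

    # No annotations anywhere: Crystal infers and instantiates the method per call site.
    def add(a, b)
      a + b
    end
    add(1, 2)       # instantiated for (Int32, Int32)
    add(1.5, 2.5)   # instantiated again for (Float64, Float64)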



Was wondering how many companies actually use Crystal in production.

I was interested in Crystal, but the lack of apps using it in production and of proven use in the field makes me doubt its usefulness.


I'm the author of Kemal (kemalcr.com), a simple, fast and modern web framework for Crystal.

We've been using Crystal in production (at Protel) for more than 6 months for some heavy-load APIs (100-200 req/s). We've replaced our Rails API running 64 Unicorn workers with just 1 Kemal process, and it's not even breaking a sweat while consuming nearly 100x fewer resources and 30x less CPU.
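
For anyone who hasn't seen it, a Kemal endpoint is roughly this small (a sketch from memory, so treat the exact API as approximate):

    require "kemal"
    # the block's return value becomes the response body
    get "/ping" do
      "pong"
    end
    Kemal.run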

You can ask me about our experience.


My biggest concern if I were to try to use Crystal in production would be the lack of "googleability". I'm guessing your team is very familiar with the language, so it's not as much of a problem that you can't Google a problem when it happens. Do you feel that this is a major roadblock for people new to Crystal, or are the majority of errors very intuitive? The second thing is that, according to the site, Crystal is in alpha and sometimes makes breaking changes. How often do you have to refactor after an update, and how difficult has it been to spot and fix something broken by an upgrade? Very interested in using it myself, but I can't bring myself to commit an app to it.


These are really nice questions / concerns.

- Firstly, we are a Ruby shop and most of the time the errors are self-descriptive enough that we can easily solve them. Also, the IRC/Gitter room is full of helpful people who give instant feedback on any issue (even a compiler bug). Note that we're not afraid of writing code :)

- I've updated our application once in the last 6 months. It was only a minor change to add type annotations, which took about 10 minutes.

- Actually I think that Crystal is already at beta quality (for us). Calling it alpha really undersells it.


Thanks for answering!

I feel like Crystal is one of the best projects out there but it's not getting much attention. What do you think it will take to get people to start writing projects and contributing?

IMO the number one thing will be to start getting as many articles like this out as possible, and go beyond that. TodoMVC/Yelp Clone tutorials would go a LONG way I think, especially since a lot of people will just fork those projects to mess around in them.


Chiming in re. "googleability": Go has the same problem (despite coming from Google :D). What you do is always search for 'crystal lang xxx'. It will get better as adoption increases.


I was more just talking about how as language popularity goes up so does your likelihood of finding the solution to any given problem on the internet.


I think OP isn't referring to the name but the fact that there are no results.


Just to add: breaking changes always show up as compile errors, with a nice description of what you need to do to fix them, which makes breaking changes a lot more manageable.


That's good! It's always frustrating trying to find a breaking change in an interpreted language, so I can imagine it would be much easier in Crystal.


googleability: imagine kemal was called meth.


1. Has the lack of multithreading been a problem in any way? 2. Do you do the standard deployment of putting nginx/haproxy in front of a load of processes? 3. Have you been able to reuse any Ruby code (including gems), or does it have to be Crystal-specific?


1. Most Ruby code can't be used in Crystal, but it's trivial to port. 2. Crystal doesn't use gems (because of 1); it uses shards.

Lack of multithreading is an issue for _ME_ because I'm working on an app that could really benefit from it. Just because Ruby is used almost exclusively for webapps doesn't mean Crystal is. Also, the "load another copy of the entire app into memory" approach that Rails people use is a really crappy way of doing "multithreading".


1. It hasn't been a problem for us. 2. We put Nginx in front of Kemal and it's working like a charm :) 3. Well we haven't tried but it's pretty simple with sidekiq.cr https://github.com/mperham/sidekiq.cr


@sdogruyol

Sure, thanks. Would love to hear that.

A blog post would probably be more appropriate since it will have a wider audience and will be good for Crystal and its community.


I don't think Crystal is production-ready; the language is still evolving and isn't stable at all from one minor version to the next.


And it already got a port of Sidekiq: https://github.com/mperham/sidekiq.cr


Whaaat? That's awesome. Now I want to see how Crystal handles ETL pipelines.


I've found the bottleneck to be the database choking on too many INSERTs, not the code doing the writes (our Sidekiq job)


For anyone interested in building web applications with Crystal i recently gave a talk about Kemal at PolyConf https://www.youtube.com/watch?v=KJB-nAoRSr8


Scoped includes are a deal-breaker for me when looking at new programming languages.

ex: non-scoped (everything in foo is added to the global scope)

  import foo
  {
    bar.do()
  }
ex: scoped (everything in foo is added to the local scope, and assigned a name-space)

  {
    bar = import foo
    bar.do()
  }
  
I find it much easier to manage programs where there are no "hidden" global variables. It's especially hard when the included files can themselves include files, which all adds to the global scope.
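
For what it's worth, in Crystal today `require` itself isn't scoped; libraries only get a namespace because authors wrap their definitions in a module. A rough sketch of the current situation:

    # foo.cr -- the library author opts into a namespace by wrapping definitions
    module Foo
      def self.do_thing
        puts "doing the thing"
      end
    end
    # main.cr -- require pulls the file in; the module, not the require, provides the scope
    require "./foo"
    Foo.do_thing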


Bring it up on the mailing list; seems like an interesting feature to me :)


So, this is like Ruby's answer to cython?


I don't know why you are getting down voted - the example of Crystal running an existing Ruby file makes it looks like it could be a compiler for unmodified Ruby files. It isn't - it's a separate language - but at first glance that might not be clear.


No, not in any way whatsoever.


Cython looks like typed Python. Crystal looks like typed Ruby. Both are compiled, and both are not quite as fast as C, but much faster than the language they're based off of. The point of both is to be able to write something that looks like a clean scripting language while getting close to the speed of C. Was that unclear, or are you just trolling?


cool.

Can you make it faster than C though, please? (Seriously.) I think it might even happen by accident in some cases already. The places where C can be beaten on performance come, in my experience, from design choices in the C standards, from users not understanding or leveraging those choices, and from the architecture of the compilation-unit/link process.

Things like the struct layout rules: instead of the compiler organising members to be optimal, it follows those rules for memory layout. Or the calling conventions: you often have to use funky extensions to get efficient function calls.

Other things are the lack of ability to hint to the compiler that, e.g., mathematical structures underlie certain types and can be leveraged for optimisation, or that const or functional purity can be trusted... etc.


Not the answer you would expect but still: it can also happen on large projects where the refactoring or the paradigm change is just too costly in C. The pure, raw ability to make C faster is nice, but if it takes you months of development you don't have, it's pretty useless.

One typical example of this was a few years ago (if I'm not mistaken) in the monitoring world, when Shinken released a Nagios-compatible engine in Python, and basically the reaction in the Nagios community was that the modifications required in Nagios (C) were just too large to be worth it.


Why couldn't Swift be this?


In other words, if Apple had taken the RubyMotion path, but added type inference and better performance.


The really important questions in any modern language:

(0) Does Crystal have a lot of undefined behavior like C?

(1) Does understanding Crystal programs require a lot of trial and error just like in Ruby?

(2) How good a job does Crystal do at preventing me from shooting myself in the foot?

A language isn't to be judged just by the amazing programs you can write in it. (Turing-completeness and I/O facilities have that covered.) Far more important are the totally stupid programs that you can't write in it.


You could've answered those questions yourself if you actually looked into the language, but you didn't. You instead offered facile and superficial judgement. If you have complaints, state them. Speaking in generalizations just sounds like whining.

To be clear, I have no strong opinions about Crystal and will probably never use it. But comments like yours are simply grandstanding and it's annoying that they are confused for contribution.


I'd love for you to elaborate on (2). I don't think that's a criticism of Ruby I've heard before.


(2) wasn't particularly aimed at Ruby, although some “creative” uses of metaprogramming can be very hard to debug. But languages designed for systems programming typically have features that, when used wrong, result in totally catastrophic modes of failure.


Well, you don't have runtime eval or send in Crystal, so some variants of metaprogramming can be ruled out.

However, there are compile-time versions of method_missing, delegates and instance_exec available (usually with slightly different naming, since they don't have the exact same semantics as their Ruby counterparts), so it is still possible to do some magic.
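
A rough sketch of the compile-time `method_missing` (hedged, since the exact macro API has shifted between releases):

    class Ghost
      # expanded at compile time into a method body for each otherwise-missing call
      macro method_missing(call)
        puts "compile-time stub for " + {{ call.name.id.stringify }}
      end
    end
    Ghost.new.frobnicate   # prints "compile-time stub for frobnicate" at runtime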


Is undefined behavior a thing outside of C? Is that really something you should question, rather than just assume that modern languages have no undefined behavior?


What happens if you try to mutate a shared object from two threads, without using mutexes and locks, in languages without either Rust's compiler-enforced ownership or a global interpreter lock?


That's not really what people refer to in C when they talk about undefined behaviour.


ConcurrentModificationException?


Java's collections don't (and can't, without high cost) make any guarantees that they will throw such an exception whenever they are modified concurrently... for one, data races can have weird consequences that aren't easy to detect. On that point, data races in Java are not undefined behaviour, they just have very weak guarantees about what happens (basically isolated loads and stores do reasonable things, and no fancier operations are available) which is another alternative to the Rust approach and the GIL approach.


> they just have very weak guarantees about what happens (...) which is another alternative

Not a very usable one. It means that atomicity is something you have to fight very hard to actually get, and very easy to accidentally destroy.


That's hardly any better than undefined behavior. It's not the program you want to write, under any possible circumstance, and the language should tell you so.


It is significantly better than undefined behaviour, as it results in a noisy failure rather than silently incorrect results. Compile-time failures are best, sure, but I'll always take an exception over what is essentially data corruption.


> It is significantly better than undefined behaviour, as it results in a noisy failure rather than silently incorrect results.

This might be true in a deterministic setting, since the likelihood that a test suite will find the error is very high. But in a non-deterministic concurrent setting, throwing an exception that might be only caught once in a blue moon is just as bad as not doing anything about errors.


An occasional exception and silent corruption the rest of the time is still a lot better than silent corruption all the time - it's much easier to notice in the first place, and as long as you have one instance of the exception you have a stack trace to start from. And in practice the JVM/standard library throws ConcurrentModificationExecption pretty reliably.


> An occasional exception and silent corruption the rest of the time is still a lot better than silent corruption all the time

The fundamental problem still remains, that in neither case can I use the language's semantics to guide the design of my program.

> and as long as you have one instance of the exception you have a stack trace to start from

Oh, really? How am I supposed to find what the problem is in the general case, from just a single exception stack trace that I might or might not get?


The stack trace tells you where one of the concurrent accesses happened. As opposed to data corruption where (if you even manage to witness it in the first place) all you know is that one of the things that accesses that memory got it wrong.


I think a stack trace and an exception would help most people with a modicum of debugging skills.


> I think a stack trace and an exception would help most people with a modicum of debugging skills.

What if the error is difficult to reproduce in the first place? This isn't too uncommon in concurrent programming.


You can look at the code, see why you can't prove it correct, or work backward to see how the code could have gotten to such a state. Like I said, debugging skills.


> You can look at the code,

Of course I can look at the code. Trust me, the reason I introduced the bug wasn't that I was looking at something else at the time.

> see why you can't prove it correct,

Realistically, this is because the language and the program's design conspire to make proving anything about the program an uphill battle. If the language could perform basic sanity checks (e.g., no attempting to use objects after ownership has been transferred to someone else), then at least I would have a fighting chance to manually prove more interesting properties.

> or work backward to see how the code could have gotten to such a state.

Doing this on a per case basis is an incredibly mind-numbing task.


Corrupting data with no warning or visible signs is far, far worse than crashing the application.


Data corruption is possible even in memory-safe languages, anyway. For instance, invariants of data structures might be violated.

If you're really serious about data integrity, ask yourself - does any language even attempt to describe data structures stored on disk?


Pedantically, it's defined behavior, and doesn't use an interpreter lock.

Granted, I'm completely head over heels for Rust, and I agree completely that ConcurrentModificationException is a crappy answer, but it is defined behavior (AFAIK).


It's not defined. The library/VM is very good at throwing ConcurrentModificationException in practice, but it's specified as a best-effort thing, not to be relied upon.


It's true that throwing exceptions is defined behavior, but it's not defined in a way you'd actually want to use. For your actual purposes, it's undefined.


??? Ask any C programmer if they could wave their magic wand and turn every single undefined behavior in their programs into a segfault how much better their life would be. It's not a small improvement it's a huge improvement.


That's simply not true.

UB is simply the latest stick to hit C with. In day-to-day working nobody worries about UB at all as you generally don't notice it.

Same with lack of a GC; this is a plus point for C for most applications, not a negative.

(and yes, I do have plenty of experience in it, I've been using it for the past 20 years, have you?).


> UB is simply the latest stick to hit C with. In day-to-day working nobody worries about UB at all as you generally don't notice it.

I agree that in day-to-day work nobody worries about it, and I see odd behaviors all the time because of it. In particular, as fewer applications are written in C, a much higher fraction of C code is systems and embedded work, where you are more likely to accidentally run afoul of choices that were made to compete with FORTRAN on numerical performance.

A read from NULL will crash on most unixen, but will not crash on some targets without an MPU and when running in kernel mode, so it may be left lurking (see the Linux kernel).

The C89 aliasing rules in particular are completely at odds with a lot of kernel and device driver code, and in addition, where the int size used to be 2 bytes but is now 4, you can get signed overflow where the behavior was previously well defined:

    UINT2 x; // 16 bit integer
    ...
    x+=2;    // addition mod 2**16 on 16 bit targets, undefined behavior on 32 bit targets.
These are some real-world bugs I've dealt with.

> (and yes, I do have plenty of experience in it, I've been using it for the past 20 years, have you?).

I've been using it professionally for only about 15 years, but I started using C at home in '92.

[edit]

> Same with lack of a GC; this is a plus point for C for most applications, not a negative.

This is a bit of a non-sequitur, as I didn't mention memory management at all. C doesn't need a GC. It could use more memory safety though. There's been plenty of academic research on improving C's memory safety without significant runtime overhead; a lot of those techniques were used in rust. There are plenty of tools that can catch a large fraction of memory errors at compile time, which is a good thing.


> UB is simply the latest stick to hit C with.

Honest question, which is the case?

(0) You find it easy to determine, by visual inspection, whether a piece of code has undefined behavior.

(1) Your coding practices make it difficult to accidentally introduce undefined behavior in the first place.

> Same with lack of a GC; this is a plus point for C for most applications, not a negative.

Agreed. C addresses use cases for which GC (or any other feature requiring heavy runtime support) is simply unacceptable.


It's a combination of both.

The vast majority of the code you write will not invoke UB; most people tend to stick to an 'easy' subset of the language, unlike, say, C++, where everyone uses a different subset of features, making it in effect multiple languages.

A combination of testing the known edge cases, wraparound issues, size issues, static analysis and tooling means running into an example of UB is extremely rare in most cases.

It used to be that people used dynamic memory allocation to beat C with, but that is just a resource management issue. TBH, this is not rocket science. If you need dynamic memory allocation, you had damn well better know how to use it properly.

It's an example of laziness and of people ignoring the machine.

Another example is performance: saying that a language comes within a factor of 2 of C's performance and is therefore fast is absolutely ridiculous. A factor of 2 is huge.

You have to remember that people who write C are dealing with machine specifics day in, day out. We're bit-fiddling and writing MPU code and drivers, etc.

Basically, we're much more aware of the machine than higher-level softies, so what would normally be UB is actually DB in most cases; it's defined by the compiler and hardware that we're intimately familiar with.

...and that isn't to say that you can't write high-level, abstracted code in C; the simplicity of the language lends itself to (properly) efficient implementation, not efficient in the sense of Java or Ruby ;o)


> The vast majority of all code you write will not invoke UB

That's a bit like saying “the vast majority of the haystack doesn't contain any needles”.

> most people tend to stick to an 'easy' subset of syntax

I'm not sure I understand what you mean. Undefined behavior has nothing to do with syntax. It's strictly a semantic issue.

> It used to be that people used dynamic memory allocation to beat C with, but that is just a resource management issue. TBH, this is not rocket science.

If I understand correctly, the objection isn't that it's rocket science, but rather that you get little help from your tools if you do it wrong. Memory debuggers will only tell you about memory management bugs that manifest themselves in a particular program run. If a bug will only manifest itself under conditions that are hard to replicate, you're out of luck.

Of course, none of this is an indictment of manual memory management per se, or suggests that garbage collection is a universally good solution. But manual memory management has usability issues, which fortunately are being addressed in more modern language designs like Rust.

> If you need dynamic memory allocation, you had damn well better know how to use it properly.

Sure, but are better compile-time diagnostics too much to ask for? Notice that compile time diagnostics don't introduce any runtime performance penalty.

> Another example is performance; saying that a language comes within a factor of 2 of C's performance and therefore is fast is absolutely ridiculous. a factor of 2 is huge.

No disagreement here.

> so what would normally be UB is actually DB in most cases,

As far as I can tell, the trend among C and C++ compiler writers is to optimize programs very aggressively under the assumption that UB simply will never happen, rather than to turn UB into DB.

> its defined by the compiler and hardware that we're intimately familiar with.

Well, “works on this machine” isn't good enough for most of us.


You're correct: the vast majority of the haystack doesn't contain any needles, and that's the point. You also know where the needles tend to be, and you stay away from that area.

I'm not implying that UB doesn't exist, simply saying that using C is a different mindset.

If you use C, you don't just use the language; you use the language, the toolchain and the machine, and you're familiar with the whole stack, quite often down to the metal.

The point about memory management is that memory management is just one case of the general problem, i.e. resource management. Resource management is a skill you need to have if you're a softie, and making it easier in one specific case (RAM) is not a generic solution. Better to learn how to do it properly and then apply that knowledge in all situations (files, RAM, power, etc.). E.g., where is the GC equivalent for power management? Or file handles? It's the same problem in a different domain.


> The point about memory management is that memory management is just case of the general problem, i.e. resource management.

Wholeheartedly agree. I'm aware that GC is no solution for this problem. But I'm not arguing in favor of GC; I'm arguing in favor of making manual resource management safer, for example like Rust does. Resource management is every bit as manual as in C; the only difference is that the compiler yells at you if you do it wrong.


> The vast majority of all code you write will not invoke UB, most people tend to stick to an 'easy' subset of syntax, unlike say C++ where everyone uses a different subset of features making it in effect multiple languages.

Even integer addition very easily leads to undefined behaviour.

> It used to be that people used dynamic memory allocation to beat C with, but that is just a resource management issue. TBH, this is not rocket science. If you need dynamic memory allocation, you had damn well better know how to use it properly.

If you're going to solve a quadratic equation you should damn well know how to do it properly, by completing the square. But once you know that you should use the formula, because it makes it a lot easier. If you complete the square every time out of pride, you're just wasting everyone's time.

> A combination of testing the known edge cases, wraparound issues, size issues, static analysis and tooling means running into an example of UB is extremely rare in most cases.

Sure. You can do enough work to eliminate it. Or you can use a language where you don't need to.

> Its an example of laziness and people ignoring the machine.

Laziness is one of the cardinal virtues of a programmer

> Another example is performance; saying that a language comes within a factor of 2 of C's performance and therefore is fast is absolutely ridiculous. a factor of 2 is huge.

A factor of 2 is irrelevant most of the time. If you're growing exponentially, a factor of 2 will let you put off the point where you have to start scaling out by maybe a few months. If you're not growing exponentially, you probably won't hit performance limits at all.

> Basically, we're much more aware of the machine than higher-level softies, so what would normally be UB is actually DB in most cases, its defined by the compiler and hardware that we're intimately familiar with.

Until the compiler adds new optimizations. Sure, if you're never going to upgrade the compiler maybe you can get away with C.

> ...and that isn't to say that you can't write high level abstracted code in C, the simplicity of the language lends itself to (properly) efficient implementation, not efficient in the sense of Java or Ruby ;o)

Without native tagged unions you won't get far up the abstraction ladder. You can write your own with macros sure, but they won't interoperate with anyone else's or any libraries you'd want to use.


(0) It does not. It has a GC, and all the features you'll likely use are memory-safe. You can of course use unsafe methods and raw pointers, but those are documented as unsafe and are usually only used for writing bindings to a C library. Macros, for example, are evaluated and expanded at compile time, so no undefined behaviour can come from those. Many methods come in variants with different but defined behaviour: one variant may raise an exception, another return an error value, and another simply return nil. You know whether you are calling `hello!`, `hello?` or `hello`, so no undefined behaviour there (see the Hash sketch below).

(1) Not at all; look at the source of their language implementation. For example, the lexer: https://github.com/crystal-lang/crystal/blob/master/src/comp... Seems pretty clear to me.

(2) Not entirely sure what you mean since that is such a broad case, but as stated, Crystal stdlib is mostly safe.
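
The Hash sketch mentioned in (0), to the best of my knowledge the two variants behave as commented:

    h = {"one" => 1}
    h["two"]?   # => nil, the "?" variant signals "may return nil"
    h["two"]    # raises KeyError, the plain variant raises on a missing key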


> (0) It does not. It has a GC, and all features you'll likely use are memory-safe.

Nice.

> (1) Not at all, look at the source implementation of their language implementation. For example, the lexer: https://github.com/crystal-lang/crystal/blob/master/src/comp.... Seems pretty clear to me.

How am I supposed to learn the language's semantics from a lexer?

> (2) Not entirely sure what you mean since that is such a broad case, but as stated, Crystal stdlib is mostly safe.

Consider this use case: I spawn five fibers. Can I send the same mutable object to all five? If so, can they attempt to mutate the object without properly taking turns? (e.g., using a mutex)


> How am I supposed to learn the language's semantics from a lexer?

That's not what I wanted to show; the lexer just contains some frequently used methods and is fairly simple and straightforward. But if you want to learn the semantics, why not just go to their docs? https://crystal-lang.org/docs/syntax_and_semantics/index.htm...

> Consider this use case: I spawn five fibers. Can I send the same mutable object to all five? If so, can they attempt to mutate the object without properly taking turns? (e.g., using a mutex)

Taken from the docs:

Crystal has Channels inspired by CSP[1]. They allow communicating data between fibers without sharing memory and without having to worry about locks, semaphores or other special structures.

...

Because at this moment there's only a single thread executing your code, accessing and modifying a global variable in different fibers will work just fine. However, once multiple threads (parallelism) is introduced in the language, it might break. That's why the recommended mechanism to communicate data is using channels and sending messages between them. Internally, a channel implements all the locking mechanisms to avoid data races, but from the outside you use them as communication primitives, so you (the user) don't have to use locks.

[1] - https://en.wikipedia.org/wiki/Communicating_sequential_proce...
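
A minimal sketch of that channel style, just to show the shape of it:

    channel = Channel(String).new
    spawn do
      channel.send("from fiber one")   # hand results over instead of mutating shared state
    end
    spawn do
      channel.send("from fiber two")
    end
    2.times { puts channel.receive }   # the main fiber blocks here, letting the fibers run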


> but if you want to learn the semantics why not just go to their docs? https://crystal-lang.org/docs/syntax_and_semantics/index.htm...

Yep, fair enough. I'm somewhat worried about how abstract classes work: https://crystal-lang.org/docs/syntax_and_semantics/virtual_a...

Apparently, Crystal can infer the methods of an abstract class from the methods of its subclasses. In the Animal/Dog/Cat example, what happens if, in a separate module, I define a Snake class that doesn't have a `talk` method? There are several possibilities, sadly all pretty bad:

(0) Does the type checker retroactively decide that not all Animals can talk?

(1) Does the type checker decide that Animal subclasses can't be defined in a separate module?

(2) Does the type checker decide that, if an Animal subclass is defined in a separate module, it must have all the common methods to all Animal subclasses defined in the same module as Animal?

(3) Is type checking not modular?

> That's why the recommended mechanism to communicate data is using channels and sending messages between them.

I'm asking about the errors that the language prevents, not the community's conventions.


> I'm asking about the errors that the language prevents, not the community's conventions.

My understanding is that, at the moment, the language doesn't have native threads yet. The design of multi-threading behavior and errors the language prevents is still being worked out.


> In the Animal/Dog/Cat example, what happens if, in a separate module, if define a Snake class that doesn't have a `talk` method? There are several possibilities, sadly all pretty bad

The abstract class declares, via abstract methods, which methods every class that inherits from it must define. If you inherit from an abstract class and do not define a method it declares as abstract, the compiler will raise an error for you. You can inherit from classes across modules.
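
A sketch of the explicit form (the error text is paraphrased from memory):

    abstract class Animal
      abstract def talk : String   # every concrete subclass must supply this
    end
    class Snake < Animal
      # no talk here: compilation fails with something like
      # "abstract `def Animal#talk()` must be implemented by Snake"
    end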

> I'm asking about the errors that the language prevents, not the community's conventions.

I haven't actually tried it, definitely something I'll check out another time. I understand your concern though!


> The abstract class defines which methods every class that inherits from it must define by using abstract methods.

In their example, the type checker can infer that Animal has a `talk` method, even if it's never explicitly defined:

    abstract class Animal
      # no talk method here!
    end
    
    class Dog < Animal
      def talk
        "Woof!"
      end
    end
    
    class Cat < Animal
      def talk
        "Miau"
      end
    end
    
    class Person
      getter pet
    
      def initialize(@name : String, @pet : Animal)
      end
    end
    
    john = Person.new "John", Dog.new
In their own words, “Now the code compiles:”

    john.pet.talk #=> "Woof!"
Now, what happens if, in a separate module, I define a Snake subclass of Animal, without a talk method? What happens in this corner case isn't documented anywhere.

Type system design is very serious business, and can't be done by mindless trial and error. When a type system has a safety hole, patching it is pretty much guaranteed to break other people's code.


I can't seem to reply to your reply to this, but the error message has nothing to do with where a class is defined.

You can't compile just a part of your application in Crystal; there are no linked Crystal libraries. You compile your whole app, with all class definitions. The compiler then has all the type information it needs to know whether a method is missing in a subclass when that method is used through a parent class.


Well, that's even worse. Whole-program compilation has positive aspects to it, like enabling more aggressive optimizations, but it should be strictly opt-in.

For future reference: The easiest way to reply to a message that doesn't have a “reply” link below, is to click on the “X minutes ago” link above.


Thank you! I didn't know that easy way to reply.

The compiler does some incremental compilation for the generated object files, so compile times are kept relatively small. Other than that, I don't think it makes a big difference for a developer except saving compile times. On the other hand, in this way you can't have a compiled library conflict so you have to "clean and rebuild" when this happens.

For example, in Ruby there's no such thing as pre-compiling a gem, and every time you run an app the whole app is "analyzed" (parsed and converted to VM instructions), so you can think of Crystal as working the same way (except that it's compiled to native code).


Incremental compilation isn't a good alternative to actually separate compilation. If I implement a module Foo that depends on another module Bar, the only information I need is the interface Bar exposes, so I shouldn't have to wait until Bar is actually implemented to begin implementing Foo.


It will simply stop compiling the code and it will say "undefined method 'talk' for Snake". You will get a similar error if `talk` is defined as an abstract method in the base abstract class. So abstract methods are just a standard way to document this and to improve error messages. There's no safety hole here.


So the behavior of a subclass of an abstract class depends on whether both classes are defined in the same module or in different ones:

(0) If Snake is in the same module as Animal, the error is that “john.pet.talk” is a call to a nonexistent method.

(1) If Snake is in a different module from Animal, the error is that Snake doesn't have a talk method.

This isn't nice.


Can someone comment on the differences between Nim (the old Nimrod) and Crystal? Which one do you prefer, and why? Thanks.


Apart from the different syntax:

Pluses of Nim:

* a powerful compiler that can produce C, C++, JS and Objective-C code

* the GC can be completely removed to suit the program

* parallelism is supported via threading

Pluses of Crystal:

* union types let you mix types in almost every data structure, leaving you wondering whether the language is really statically typed (see the sketch after this list)

* it's so similar to Ruby that porting a 100-line library (with no fancy metaprogramming) to Crystal is often a matter of a few hours

* green threads suit the HTTP request/response cycle very nicely (as in Go and Erlang), where OS threads/processes would consume more memory/CPU

What Crystal still lacks is parallelism, but the core team is working on that.

That said, both are modern, fast, elegant languages with a good standard library and a vibrant community.
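
A quick sketch of the union-type point above (nothing beyond the core language, just the shape of it):

    mixed = [1, "two", 3.5]   # inferred as Array(Int32 | String | Float64)
    mixed.each do |item|
      case item
      when Int32   then puts item + 1    # item is narrowed to Int32 in this branch
      when String  then puts item.upcase
      when Float64 then puts item.round
      end
    end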


Googling: http://forum.nim-lang.org/t/1246

Python syntax with macros => Nim; Ruby syntax => Crystal


I think the macro system looks really interesting. The main ideas seem to be that metaprogramming can be done anywhere and that everything is quoted by default in macros. What are these kinds of macros called, and where do they come from?


Crystal on Rails. I want this.


This may be the closest alternative right now: https://github.com/ysbaddaden/frost


Kemal is the most actively developed. https://github.com/sdogruyol/kemal


Stick with Ruby then: Rails is great for building a monolithic application easily, while Crystal suits a different use (microservices?). Also, the lack of Ruby's metaprogramming facilities (`send` above all) would make rewriting Rails in Crystal really challenging (pointless?).


what would you rename rails to at that point? :)


Great stuff. I love Ruby. Any idea when this will be production worthy, or at least stable enough to trust on internal projects? Will the language change much going forward?


Actually it's been stable for the last 6 months. No more major syntax changes are expected going forward (apart from parallelism and GC, which will be handled at the compiler level).


They should really mention that on the website for crystal. I think a lot of people are scared off by seeing the words alpha and breaking changes. Changing those to "beta" and "stable" on the site would help a lot IMO. At the very least calling it stable would be huge.


What advantages does Crystal have compared to other modern languages that aim to compete with C, for example D and Rust?


It's what Ruby could have been if the community had bothered with performance.


I don't think it's really trying to compete with C, it's a much higher level language, just one that happens to be fast as well.


Its similarity with Ruby? It doesn't _feel_ like I would have to learn completely new paradigms


It's somewhat similar, but in Crystal you need to be explicit with types, and you'll likely miss 'send'.

Also Crystal has macros


Neat, I will check this out. Although, if I'm honest, I really wish there were a "Crystal for Python".


Nim (http://nim-lang.org/) is really as close as you're going to find.


For anyone familiar with Crystal: are there any plans to add pattern matching to Crystal?


Good question about pattern matching in Crystal. I found this reply by asterite (the creator of Crystal) to a similar question that is worth reading:

https://crystal-lang.org/2014/06/19/crystal-0.1.0-released.h...

The question and the reply are in the comments. I'm sorry I can't link to it directly, but it's the seventh comment from the top.


> Have you ever dreamed of a programming language as beautiful as Ruby but as fast as C?

Yeah, I discovered Common Lisp back in 2006 and have been using it ever since …

I discovered Go back in 2009 and have been using it ever since, too.

What does Crystal get me that these two don't?


Ruby-like syntax that I miss when using Go :|


Does it have any benefit in particular over Lua/LuaJIT?


Off the top of my head, I would say its similarity to Ruby means that Ruby devs would have an easier time learning the language than learning an entirely new one. Crystal also seems to have an extensive standard lib, which Lua does not. (That's not a criticism of Lua, just an observation; I know LuaRocks has made the lack of a large stdlib much less of a pain point for Lua.)


Probably the Ruby syntax (built-in regular expressions) and whatever else. I'm also not sure about how easily it can be embedded.
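
For instance, the Ruby-style regex literals and match shorthands carry over (a sketch from memory, so don't hold me to the `$1` details):

    if "2016-08-04" =~ /(\d{4})-(\d{2})-(\d{2})/
      puts $1   # => "2016", Ruby-style match shorthands work after a successful =~
    end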


Someday, someone should write a Ruby VM in Crystal.


Crystal isn't fit to carry Nim's jockstrap as far as I'm concerned. Ruby's syntax is verbose and illogical compared to Python's. Plus Crystal has a much more restrictive license.


Stopped reading when I saw the `end` keyword... the most annoying part of Ruby.

Edit: I'm getting hella downvoted but I'm leaving this here. Ruby fanboys can't silence me!!! ;)


The syntax of a language is about the least interesting thing about them. I don't care about curly braces, end keywords or semicolons as long as it is consistent (and fairly easy to parse, so there can be good tooling). The interesting parts are in the semantics, type systems and runtime features.

Choosing a programming language based on the syntax is like choosing your significant other based on looks alone. You're going to be spending a lot of time together, what's inside is what counts.


I've heard this said numerous times, so I'm going to disagree on the record. I think syntax of a programming language is a very important characteristic.

A language with a nice syntax is easier to learn, easier to read and understand, and delightful to write.

Crystal's syntax is a great differentiator between it and its statically typed, garbage-collecting competition.


Exactly! I guess people used to the Ruby syntax are just desensitized or don't know how much of a joy programming can be without having to type 'end' everywhere.


The end keyword is just a minor detail and can be inserted by a smart editor anyway.

Things like list comprehensions and do notation are examples of nice syntactic sugar.

Significant whitespace like Python's can be nice to write and read, but it's hard(er) to write a parser for, and that's why Python's tooling hasn't been great in the past.

That said, Ruby is even harder to parse and I don't like it aesthetically either.


I think the same could be said about the joy of not having to have perfect whitespace.


Or judging the performance of a sports car by its colour.


We (think we) have good reasons for that: https://github.com/crystal-lang/crystal/wiki/FAQ#why-isnt-th...


I wonder how Lua can get by, being embedded in literally every game that supports scripting, in databases and many other tools.

It's a personal preference of course.


Yes, it's my preference. I also don't like reading/writing Lua. Python doesn't need end, neither does Crystal.


Python has "significant white spaces", Crystal doesn't.


You don't have to use do; end or do |a,b|; end in Ruby; you can use { } and {|a,b| }, even for multiline. There are options.
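
Same options exist in Crystal, for what it's worth (a trivial sketch):

    [1, 2, 3].map { |x| x * 2 }   # => [2, 4, 6], brace blocks work in Crystal too
    [1, 2, 3].each do |x|
      puts x
    end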


I seldom whine about blog posts, but this one clearly qualifies as click-bait. Not because of its otherwise good content, but because if it takes me a Google search instead of having, within the very first sentences, a link to the obvious target,

https://crystal-lang.org/

there is something f~*#ing wrong with the article.


If you require the use of a GC, you can never be as "fast as C" in the general case, since there will always be specific cases where manual memory management will do something more efficient than what an automatic GC would have done --- just like a car with an automatic transmission can never be quite as efficient as a car with a manual transmission used competently.

If you want to talk about efficient GCed languages, you have many choices, most of which have tooling and mindshare advantages that outweigh whatever advantages your language offers.

Really, GCed languages are commodities these days. A lot of people have put a lot of work into the fundamental building blocks, and now people are just combining them in various ways.



