On Rust and Nim (andreaferretti.github.io)
133 points by _qc3o on Feb 22, 2015 | hide | past | favorite | 152 comments



Pretty fair commentary.

Rust certainly isn't one of those languages where you can just pick it up, play with it for a day implementing an algorithm to get a feel for it, and learn 'the complicated stuff' later.

Other languages let you get away with that quick-start style; you can write a lot of Python before you need to write a plugin, and a lot of C# before you start using unsafe code, etc.

Rust doesn't afford you that luxury.

Lifetimes, mutability and single ownership are BAM, right in your face from the start.

It probably puts a few people off... but hey, you know the analogy about tools and toolboxes.

Rust is for writing fast, secure, cross-platform code. There's nothing else out there that offers the same; it's not a case of use Rust or Nim or C++: Rust is literally the only language that offers these features right now.

You can certainly write code that happens to be secure, fast and cross-platform (e.g. in C++), and if you don't need those features (or don't care), you're almost certainly better off picking a different language (like Go or Nim) that is 'fast enough' and 'secure enough' and doesn't restrict you the way Rust does, or something far more productive (like Python or JavaScript) if all you need to do is smash out a product.

That's perfectly ok.

We don't need a language which is everything for everyone all at the same time.

Rust is very good at doing what it does; and it's the first time C++ has had a real challenger. I, for one, am really looking forward to the dynamics between the two crowds going forwards.


> Rust is for writing fast, secure, cross platform code. There's nothing else out there that offers the same.

That's a weird way to put it. There are a lot of languages that offer the same (weak) security protections Rust does, and a lot of cross-platform languages, and a lot of fast languages, and lots of combinations of those attributes.

What I think you mean to say is that there aren't a lot of combinations that don't have a GC runtime.


Based on our analysis of security bugs in large C++ codebases, I wouldn't characterize memory safety as "weak"—the vast majority of critical RCE bugs in C++ codebases are due to memory safety issues.


Most modern code isn't written in C/C++, and yet regardless of the mainstream language it's written in, vulnerability hunters still find game-over issues. Eliminating opportunities for memory corruption is an unalloyed good, but let's be candid about how much of the whole software security problem that solves.


Memory safety is a defense against certain classes of vulnerabilities, no more and no less than that. I've always been careful never to claim that memory safety eliminates all security vulnerabilities, or that people won't find game-over attacks against apps written in Rust. Still, I don't agree with the characterization of memory safety as a "weak" defense—it's a defense against what are far and away the most common classes of critical vulnerabilities that we see in C and C++ programs.

I agree with you that there's nothing special from a security point of view about Rust if you're, say, a Python, or Java programmer (though the non-security-related safety features—for instance, data race freedom—may be interesting). Whether Rust is a security advance for you really depends on your starting point and what you consider to be non-negotiable. If you're a Java programmer for whom memory safety is non-negotiable, Rust isn't a security advance, but could be a performance improvement. If you're a C++ programmer (like us in the browser space) for whom C++-level performance is non-negotiable, then Rust isn't going to be much of a performance improvement, but it is a security advance relative to what we had to work with before. Basically it's about eliminating the tradeoff between performance and a class of security problems—whether that's a security advance will depend on where you started from.


Thanks to C/C++, there's no end of grousing on the web about the dangers of a lack of memory safety. Coverage of the other security issues that can crop up in a language -- for example race conditions or buffer reuse -- seems to get short shrift.

Are you aware of a good writeup which covers all of these different classes of threats in an organized way? I'd love to see something that categorizes everything that we have to worry about at a PL level. Bonus points if it points to languages that solve the given problem or research toward doing so.


Which are the reasonably popular, fast ("like C") memory safe languages?


Virtually all of the languages that aren't C, C++, and Objective C have the same (for all intents and purposes) memory safety property as Rust, so you can just check out the Programming Language Benchmark Game site to answer that question.


Nim doesn't without the Boehm GC (it segfaults if you send a pointer to another thread). D doesn't without the garbage collector. Go doesn't with GOMAXPROCS > 1. These are often languages that are brought up in discussions like this...


The problem in these discussions is how much safety counts as "safe".

Rust's thread safety is always brought up as part of the whole memory safety model, and it is a very good goal to achieve.

However, if we were free of the typical C-style memory corruption brought upon the world of computing, a lot of applications would be a lot safer.

Even if those applications weren't thread safe.


Nim also doesn't have memory safety in general. See: https://news.ycombinator.com/item?id=9050999


Right, well. I'll be very impressed if Rust manages to perform as slowly as the GC'd languages on real world, idiomatic code, with safeties on (not C# unsafe blocks, although the C# compiler and JIT require plenty of coercing even then). I tried with the CLR and found it very hard to get the things Rust/LLVM do. But I'm not that experienced so impressing me isn't too hard.


I think the question is "what is the cost" of the more novel Rust safety features? In my experience, the cost is pretty high.

When you set Rust next to, e.g., Swift, Swift is phenomenally more productive and similarly performant. And it is also quite safe (relative to C) without having all of the Rust safety features that make me want to kill the borrow checker with a rusty spoon.

I really wish Rust walked back on the more esoteric forms of safety (like thread safety, which you can't really verify anyway, you just promise the compiler that it's safe) and it tried to drive a bargain like Swift.

Rust really isn't satisfied to adopt the safety features that are "fairly low effort", it wants to break new ground without really measuring developer cost (hard to do since it's not widely used). That's really my issue with it.


> When you set Rust next to, e.g., Swift, Swift is phenomenally more productive and similarly performant. And it is also quite safe (relative to C) without having all of the Rust safety features that make me want to kill the borrow checker with a rusty spoon.

Swift is totally garbage collected: every object is atomically reference counted. (This is necessary for compatibility with Objective-C.) So Swift is essentially in the same category as Go, Java, and most other languages as far as memory management is concerned.

Garbage collection is simply not a cost that we wanted to pay with Rust, in order to reach performance parity with C++.

> I really wish Rust walked back on the more esoteric forms of safety (like thread safety, which you can't really verify anyway, you just promise the compiler that it's safe) and it tried to drive a bargain like Swift.

I don't understand "you just promise the compiler that it's safe". Rust prevents you from accessing data in a non-thread-safe way without a mutex or atomics. This is in fact something that just fell out of the memory safety features above and didn't require any extra features: see [1] for an elaboration on this point.

> Rust really isn't satisfied to adopt the safety features that are "fairly low effort", it wants to break new ground without really measuring developer cost (hard to do since it's not widely used).

We have been measuring developer cost all along, by writing hundreds of thousands of lines of code in the language as we've been developing it: the Rust compiler is written in Rust, and we've been developing Servo as well.

[1]: http://smallcultfollowing.com/babysteps/blog/2013/06/11/on-t...


Swift is not garbage collected in the slightest. ARC is not garbage collection.

http://sealedabstract.com/wp-content/uploads/2013/05/Screen-...

Once upon a time Objective-C had a real garbage collector, but that temporary interlude is not, and never will be, supported from Swift.

Finally, Swift objects can be (as the compiler decides) laid out on the stack, so unless you have some super sweet stack garbage collection technology you'd like to share with us, they aren't garbage collected.

In any case, I don't see what garbage collection has to do with safety or the borrow checker.

> I don't understand "you just promise the compiler that it's safe"

Perhaps you'd like to read the Rust Book:

> When a type T implements Sync, it indicates to the compiler that something of this type has no possibility of introducing memory unsafety when used from multiple threads concurrently.

I think "you just promise the compiler that it's safe" is an accurate summary of that feature.

> We have been measuring developer cost all along, by writing hundreds of thousands of lines of code in the language as we've been developing it: the Rust compiler is written in Rust, and we've been developing Servo as well.

Would you be satisfied if you wrote a language that is productive only for language designers writing web browsers and compilers? Because that is all your measurements prove.

I'm telling you, as an intermediate Rust developer doing neither of those things, that I am not sure if I am more productive in Rust or C. Rust is certainly safer, but I am not convinced that safety is worth returning to C-like productivity. Performance is, but I can achieve good performance in Swift.

Rust assumes that I value performance and safety equally; i.e. that I am willing to give up productivity for either one. In reality I am only willing to give up productivity for performance. There are some low-hanging safety fruit that I want (like optionals) but the higher-cost fruit like multithread safety I don't want. I think those features are Bad (TM).


> Swift is not garbage collected in the slightest. ARC is not garbage collection.

Reference counting and tracing garbage collection are two ends of the spectrum of ways to handle dynamic allocation graphs (aka "garbage collection").

http://www.cs.virginia.edu/~cs415/reading/bacon-garbage.pdf

> In any case, I don't see what garbage collection has to do with safety or the borrow checker.

The borrow checker is what allows Rust to be safe and performant without garbage collection. It's literally the feature that gives Rust that power.

> I think "you just promise the compiler that it's safe" is an accurate summary of that feature.

A lot of programming (not just in Rust) is like this: the programmer promises that the library interface they expose does what they say it does, promises that any use of ctypes (in Python, for instance) won't corrupt the interpreter state, etc.

This isn't much different. The standard library will follow those guidelines (i.e. something in the standard library implements Sync if and only if it satisfies that definition), and we expect library/application authors to do the same. The rule exists to provide the programmer with assistance, not just to be annoying. It's your own fault if you shoot your own foot off by disobeying it (and hence throwing away the assistance the compiler can provide you).

In any case, Rust has the `unsafe` keyword, a structured mechanism for making promises to the compiler, allowing one to easily see the possible locations that could cause memory corruption/unsafety. Overriding the compiler's judgment that a type isn't thread-safe is no different: it requires `unsafe`.

> the higher-cost fruit like multithread safety I don't want. I think those features are Bad (TM).

FWIW, I think the multithreading safety is the really interesting and awesome part of Rust. Pretty much no other industry language attempts to tackle the problem of writing safe, low-level, high-performance, highly-parallel/highly-concurrent programs. Rust's system for this is quite general and flexible.


> Swift is not garbage collected in the slightest. ARC is not garbage collection.

Arguing about definitions is not going to be productive, but that slide does not present the definition of garbage collection used by most people who specialize in memory management. Reference counting is not tracing garbage collection, but it is garbage collection. See the excellent memorymanagement.org glossary [1], Wikipedia [2], or David Bacon's papers [3], etc. etc.

> Finally, Swift objects (can be, as compiler decides) laid out on the stack, so unless you have some super sweet stack garbage collection technology you'd like to share with us, they aren't garbage collected.

Escape analysis is a common technique to reduce allocations (used in Java and Go for example) but it falls down a lot in practice: it is typically unable to deal with higher-order functions, for example. It is hard to predict when it happens, and that's not a cost we wanted to pay. (Furthermore, in a highly-optimized generational garbage-collected system, reducing allocations doesn't help performance very much, because bump allocation in the nursery is so fast: one of the downsides of Swift's system is that allocation is much slower than in a generational tracing system, so it has to rely on escape analysis to regain some of the performance loss. This was needed for compatibility with Objective-C though, so it's understandable.)

> In any case, I don't see what garbage collection has to do with safety or the borrow checker.

It's central to the borrow checker's existence (and the lifetime system in general). Memory safety without garbage collection is a central design goal of Rust, and the borrow checker is part of the means to achieve that. We could have just used garbage collection (like Swift and most other languages did), but then we would suffer a performance loss.

> I think "you just promise the compiler that it's safe" is an accurate summary of that feature.

No, that's not accurate. Sync is an unsafe trait [4]. That means that you cannot implement it without opting into the unsafe sublanguage of Rust by typing "unsafe".

The unsafe sublanguage is primarily used to implement features (such as vectors or smart pointers) that would otherwise have to be built in to the compiler. Since the compiler implementation itself is not proved correct in any production compiler, this doesn't result in a net loss of safety compared to any other language. You can turn off the unsafe sublanguage entirely via an attribute or a compiler switch, and if you do so then thread safety should be absolute: if you can violate thread safety, it's a compiler bug!

Many other languages, including Swift, have unsafe "escape hatches". That doesn't compromise their safety, because you can avoid them and turn them off entirely.

> Would you be satisfied if you wrote a language that is productive only for language designers writing web browsers and compilers? Because that is all your measurements prove.

While it would be flattering to assume that the Rust team wrote the 100,000 lines of Servo, we aren't that productive :) Rather, much—perhaps most at this point—of Servo has been written by people who have never touched a line of code in rustc. In fact, a lot of it has been written by people who have never done systems programming before! (It's not just Servo: the authors of Skylight, for example, who use Rust in production, were not systems programmers or compiler hackers before coming to Rust.)

> I'm telling you, as an intermediate Rust developer doing neither of those things, that I am not sure if I am more productive in Rust or C. Rust is certainly safer, but I am not convinced that safety is worth returning to C-like productivity. Performance is, but I can achieve good performance in Swift.

> Rust assumes that I value performance and safety equally; e.g. that I am willing to give up productivity for either one. In reality I am only willing to give up productivity for performance.

Swift's approach—garbage collection via pervasive atomic reference counting—has significant performance costs over that of Rust. See Hans Boehm's slides [5]: Swift's approach is equivalent to "Boost thread safe", while Rust's approach is "C expl. free". (Note that those slides are old and do not take into account modern scalable mallocs like tcmalloc and jemalloc, the latter of which Rust uses: since these slides were published, the performance of thread-safe malloc has gotten much closer to thread-unsafe malloc, while the cost of atomic reference counting has stayed unchanged except by advances in atomic instruction performance at the CPU level.)

> There are some low-hanging safety fruit that I want (like optionals) but the higher-cost fruit like multithread safety I don't want. I think those features are Bad (TM).

The thread safety features fell out of the memory-safety-without-GC features. It's an added bonus: we could remove it and switch to thread safety for all objects, but there's no reason to, because opting into thread safety only when you need it is a massive performance gain. So again, the thread safety just boils down to memory safety without garbage collection. Swift opted into garbage collection, which is a totally defensible choice given their constraints, but let's be candid about the tradeoffs: Swift is not in Rust's category at all.

[1]: http://www.memorymanagement.org/glossary/g.html#term-garbage...

[2]: http://en.wikipedia.org/wiki/Garbage_collection_%28computer_...

[3]: http://researcher.watson.ibm.com/researcher/files/us-bacon/B...

[4]: http://doc.rust-lang.org/std/marker/trait.Sync.html

[5]: http://hboehm.info/gc/nonmoving/html/slide_11.html


Well, Swift uses reference-counting. There aren't GC pauses, for example.


Not to turn this into an argument about definitions, but reference counting is a form of garbage collection [1]—tracing garbage collection is the one that has pauses (but of course has many advantages over reference counting).

[1]: http://www.memorymanagement.org/glossary/g.html#term-garbage...


I know it's a form of garbage collection, yes. My point is that it lacks some of the issues that other types of collectors have.


It's not just a question of definitions: I wouldn't characterize reference counting as "striking a bargain" at all. RC is just a form of garbage collection, and in its atomic form (like in Swift) it has serious downsides relative to tracing GC. Reference counting helps latency (important in mobile, which is why Apple's choice is defensible) but it pays enormous costs in throughput when compared with tracing (not to mention the problem of cycles), so much so that tracing is usually considered the superior approach unless you have special requirements. Rust's approach of manual memory management is designed to eliminate the tradeoff by allowing prompt reclamation without all the overhead of managing reference counts, which is why it's in a separate category entirely.


Fast, restrictive and secure have been done before - see ATS, Cyclone, etc. Almost everybody ignored those trailblazing efforts, probably because those languages are very intimidating.


> Rust is very good at doing what it does; and it's the first time C++ has had a real challenger.

Ada and Modula-3 were there first, they just weren't adopted by OS vendors at large.


On the other hand, Rust does have a few more guarantees than Ada or Modula-3: memory safety without a GC while deallocating memory, data-race safety while sharing memory, stricter (as far as I can tell) aliasing semantics, etc.


While true, had Ada or Modula-3 become widespread instead of C++, we would be discussing logical errors nowadays, not pointer misuse.


Additionally, their categorizations of "fast", "secure" and "cross-platform" are far too general.

I don't even think Rust officially supports that many platforms yet. It's strictly OS X/Windows/Linux, and the latter depends on glibc unless you're willing to throw away a ton of the standard library and third-party crates to start from scratch.


It's true, Mac/Windows/Linux is first tier, but we also test Android in CI, and possibly iOS? And there are a few BSD hackers who keep things reasonably up to date. We're investigating a way to allow community-run CI servers for platforms we don't officially support.

Being based on LLVM, we should be able to support a wide variety of things, though of course there are C compilers for all sorts of exotic platforms.


iOS is supported but unfortunately is tested "manually"; for example, the alpha2 snapshot is broken.

A version that builds with iOS support can be found at https://github.com/vhbit/rust (pre-built binaries on the releases page)


I too have suffered Rust's religiosity on the floating point issue.

In Python, given an array xs = [3.1, 1.2, 4.3, 2.2], I can write

    xs.sort()
and get [1.2, 2.2, 3.1, 4.3]

In Haskell

    sort xs
In Swift

    sort(&xs)
In Rust you have to spew this monstrosity

    xs.sort_by(|a, b| a.partial_cmp(b).unwrap_or(Less))
The Rust position appears to be that sorting an array of floats is unreasonable and so you must be "punished" by not being allowed to use the built-in .sort() function.


Whenever I've seen people complaining about Rust's religiosity, it's mostly because other languages trained them to be careless about certain things.

Rust cares a great deal about PartialEq and Eq because otherwise you get things like non-deterministic NaN sorting, or the run-time errors demonstrated by previous posts.

Rust also cares about doing manual memory management safely, but since you are arriving either from C/C++, which cares little about doing it safely, or from a GC language, which cares little about memory management, you find it obnoxious.

Hell, even TFA mentions how hard it is to use Hash; I'm pretty sure that's because Rust is aiming to actually make hashing work correctly rather than letting you just randomly compromise yourself.


Perhaps in some cases, but how does that apply here?

People won't stop needing to sort arrays just because the syntax is made heavier and more obscure.

It's fine to punish people for coding in the wrong way, but here I have to accomplish a task (sorting the array) and so the punishment serves no purpose at all except for making my experience worse.


> It's fine to punish people for coding in the wrong way, but here I have to accomplish a task (sorting the array) and so the punishment serves no purpose at all except for making my experience worse.
Then the Rust way is fine. It's merely 'punishing' you for not thinking things through at compile time (namely, whether floats are sortable), instead of at run time (dependent on input). Maybe you prefer to fail dynamically, and that's fine, but that's a preference, and the goals of Rust do not align with it.

If you find yourself relying on it often, why not write a macro? So:

       xs.sort_by(|a, b| a.partial_cmp(b).unwrap_or(Less))
becomes:

      sort!(xs)
Also, honestly, if this is a very common use case, bring it up on the Rust forums; maybe people will add a macro. They added a macro for generating a vector of N elements.


Why a macro? Why not just define that comparator as a function?

    use std::cmp::Ordering;
    use std::num::Float; // where the Float trait lived at the time

    fn value_nans_last<T: Float>(a: &T, b: &T) -> Ordering {
      match (a, b) {
        (x, y) if x.is_nan() && y.is_nan() => Ordering::Equal,
        (x, _) if x.is_nan() => Ordering::Greater,
        (_, y) if y.is_nan() => Ordering::Less,
        (_, _) => a.partial_cmp(b).unwrap()
      }
    }

    xs.sort_by(value_nans_last);


I haven't done sorting, so I have no knowledge of the sort interface in Rust, but maybe a macro would work across different sortable collections?


There's no need to use a macro: you can abstract over different kinds of sortable collections using a trait. (That said, there's no real reason to generalize IMO: by far the most common case is sorting slices.)


> If you find yourself relying on it often, why not write a macro?

Why not a function? I think people sometimes turn to macros in Rust because of the necessity of thinking about the semantics of function calls. Should I use references or values? What are the trade-offs?

The amount of thinking you have to do to perform a good "extract method" refactoring is one of my least favorite things about the language, but falls out of some of my favorite things about it. All in all, I think it's worth the trade.

edit: Expanded a bit.


I'm curious why you characterize it as a "punishment"? Do you actually think that is why anyone designed the system this way, as some kind of aversion therapy just to get you to avoid using floating point numbers?

All Rust is doing here is implementing IEEE 754 floating point semantics as specified, and implemented in lower level hardware. Part of the IEEE 754 specification that you need to deal with if you want to use floating point numbers is NaN, which represents not an actual value but the absence of a value, an indication that your computation did something that could not be represented. Because NaN is one of the possible values of a floating point number, a typesafe interface for comparing floating point numbers must, by definition, be a partial function; and as a partial function, it cannot be relied upon for implementing sort.

The Rust way of doing this does not punish you, it just ensures that you actually think about, and deal with, edge cases like this. While that can seem cumbersome in a small, one-liner example like the above, or can seem restrictive when you're writing a simple personal project and you know that you will never encounter a NaN and so you just want to sort the values without thinking about it, it can be quite valuable when programming in the large; when you are working on a program too large and complex for any one person to know and reason about the whole thing, type safety allows you to encode certain constraints in the type system that ensure that you don't make mistakes.

For instance, if you write code that depends on sorting a list of floating point values in Python, like in your example, and write all of your unit tests and design using non-NaN floating point values, then use a third party library that winds up producing a NaN value, you are likely to be quite surprised by the outcome:

  >>> nan = float('nan')
  >>> xs = [1, nan, 0, 3, 5, 2]
  >>> sorted(xs)
  [1, nan, 0, 2, 3, 5]
Now, deep in the middle of production code, that result might not be so apparent. Most of the values that you care about are ordered correctly; but eventually you'll hit the fact that the 1 is sorted before the 0, so you'll have some strange, hard to reproduce bug, that depends on the precise ordering of the original array.

What Rust is doing is not punishing you, but instead just making you make that decision about what to do about such a case up-front, before you accrue that technical debt that comes to bite you later on.

There are several possible ways you could deal with it; one is the one you mentioned, where you just define some way of comparing NaN so that you now have a total order. Another would be to panic any time you try to do an undefined operation like comparison on a NaN. Or you could work with a type that is restricted to non-NaN values, and deal with the issue only at the boundary of components which convert between arbitrary floating point values and your restricted subset (and any operations that may produce NaN values).

In order to not have to write those cumbersome sort expressions by hand every time, if you need to work with floating point numbers with one of the non-standard semantics described above, you could define any of the above behaviors by creating a newtype that wraps floats but provides the semantics you want. Since they are static types, there will be no overhead on the values, they will just be represented as floats; you may have some overhead on your checked operations that wrap the underlying float operations, but that's the price you pay for going with semantics which are not the standardized IEEE 754 semantics as implemented by the hardware. In a larger project, if you need such a type, it's not all that much work to just define that type once with the semantics that you want, and then use it everywhere rather than using one of the native floating point types.

So, what Rust is providing is a type-safe, low-overhead implementation of IEEE 754 floats, without providing certain conveniences that would make your life easier when dealing with a subset of them but cause problems when working on the full range of values.

Can it be convenient to accrue technical debt in order to get things done quickly? Sure. Shell scripts are a classic example; almost every non-trivial shell script will have some kind of quoting bug, delimiter bug, confusion between arguments and flags if an argument value ever contains a "-", or the like. But because they are familiar and allow people to get things done quickly, they can be really useful for little one-off hacks, especially when you're working with data that you know is simple enough not to hit one of those edge cases, like filenames where you know that none of them contain spaces.

However, you need to be really careful about that sort of thing. That kind of quick and loose reasoning can quickly come to bite you if it gets deployed in production in an uncontrolled or even hostile environment. All of a sudden, the things you thought could never happen will happen. I've seen a seemingly innocuous shell script for cleaning up a few particular types of files turn into an "rm -rf *" due to a bug in handling of spaces in filenames (and yes, an actual customer lost actual data due to this bug).

So, is Rust appropriate for that kind of fast-and-loose exploratory programming that the shell or dynamic languages like Python allow you to do? No. If I were working with known inputs, interactively, where I could easily tell that I didn't have NaNs and could check the outputs to make sure they were sane, I would choose Python and numpy, or Julia, or something of the sort that was more appropriate for rapid and loose prototyping.

But for software that will be deployed in the wild, where I need to write modules that will work with values provided by other modules that I don't control, or the like, making you think about this kind of thing up-front can help you avoid having weird, obscure, hard to debug problems, or even security vulnerabilities, down the line.


Bob needs to sort an array of floats.

Bob tries xs.sort()

Bob gets an error message.

Bob googles "how to sort an array of floats in Rust".

Bob pastes in

    xs.sort_by(|a, b| a.partial_cmp(b).unwrap_or(Less))
Bob continues on his merry way.

No safety has been added, no technical debt has been avoided. It's not any less "quick and loose".

The need to sort arrays of floats doesn't disappear simply because the Rust designers will it to. The code will still exist, but it will be longer and less maintainable. This is what I mean by punishment.

If anything, Rust has given you a false sense of security. The modules and other code you work with will still be handling NaNs incorrectly.

My preference, all things considered, would be a .sort() that pushes the NaNs to the front or back (but is slightly slower), and a .sort_unsafe() that assumes no NaNs but is faster.


I'm still reeling from how cynical and dismissive this comment is. The Rust developers don't like IEEE 754 any more than you do, but it doesn't change the fact that IEEE 754 is what hardware implements. I encourage you to take your complaints up with hardware vendors and the IEEE, as well as with the popular programming culture of blindly pasting SO answers into your programs in frantic attempts to get them to compile at any cost. As for me, I very much appreciate that Rust is conscientious enough to throw up a red flag and force me to realize that this seemingly-simple task is actually quite complex, rather than implicitly imposing a leaky abstraction.


Well, hopefully the first answer will be on Stack Overflow, which will address this topic :P The Rust community has been hard at work answering a lot of Rust questions. I don't think I've seen many go unanswered so far.

Well, when Bob found sort_by and a weird partial_cmp, he should have paused there and looked at the docs if he understood nothing. In the case of Python, he is right to blame Python for silently doing stuff for him that he doesn't like.

It's like a speed bump. If you go fast over a speed bump, ignoring it, you suffer the consequences. It's different from a dark unlit road that just has a sign THIS WAY.


> Or you could work with a type that is restricted to non-NaN values, and deal with the issue only at the boundary of components which convert between arbitrary floating point values and your restricted subset (and any operations that may produce NaN values).

How often do NaNs appear in practice? I think it makes perfect sense to handle IEEE 754 floats like this, but maybe Rust should follow your suggestion here and provide a new totally-ordered float. Maybe `f64` should _be_ this totally-ordered float and IEEE 754 could be imported if you need to use that one?


That would likely make operations on the default float slower than they need to be. After all, the CPU is probably implementing IEEE 754: How do you handle it when a NaN bubbles up from below?


When you write programs for robots, you can't afford such "maybe" things. I think Rust will be perfect for robots.


And what should be the result of a divide by 0 in your magical totally ordered float type?
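For what it's worth, IEEE 754 already answers this question, and Rust keeps those answers (float division by zero is defined behaviour, not UB), which is part of what makes a totally-ordered default float awkward:

```rust
fn main() {
    // Under IEEE 754 (which is what the hardware implements):
    assert_eq!(1.0_f64 / 0.0, f64::INFINITY);      // finite / 0 -> infinity
    assert_eq!(-1.0_f64 / 0.0, f64::NEG_INFINITY); // sign is preserved
    assert!((0.0_f64 / 0.0).is_nan());             // 0 / 0 -> NaN
}
```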


You are cool. I'd like to read your twitter or blog (link please). I don't see here any personal messages to ask it another way. I just respect professionals with knowledge.


I don't know much Rust. But your comment just made me appreciate Rust a lot more!

To me, this signals that they value long-term safety over short-term convenience. Any other choice, IMO and IME, is short-sighted.


Let's see, after you try a big project in Rust, if you come back with the same appreciation.

Most of the time I think it's an overburden. I know people like Haskellers like to suffer and are masochistic coders, and may like this, but I don't.. I already have to deal with C++, and Rust did achieve the impossible.. It's even more over-engineered than C++, and it's not even faster


IME, us "masochists" suffer much less in the end. We "suffer" a few seconds here and there when we get a compiler warning, forcing us to think edge cases through.

Then, later, we suffer much less through runtime debugging, QA, and get a lot less calls at 2AM.


> It's even more over-engineered than C++,

Can you explain more about what you mean by 'over-engineered' here?

> and its not even faster

We are faster sometimes, and we haven't even put time into optimizing things. We're also slower sometimes.


I think you should replace 'time' with 'a lot of time'. I've seen several commits that optimize things.


You mean a large project like Servo, or large project like Rust compiler itself?


What is the expected result if you have NaN in your list?


In python at least, it's a bit funky:

    >>> sorted(map(float, ['1', '2', 'nan', '4', '3']))
    [1.0, 2.0, nan, 3.0, 4.0]
    >>> sorted(map(float, ['1', '5', '2', 'nan', '4', '3']))
    [1.0, 2.0, 3.0, 4.0, 5.0, nan]


Ruby has it about right (imho):

  > [1.0,2.0,Float::NAN].sort
  ArgumentError: comparison of Float with Float failed


That's definitely the right behavior for Ruby—it catches a contract violation and fails at runtime, which is idiomatic. What's nice about Rust's approach is that it catches the possibility of that contract violation at compile time, and forces you to decide what to do about it before the code ever runs. The right thing to do in Ruby would be to catch that exception and handle it in some fashion, but there is no indication at the time you're writing the code that the possibility exists, so you're unlikely to handle it unless you have a strong awareness of the issue with floats and NaN. Rust encodes that awareness into the language itself, which actually limits the expertise you need to write good code.


Rust's way is more flexible though: You can choose how to treat NaNs, in Ruby you have to fail.


in Ruby you have to fail

Oh, really? ;)

  > [1.0,2.0,Float::NAN].sort
  ArgumentError: comparison of Float with Float failed

  > # let's make NaN sortable
  > Float::NAN.class.send(:define_method, '<=>') { |x| -1 }

  > [1.0,2.0,Float::NAN].sort
  => [1.0, 2.0, NaN]


That's just horrible. You just changed behavior globally.


I didn't say it's pretty.

Merely chose that example to point out the absurdity of challenging Ruby on the grounds of flexibility (of all things).

Obviously, in a real program you'd rather write a custom sort-comparator, use a wrapper-class, or monkey-patch only the specific NaN instances that you want to change the behaviour of.


Or use refinements.


No, it's beautiful. Being a Python programmer it took me a long time to appreciate the power to do things like this. Learning Elisp, Smalltalk, Io and JavaScript (well) certainly helped.

Also, of course you can scope such a change however you want, for example to a single block (and with threadlocals it's going to be mostly safe).


He was just demonstrating that it is possible. You can in fact alias the method, do the sorting, and then restore the previous functionality of throwing errors. Plus, someone already mentioned that you can also use refinements.


Right, and the Rust equivalent would be to define a newtype that wraps your floating point type and defines the ordering semantics you want; in Ruby, you have changed the behavior for everything that uses Floats (including other libraries that may depend on this behavior), while in Rust you can use your newtype, other libraries can use standard floating point behavior, and you won't have any confusion about who wants which semantics.


and the Rust equivalent would be to define a newtype

Which you can do in Ruby as well, as I pointed out just two comments below the one you're replying to.

The real advantage of Rust here was imho best explained by sanderjd; Rust can perform this check at compile time whereas in Ruby it's a runtime exception.


I don't have NaNs in my list. That's an invariant I would be happy to express in the type system.


Then you need another type. A float has NaNs and Rust knows it and prevents you from possibly shooting yourself in the foot.


A 'newtype wrapper' has your back in that situation, which lets you do exactly that.


Yep, and that type can have a total order and work with the `sort` method with no further ceremony. That actually might be a nice type to have in the standard library. It seems like it would be widely useful, but I'm not sure where it would fit in on cargo.


I had a crack at this. This is about the third Rust program i've ever written, so it's probably chock full of noob mistakes:

    #![feature(std_misc)]
    mod natural {
      use std::num::Float;
      use std::iter::IntoIterator;
      use std::iter::FromIterator;
      use std::cmp::Ord;
      use std::cmp::Ordering;

      #[derive(PartialEq, PartialOrd, Debug)]
      pub struct Natural(f64);

      impl Natural {
        pub fn new(value: f64) -> Option<Natural> {
          match value {
            x if Float::is_nan(x) => None,
            _ => Some(Natural(value))
          }
        }
        pub fn new_all<'a, A: IntoIterator<Item=&'a f64>, B: FromIterator<Natural>>(values: A) -> B {
          let b: B = values.into_iter().map(|f| Natural::new(*f).unwrap()).collect();
          b
        }
      }

      impl Eq for Natural {
      }

      impl Ord for Natural {
        fn cmp(&self, other: &Self) -> Ordering {
          self.partial_cmp(other).unwrap()
        }
      }
    }

    use natural::Natural;

    fn main() {
      let fs = [3.0, 1.0, 1.0];
      let mut xs: Vec<Natural> = Natural::new_all(&fs);
      println!("before = {:?}", xs);
      xs.sort();
      println!("after = {:?}", xs);
    }
In particular, the assignment of the return value of new_all to a local is ugly, but i couldn't figure out how to please the type checker without it.


Neat! My version with some mostly superficial changes[0].

Note that we haven't actually removed the panic in the `Ord` implementation! Which is because we've eliminated what we believe is the source of the ordering uncertainty (the NaN), but the type system still doesn't know that.


Whoops, didn't post the link: http://goo.gl/7AZxa6


In Python you basically get undefined behavior when encountering NaNs while sorting; Rust lets you choose. This seems much more reasonable to me.


To be fair, you also have the choice of implementing your own comparison function in Python (and likely Haskell, Ruby, etc) if the position of NaN actually matters to your algorithm.


You're unlikely to know there's a problem and do so.

Rust is reminding you of this fact, which is very nice.


I find it amazing (in a good way) that Nim, a language that was made in obscurity by a handful of developers, managed to be compared to a high-visibility language backed by many organizations and people.


Rust is trying to do something far more complicated. Memory safety enforced by the compiler is a big deal, but it remains to be seen if it's practical. I am really looking forward to seeing if Rust can make it work; zero-cost memory safety would really be something amazing to have in a language.

Nim seems to be a nice, low-level, gc-ed and practical language. It doesn't do anything radical, it's pretty small and it does what's already there right. It could become big.

This kinda reminds of linux vs minix. Doesn't mean it will play out the same way, but still.

Rust feels a bit difficult to get into right now, but if they smooth over the impractical bits, it could become the systems language in 10 years. Nimrod feels like a faster version of python or ruby, something a lot of people would like to have.


Nimrod feels like a faster version of python or ruby, something a lot of people would like to have.

Hell yes!

Nim really looks like it might have the potential to become the "faster Ruby" (or faster Python) that many of us are waiting for.

For all the progress in academic (Rust, Haskell) and special purpose (Go, Dart) languages, a new iteration on the "general purpose workhorse" is more than overdue.


I hadn't made this connection before, but now i see it put like this, i'm interested in Nim.

The only language to have entered this "faster, statically typed, and generally less surprising Ruby" niche so far is Go. Other languages which are faster, safer, and saner than either Ruby or Go come with various showstopping problems: Java has too much baggage, Scala and Rust are too difficult, Clojure is too scary-looking, etc. Go, despite being fairly mediocre, combines some concrete advantages over Ruby with a very low barrier to entry.

Nim, though, looks like it should do this even better. It apparently has the same straightforwardness as Go, similar performance, but with even less verbosity, a more powerful (but not scary!) type system, and comprehensively more modern facilities.

I don't know if Nim has some equivalent to Goroutines, but i think Goroutines are overblown anyway. Go isn't really all that great for concurrency, and the people i know who are using Go aren't using it for concurrency.

I believe that the decisive fronts will be mindshare and tooling.

I have no idea how Nim can build mindshare on the same scale as Go; i don't know if the current level of grassroots interest can grow, or if it needs a corporate backer like Google, a celebrity figurehead like Rob Pike, or some technical hook like Goroutines. Perhaps someone will build something amazing in it, and become a poster child.

Tooling is clearer. Go has some simple, well-liked tooling in the box: go fmt for formatting, go vet for linting, go fix for version upgrades, and go get for dependency management. It also has a bit of a weird story around compiling and linking, but it all works in practice. For Nim to overtake Go, it will need an equally good or better story about all these things. Fortunately, this doesn't seem all that hard; the most important area is, IMHO, dependency management and building complex projects, and Go's tools are pretty poor here. go get is simple, but the lack of versioning is a huge hole. Maybe someone should just write a Gradle plugin for Nim?


One thing people never seem to mention is how dead simple it is to cross-compile Go code. For me to consider Nim or Rust, I need to know that I can do something as simple as GOARCH=arm GOOS=linux go build.


Cross-compilation looks pretty simple in Nim:

http://nim-lang.org/nimc.html#cross-compilation

Since Nim uses C as an intermediate representation, cross-compilation support should be as good as your C compiler's.

Rust's cross-compilation seems to not be that great at the moment. Although i believe that's because it hasn't been done yet, rather than having been done badly.


Rust's cross-compilation works, but it's not as easy as it could be. One issue is that you need a cross-compiled version of the standard library lying around.


Isn't that an issue with every compiled language? I remember always having to cross-compile newlib when I set up cross-compiled gcc environments.


It's true, but you can make this easier or harder, depending. We don't make it particularly easy. It's a matter of polish.


For Go you don't. I found it quite awesome when I could cross-compile something for my raspberry-pi from a Windows PC.


Does this mean they ship with the cross compiled libraries installed?


Cross compiling Nim is also dead simple. I even managed to compile for an unsupported Unix with very minimal knowledge of Nim.


On the other hand, if you run "an unsupported Unix" that implies you are a bit more tech-savvy (and willing to put in the effort) than most when it comes to these issues.


My impression is that Nim was in the right place at the right time and unintentionally piggybacked on the interest surrounding languages such as Rust and Go. This perception may be completely wrong. However, I do remember all these languages started gaining popularity within the span of several months. Before that Nim (then Nimrod) existed, but remained in obscurity.


I do remember a well-timed post on reddit/r/programming about Nimrod (as it was called back then), shortly after Go's release. It contained a lot of "Go is disappointing (to put it mildly), this looks so much better"-comments about the language. The backlash against Go was pretty huge.

I don't remember the guy behind Nim doing much trash-talking, which probably helped as well, especially in the long run.

EDIT: fixed derailed sentence.


The problem with Go is the hypocrisy of its community when it comes to the expression problem. Obviously Go isn't expressive at all, which makes it verbose when one tries to write abstractions with it.

A lot of devs just want a fast, type-safe, memory-safe language that doesn't need a hungry VM to run but is expressive enough that "scripters" feel at home. Why is it so hard to get a language that does that? IDK.


There are actually several, of common lineage: OCaml, F#, Haskell.


> > expressive enough so "scripters" feel at home

As much as I love all things Haskell, there is no way it fits the "scripters can use it" bill.


Doesn't F# work with .net and the CLR ?


It does run on .NET, yes.


Yes, F# runs on the CLR.


Exactly my feeling, and I also feel that sometimes we underestimate the ability and efficiency of small teams. I start to believe that a small core team (which is not to say few contributors) can have a larger impact than an organization.


certain core systems components are often developed by small teams or individuals even within large organizations. the .net GC was written/maintained by 1 dev for a long time (Patrick Dussud, later Maoni Stephens), the Windows thread scheduler was written and maintained by Dave Cutler over many releases, etc.

some development efforts are just really hard to scale out.


Language maturity takes time. When Go and Rust went public, there were already languages in the very same niche, aiming to be a C++ replacement, and developed in the open.


Go and Rust had the very same self-positioning as C++ replacement, but because of different visions of what C++ is for they are now in largely unrelated niches. It is hard to imagine, for example, that someone would choose Go to rewrite Webkit, or LLVM, or any C++ math/modelling library (for other reason than proving it's possible). Just as an idea of writing website business logic in Rust would make me scratch my head.


While I agree that writing backends in Rust may not be the best, Crates.io uses Rust as a backend for Ember, and it seems to be working out really well. I'm skeptical, but interested to see how it develops.


Then check out Iron.rs and Nickel.rs - web frameworks in Rust :) Yes, Rust is a more complicated language, but sometimes you are ready to pay this price just to get back the joy of programming. And at some point, bugs at runtime (because of types or memory safety or race conditions) become so annoying that you are ready to be thankful for any tool which can find them at compile time, even at the price of more verbose code. "Typing is not a bottleneck".


"Joy" -- I feel like I spend all my time bookkeeping -- which is about the least fun thing imaginable. The guarantees keep me interested, but just barely at this point.


If Rust had asynchronous I/O this would be less of a head scratcher.

Too bad that AIO and related, necessary primitives were forsaken for other priorities.


I think lots of programming languages are being created, and the vast majority of them fall into obscurity, but some have nice features and gain a bit of traction, which is what is happening right now with Nim. If you forgive me the comparison, I would say that Nim is to programming languages what Flappy Bird is to video games: a successful product built by a talented programmer in a place where lots of people try their own - of course, the analogy is not completely right, since the barrier to creating a programming language is much, much higher than to creating a small video game.


The killer feature of Nim is its amazing syntax. If you know Python, Nim will instantly feel very, very familiar.

Rust looks incredibly useful to do systems level programming, and guaranteed pointer safety without GC costs is amazing, but it takes me much longer to figure out exactly what's happening in the code.


Surely what is or isn't amazing syntax depends entirely on your preferences and personal experiences. I can't stand the Python syntax. Significant whitespace is an instant turnoff for me. For me, figuring out Rust is a lot easier than figuring out Nim, because that's what I'm used to.


I second this notion. The thing I hate about Python is that it relies on invisible characters, so two identical-looking pieces of code aren't.


This is something I thought before I actually used Python. In practice, two identical-looking pieces of code ARE. Yes, you can mix tabs and spaces, but not in the same block. The moment there is ambiguity, the interpreter errors out. So, in practice this is never a problem: your text editor should not be switching on you randomly, and the official style guide strongly asks you to use 4-space indentation. Simply follow the official guidelines and be able to configure your text editor, and you will never think about this again if you're actually using Python.


> Yes, you can mix tabs and spaces, but not in the same block.

Assuming of course all code contributors memorized the official style guidelines. And that they are responsive to such changes. I've had one similar change reverted three times during my uni project, within a span of days.


>Assuming of course all code contributors memorized the official style guidelines.

Or live in a post 1990 world, and have an editor that can automatically apply this for them.


I have never found significant whitespace to be a significant problem in practice.

Very few people use tabs. 4 spaces is the norm.

I just don't get the fuss surrounding this issue. It just works and the benefits far outweigh any cost - real or imagined.


The so-called benefits always boil down to "this suits my preferences" which is frankly not a compelling argument in any way.

Edit: autocorrect ran rampant


> The so-called benefits always boil down to "this suits my preferences"

The benefits of significant whitespace are:

1. Reduction of visual noise and improved readability. I find this uncontroversial. Your eye parses code by indentation, not by curly braces. I could remove the curlies from JavaScript and, if the indentation was correct, you'd be able to follow it.

2. Reduction of cognitive load. Instead of giving me two jobs to do: indent correctly and match braces - Python only gives me one of those jobs.

These are real benefits that I experience whenever I work with Python. There are also genuine costs*, but I find the balance to be very much in favour of significant whitespace.

* restrictions on possible choices of syntax being the only one I think actually affects me in anything other than a theoretical sense.


Eh, most languages and tools nowadays reduce the risk of indenting incorrectly or not matching braces to near zero, so in practice there is little cognitive load either way.

Only serious cognitive load for me writing code is whether to add new line at 80 or less chars. That's about it.


No, the so-called benefits boil down to the fact that, objectively, no professional Python programmer has ever spent more than 10 seconds a day fighting this, if ever.

This doesn't change the fact that if you consider it to be an aesthetic wart, then there's not much that can be done about it. But it does suggest that perhaps you should recalibrate your aesthetic senses towards something that brings so much more to the table than what it might detract.

To give you an analogy, it's like Java programmers complaining that lambda syntax is "hard to understand". Sure, it might be if you don't put in a little effort to learn it or work with it, but you're missing out.


> It just works and the benefits far outweigh any cost - real or imagined.

Both the costs and benefits are largely subjective, so whether this is true or not varies considerably from programmer to programmer. Hence why it is a perennial subject of holy wars in the community.


> Very few people use tabs. 4 spaces is the norm.

And if they're not using 4 spaces, they're not following one of the more important documents there is for coding in Python in a way that other people can easily read - PEP8.


Right. Before I even read the article my thought was "Nim better be easier or what's the point of the GC".


Looks like comparing apples and oranges. If Nim has a GC it would be more instructive to compare it with another garbage-collected systems language like OCaml.


Yes and no. In fact, Nim can be used without a GC. Actually, the GC of Nim is written in Nim, which certainly proves the point. Now, it is not really convenient to avoid the GC in Nim, and you certainly do not have the safety features of Rust, but it can be done when needed (and you are still writing in something more productive than C).


What's wrong with comparing a GC language with a non-GC language?


Wildly different goals, given that there should be some interesting reason to avoid GC nowadays.


How are they "wildly" different? I can see different, but "wildly", really?

GC is an implementation detail with some performance characteristics. Nim can turn its GC off. It can do a soft-realtime GC behavior where you limit its maximum time slice.


> Nim can turn its GC off.

At the expense of losing memory safety.


An interesting reason to avoid GC would be a belief that you could make a usable general-purpose language without GC.


Or having code that's callable from another GC-ed language like Ruby/Python/etc. Two GCs dancing around each other (e.g. Python calling Nim) is a recipe ripe for problems.


Mostly that we already know what the answer's going to be.


Okay, so I'm having one problem with nim. Disabling all unsigned arithmetic by-default. The logic is actually fairly sound (it's probably, generally, harder to overflow a signed than an unsigned), but nim doesn't compile to object code; it transpiles to C and then C compiles to object/machine code.

Edit: it's obviously much harder to overflow an unsigned than a signed; in the sentence above, I was thinking particularly of underflow (which is what the nim devs reference as their logic for disabling unsigned arithmetic by-default).

The problem here is that unsigned arithmetic, though much easier to hit underflow, is fully defined in C where signed {under,over}flow is UB. As a result, if you manage to hit this case in nim, you're now going to hit UB by-default. This seems crazy to me. Am I missing something?


While unsigned arithmetic is not "enabled by-default", a simple `import unsigned` and you have it.

If you want the language to handle overflows for you, you can enable runtime overflow checks for your whole code or any specific part of it.

Otherwise, if you opt for no runtime checks and release builds with all optimizations, you indeed go into the same undefined behaviour territory as in C, so you'd have to prevent overflows before they happen.


I understand that it is available simply; I was questioning the logic of having it disabled by-default. Having the ability to do run-time checks for {over,under}flow does seem to make this issue a little better but doesn't explain the logic of having the language prefer UB by-default.

Yes, C does this too (integers are signed by-default), but if I'm shopping for a language that abstracts C, I'm probably looking for improvements over C's defaults.


Keyle [dead]:

I love nim. I've been using it non-stop since I've learnt the ropes. The only thing I'd wish was better results when googling for things. "nim" is just a very common word it appears. The site itself is a wealth of knowledge. I can relate to the comment of "feeling it's too big". Sadly the doco is not newbie friendly for some part (hi there, async). I've had no issues with the compiler but be sure to always use the devel branch. Things move fast in nim.


Well, it was named "Nimrod" before, but there were other problems with that name, which prompted the rename. "Nim" was a good choice because it was the file ending already. Also, easier to google than C and Go at least. Usually "nim-lang" works fine.


I'm curious why the post was deleted. Can mods shed some light on this?


They are shadow banned.


What is shadow banned and why am I?


keyle, you'd have to take that up with the HN administration, I can't help you.


Why dead??


>>the whole language is verbose: compare these 10 lines (https://github.com/andreaferretti/kmeans/blob/935b8966d4fe0d...) with this single line (https://github.com/andreaferretti/kmeans/blob/master/nim/alg...)

The Nim code's more concise in this case but the Rust code can be more clearly written:

  impl Add for Point {
    type Output = Point;
  
    fn add(self, other: Point) -> Point {
      Point(self.0 + other.0, self.1 + other.1)
    }
  }
The "type Output = Point" line isn't boilerplate; it makes it possible to define the result of an addition as something other than a Point. (Off the top of my head I can't think of any use cases, but I'm happy with the capability personally.)


The last time this post appeared, I actually submitted a PR which did this, among other things: https://github.com/andreaferretti/kmeans/pull/3

The post wasn't really updated.


Hi, author of the post here. I did, in fact, update the post, mentioning your PR in the first paragraph.

I cannot really change the rest of the content: I am reporting the mail I sent, and that was it. Changing it after the fact would only add more confusion.


Oh hey! My bad. I didn't expect you to, really, I know that I don't. And when the parent mentioned it, I made an assumption. Sorry :/


For addition I can't either, but you could use it to overload multiplication of two (mathematical) vectors as a dot product:

  impl Mul for Vec2 {
    type Output = f64;

    fn mul(self, other: Vec2) -> f64 {
      self.0 * other.0 + self.1 * other.1
    }
  }
Although I must admit I'm not (yet) sure why you need to explicitly state the output and define it for the function. But I have only just started with Rust.


> Although I must admit I'm not (yet) sure why you need to explicitly state the output and define it for the function. But I have only just started with Rust.

I generally assume it's because it can't infer the associated type from the function signature, for now.


A simple example for addition would be adding two u64s and getting a bignum type.
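A toy version of that idea (the type names here are invented for illustration): addition whose Output is a wider type than the operands, so the sum can never overflow.

```rust
use std::ops::Add;

#[derive(Debug, PartialEq)]
struct Small(u8);

#[derive(Debug, PartialEq)]
struct Wide(u16);

impl Add for Small {
    // The associated type lets the sum live in a wider type than the
    // operands. Same shape as the u64 -> bignum example, scaled down.
    type Output = Wide;

    fn add(self, other: Small) -> Wide {
        Wide(self.0 as u16 + other.0 as u16)
    }
}

fn main() {
    // 200 + 100 doesn't fit in a u8, but the result type is u16-backed.
    assert_eq!(Small(200) + Small(100), Wide(300));
}
```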


That's a good commentary. As for Nim, do we really need another unsafe language with a part-time GC? If you can tolerate a GC, there are a lot of good language options.

I have some issues with Rust's verbosity, especially in the error handling area. I recently bugged the Rust crowd into changing their overly complicated replacement for C's "argc/argv" approach to command line parameters.[1] They listened. The Rust crowd is trying hard in a difficult area and succeeding.

In C and C++, lifetimes and mutability are in your face from the start. It's just that the compiler doesn't help you with them. For years, I've been saying that the three big problems with C/C++ are "How big is it? Who owns it? Who locks it?". C gives no help with any of those. C++ addresses "how big", but it's not airtight, and C++14 tries to address "Who owns it", but only for new code, and it's not airtight. Rust aggressively deals with all three problems.

I can understand the unhappiness with Rust, though. It's not a comfortable language for many modern programmers, especially ones who've never done any constrained form of engineering. For them, I'd suggest Go and Python. Go can do pretty much anything you need to do on a web server, and fast. That's why Google created it. So can Python, but more slowly. Both are memory safe (well, Go isn't in multi thread mode.)

Understanding the Rust mindset can be hard. That's a documentation problem. The "Rust for Dummies" book has not yet been written. The Rust tutorial glosses over the hard issues, and the Rust reference is written for people who are into language design, compiler design, and theory. Until recently the language design had so much churn that most of the Rust material on the web is out of date. In another year, there will probably be a decent Rust book.

With Rust, you need a plan for who's going to own what before you start. Then you just have to explain that plan to the borrow checker. For a complex, mutable data structure, such as a DOM or a GUI's collection of interconnected widgets, this may take design work and design documents. If you plow ahead without thinking through who owns what, including in the error cases, you'll get Rust compile time errors. You would have hit trouble in C or C++ too, but it would have been in the form of a memory leak, crash, or security hole. Now you have to fix it up front.

There are performance wins in this. Someone recently commented on HN that they'd discovered that typing into a dialog box produced some insane number (thousands) of allocation events per keystroke. That was partly because, at many places in the C++ code, the result of a c_str() call was being copied into a fresh string object. That's a consequence of being afraid to borrow a reference to a string you don't own, for fear of creating a bug. It's safer to make a copy. In Rust, the compiler will tell you whether you can borrow safely. If you need to make a copy, you can, but if you just take a reference and Rust allows it, the code is good. Big step forward.
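A small Rust sketch of the borrow-instead-of-copy pattern (function names are illustrative, not from the HN comment's codebase):

```rust
// Borrowing avoids the defensive copy: the compiler guarantees the
// reference stays valid for exactly as long as it is used.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    let owned = String::from("hello world");
    let w = first_word(&owned); // no allocation, just a borrow
    assert_eq!(w, "hello");
    // `owned` is still usable here; the borrow has ended.
    assert_eq!(owned.len(), 11);
}
```

In the C++ situation described above, the safe default was to copy; here the safe default is the zero-allocation borrow, because any misuse of the reference is a compile error rather than a latent crash.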

[1] https://github.com/rust-lang/rust/pull/21787#issuecomment-73...


> The Rust tutorial glosses over the hard issues,

I'm not disagreeing, this is more of a survey kind of question, but which issues are the hard ones, to you?


It doesn't go into how structures and ownership interact. It's possible to create a tree in which every element has a single owner. The relationship between structs, enums with data, recursive enums with data, and ownership isn't discussed. Yet this is fundamental to building any complex data structure.
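As a sketch of what that interaction looks like in practice, here is a single-ownership tree built from a recursive enum (a standard pattern, not something from the tutorial being criticized):

```rust
// A binary tree where every node is owned by exactly one parent.
// The recursive cases need a Box so the enum has a known size.
enum Tree {
    Leaf(i32),
    Node(Box<Tree>, Box<Tree>),
}

fn sum(t: &Tree) -> i32 {
    match t {
        Tree::Leaf(v) => *v,
        Tree::Node(l, r) => sum(l) + sum(r),
    }
}

fn main() {
    let t = Tree::Node(
        Box::new(Tree::Leaf(1)),
        Box::new(Tree::Node(Box::new(Tree::Leaf(2)), Box::new(Tree::Leaf(3)))),
    );
    assert_eq!(sum(&t), 6);
    // Dropping `t` frees the whole tree: single ownership, no leaks.
}
```

Structures with sharing or cycles (a DOM, a doubly linked list) can't be expressed this directly and need Rc/RefCell or arena indices, which is exactly the design discussion the tutorial skips.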


Awesome, thanks. I should be addressing that in the future.


I love nim. I've been using it non-stop since I learnt the ropes. The only thing I'd wish for is better results when googling for things; "nim" is just a very common word, it appears. The site itself is a wealth of knowledge. I can relate to the comment about it "feeling too big". Sadly the doco is not newbie friendly in some parts (hi there, async). I've had no issues with the compiler, but be sure to always use the devel branch. Things move fast in nim.


> but adding a map function on Vector would have not prevented this more sophisticated use.

This was questioned, long ago, by Todd Veldhuizen in his Parsimony Principle paper[1]. The long-and-short of which is: do it!

[1] http://arxiv.org/abs/0707.4166


Do you create a new repository on GitHub for everything you want to write?


I would be curious how Go would compare to these two on that example.



