Hacker News

>speed and correctness do not have to compete ... they can go together, hand-in-hand

Is Rust considered slow these days outside of compilation speed?




For the type of computation described in this article, Rust is considered slow. I recently stumbled across a site that tracks its progress in changing that: https://www.arewelearningyet.com


Have we duplicated X yet?

I wonder how this kind of thing will fare 100 years in the future, where X is drawn from a set of functionalities that is much, much larger. We would essentially get stuck on a single language.


Not fast in this sense, but definitely not slow.


Not fast in which sense? HPC? Or single- or multi-threaded processing on one machine?


In the sense that, just as Rust can do some amazing program transformations because it's quite demanding about memory safety, ATL purports that if the math is all tensors, then there's potential for program transformations based on automated reasoning about tensor formulae. No one needs that for FizzBuzz, but it sounds like something worth exploring for HPC.


Back in the day, the compile times were legendarily slow, and that reputation has stuck around, but it's no longer true today.


Rust code is not slow to compile today? I'm currently teaching myself some Rust; I have a project with some dependencies plus ~1500 lines of code in my directory, and my debug build with cargo takes 20 seconds every time I change just a line. Specs: Intel i7-7600U (4) @ 2.800GHz, Samsung NVMe SSD SM961/PM961, 16GiB.

Considering the small size of the project, I wouldn't consider that fast, and in fact it makes me very wary of the project growing any bigger.


That sounds quite bad to me. It can happen if you have a virus scanner going crazy on your build target during the build (a common complaint on Windows; you might have to add your Rust projects to an exceptions list). It can also happen if you have really heavy proc macros or other compile-time magic going on that needs to be reprocessed and re-executed often (serde famously can crank up compile times).

I doubt your compile times would really ramp up that much from your project itself getting much bigger. If you are at 1.5k lines of code, the ridiculous compile time is probably due to other causes.
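If you want to see where the time actually goes, `cargo build --timings` writes an HTML report with per-crate compile times, which makes it easy to tell whether the dependencies or your own crate are the bottleneck. Beyond that, there are a couple of dev-profile tweaks people commonly suggest; this is a sketch of those settings, not a guaranteed fix, so measure before and after:

```toml
# Cargo.toml -- commonly suggested tweaks for faster debug rebuilds.

[profile.dev]
debug = 0                 # skip debug-info generation for faster builds

[profile.dev.package."*"]
opt-level = 2             # optimize dependencies once; they stay cached,
                          # so only your own crate is rebuilt on each change
```

The idea behind the second table is that dependencies like tokio and serde are compiled once and cached, so paying extra optimization cost there is a one-time expense, while your own crate (the part rebuilt on every edit) stays on the fast unoptimized path.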


These horrible compile times happen both on Windows and Linux. I am using serde, but only on two out of 15 structs in total. I haven't really tried to investigate yet whether these compile times are just because of my project, but I am using some "standard" crates (like tokio and serde) that the community seems to use for most user-facing software, so hopefully they won't "infect" the compile times too much.

Thanks for the pointers though!


No, it's quite fast.



