New Rust runtime turned on. What next? (mail.mozilla.org)
186 points by usea on Aug 8, 2013 | 114 comments



Excerpt from Graydon's (BDFL) reply:

  > Despite all these caveats I have a very strong sense that writing the
  > runtime in Rust will go a long way to validate Rust in the domains it's
  > aiming for: concurrent and systems programming. Even in the task
  > scheduler, where there's quite a bit of unsafe code, the shared-nothing
  > nature of unique types forces you to consciously break the type system
  > to share memory, and that seems to go a long way to making parallel
  > programming easier to reason about.

  Yes, even just reading it seems much clearer since the mutability, 
  lifetime and ownership of each value and reference is spelled out, not 
  just "some Foo* that you have to remember special validity rules about". 
  It's noticeably easier to reason about. Really interesting!
When Rust matures (and this dogfooding-via-self-bootstrap is going to accelerate its maturation) it's going to be seriously revolutionary for domains like modern multi-core video game development. Really excited.


Yes, please. Anything but C++. The progress of Rust and Go make me hopeful for the future.


Go (at least as it stands today, with its non-generational, stop-the-world, mark-and-sweep GC) is wholly unsuitable for video game development, where unpredictable latency must be avoided.


I don't completely disagree with your point, and I would not be a person to advocate Go for game development, but "wholly unsuitable" isn't true. A large number of successful games have been released that depend on runtimes with similar garbage collectors, most notably games running on the JVM and CLR. These are "real games" with real performance considerations.

See: Minecraft, Terraria, Magicka, AI War, and many more.


The JVM and CLR both have generational garbage collectors, which reduces the impact of most GC pauses quite significantly.

Go's GC, on the other hand, is non-generational, which means every collection must scan all objects in the heap.


Sorry, I misread your post as saying GC in general is wholly unsuited for game development. My mistake.


I've updated my post, sorry for the confusion!


Is there a reason why future work on Go's runtime couldn't fix this problem?


I wouldn't say impossible, but yes, there are reasons this is difficult to fix in Go compared to Java and C#, and intentionally so.

Quoting from http://talks.golang.org/2012/splash.article (emphasis mine):

"To give the programmer this flexibility, Go must support what we call interior pointers to objects allocated in the heap. The X.buf field in the example above lives within the struct but it is legal to capture the address of this inner field, for instance to pass it to an I/O routine. In Java, as in many garbage-collected languages, it is not possible to construct an interior pointer like this, but in Go it is idiomatic. This design point affects which collection algorithms can be used, and may make them more difficult, but after careful thought we decided that it was necessary to allow interior pointers because of the benefits to the programmer and the ability to reduce pressure on the (perhaps harder to implement) collector."


To be fair I don't think this is a blocker for Go to implement GGC and CGC. .NET supports interior pointers too, for example. You just have to design your allocator right to allow the card marking to work.

In practice I think that the biggest problems are compiler related and affect both Go and Rust. Go and Rust both use conservative GC on the stack, and as a result they take a lot of shortcuts. For example, LLVM (and GCC as far as I'm aware, though someone like DannyBee may correct me here) loses the distinction between integers and pointers by the time they get to the machine instruction level, which makes a lot of optimizations easier to implement but also makes it impossible to generate precise stack maps. Fixing this would be a lot of hard work. It's probably easier in the Plan 9 compilers, of course, though I suspect it's still going to be a lot of hard work.


You know a lot more about these things than I do.

That said: Aren't the equivalent pointers in .Net _only_ allowed/usable if you pin your object? If you tell the GC explicitly 'please don't move that, I'm pointing to that thing here'?

That's hard to bolt on to Go, imho. In Go it seems to be implicitly allowed; in .Net you have to ask explicitly?


Go's GC is non-moving, exactly because they can't precisely know what a pointer is and what is just some random int.

So it's basically as if all objects are pinned in Go. I don't expect that to change. There is probably a ton of Go code out there that would break in response to such a change.

I guess it will be pushed back to some mystical "2.0" release, along with Generics.


I've been following along on golang-dev, and they seem to be making progress on precise stacks.

See https://groups.google.com/forum/?fromgroups#!topic/golang-de... for example.


Not at all, it just takes work.


They could, but I expect their "worse is better" attitude will prevent it.


I haven't looked at the others, but Minecraft stuttered very badly for me. I got the distinct impression that real performance wasn't considered.


Agreed for Minecraft: it definitely lags, to the extent that there are a lot of forum threads discussing how to tweak the configuration to improve performance. The lag is very disruptive, and I think it has to do with memory management, because it sets in after some time of playing.


It certainly fits the profile of GC stutter. I'm told it makes some bad decisions about handling objects, triggering pathological behaviour in the collector, but that's second-hand info at best.


Sounds like a driver issue. Minecraft works quite well for many people.


I disagree with your 'wholly unsuitable'; I came across this game engine (GarageEngine) written in Go which certainly doesn't jitter due to the garbage collector.

http://www.youtube.com/watch?v=iMMbf6SRb9Q

http://www.youtube.com/watch?v=BMRlY9dFVLg

https://github.com/vova616/GarageEngine

So I say your statement is exaggerated. Certainly a stop-the-world garbage collector isn't ideal for games, but it's not 'wholly unsuitable' as long as the GC sweeps aren't costly enough to impact a consistent framerate. Had they been 'wholly unsuitable' then the XNA platform would have been dead upon arrival.


When latency matters, just use pool allocation, which makes the GC irrelevant.
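
To make that concrete, here is a minimal free-list pool sketch (in Rust here, but the idea is the same in a GC'd language: recycle objects so the collector never has anything new to chase and the allocator is never hit mid-frame). The names are made up:

    struct Bullet { x: f32, y: f32, alive: bool }

    struct Pool {
        free: Vec<Bullet>,
    }

    impl Pool {
        // Pre-allocate everything up front, e.g. during level load.
        fn with_capacity(n: usize) -> Pool {
            Pool { free: (0..n).map(|_| Bullet { x: 0.0, y: 0.0, alive: false }).collect() }
        }
        // Take a recycled object instead of allocating a new one.
        fn acquire(&mut self) -> Option<Bullet> {
            self.free.pop()
        }
        // Hand it back when the game is done with it.
        fn release(&mut self, b: Bullet) {
            self.free.push(b);
        }
    }

    fn main() {
        let mut pool = Pool::with_capacity(1024);
        if let Some(mut b) = pool.acquire() {
            b.alive = true;
            // ... use it for the frame ...
            pool.release(b);
        }
    }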


That's how people manage to make games despite GC, but I always thought it's a huge hack, having been there. If all you do with your GC is fight it, is it actually a good idea to have it?

One thing that makes me excited about Rust is that GC is optional.


Pools are still traced.


If you pool everything, the GC never runs and tracing is irrelevant. However, getting there is very difficult. It's better to just make sure you throw almost everything away before the next GC, keep a small permanent runtime set of objects, and accept 10-20ms stop-the-world young-gen collections.


Targeting 30fps, you have 33ms to render the whole frame. Losing 20ms of that to GC would make things… challenging.

And if you're targeting 60fps, that's 16ms. Stop the world for 20ms, and you've missed a frame and a quarter.


There are concurrent collectors that do not stop the world (e.g. Azul's) that you might want to look at for game development.


Just avoid producing too much garbage then?


Why is it that every time people talk about replacing C++, someone always comes in and sticks Go into the discussion? What does Go have to do with C++?


One of the original descriptions/assertions of Go was as a "systems language done right". Many have not been able to move on from that to what it effectively is: "a somewhat better Java".


I am intrigued by this. Could you (or anyone) give me an 'explain this to me like I'm a 5-year-old' synopsis of why you think this is going to revolutionise video game development?


Video game development has several constraints that aren't commonly found in other development such as web development. In particular, for real-time games running at 30 or 60fps (frames-per-second), you have only 33 or 16ms respectively to process an entire frame of updates. Most game development targets fixed hardware such as consoles (PlayStation, Xbox) or mobiles (iPhone, Android), so if your code is slow, you can't just put a bigger processor in or distribute the code over an array of servers.

These and other requirements mean that most high level AAA games have to be written to be very highly performant and, in particular, to have reliably consistent latencies. Traditional garbage-collected languages (Java, C#), whilst achieving high throughput, often struggle with predictable latencies: when GC kicks in, particularly with stop-the-world collectors, you can get a 10-20ms pause, which means you will drop a frame or two. This leads to stuttering, which is undesirable. Even modern generational collectors still struggle not to have occasional bad pauses. If something like Azul's continuously compacting collector becomes viable for games, this might be solved, but until it does it is hard (but not impossible) to write high-performance (soft) real-time games in a GCed language.

The result is that most games are written in C++ (sometimes with a small embedded scripting language, mainly Lua, which uses GC) where memory management can be controlled. The downside of this is developer time - it takes large teams of developers lots of time to write games, because the code is largely written at a low level.

Rust is higher level than C++ and allows many useful and safe idioms, including a more functional style and better use of immutability. In general, like other higher level languages, Rust would allow a developer to be more productive than in C++, allowing games to be developed more quickly, with fewer developers. It does this whilst still allowing manual control over memory - some parts can be handed over to GC, whilst others can be carefully allocated to the stack or heap and managed manually. This allows predictable allocation and cleanup overhead, and therefore more deterministic frame times.
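
To make the 'manual control' point concrete, here's roughly what choosing where things live looks like in Rust. This is only a sketch; the exact pointer types are still in flux and the names are made up, so don't take the syntax as gospel:

    use std::rc::Rc;

    struct Mesh { verts: Vec<f32> }

    fn frame() {
        // Plain stack value: no allocation at all.
        let pos = (1.0f32, 2.0f32, 3.0f32);

        // Owned heap allocation: freed deterministically at the end of this
        // scope, not whenever a collector decides to run.
        let mesh = Box::new(Mesh { verts: vec![0.0; 1024] });

        // Shared ownership only where it's actually needed, with the cost
        // (reference counting) visible in the type.
        let shared = Rc::new(Mesh { verts: vec![0.0; 64] });
        let another_handle = Rc::clone(&shared);

        let _ = (pos, mesh, another_handle);
    } // everything above is released right here, predictably

    fn main() { frame(); }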

As a games developer, I am extremely interested in Rust as a potential game development language.

Disclaimer: I haven't actually written any Rust yet, only looked at code and thought about the potential. I've spent 5 years doing AAA console development in C++.


This is a great answer, thank you. I have a day job as a .NET developer but I am bored and looking into developing (indie) games (I envy your 5 years AAA experience!). I have tried Lua and Go and like both. They seem both simple enough to be productive in but with some very powerful features at the same time (e.g. concurrency in Go, tables and coroutines in Lua). I have tried Clojure (as a language, not for game development), but the 'mental overhead' for me is just too much. Rust seems to be a bit more heavy on syntax and features, but I will have to look into it a bit more to find out if that is justified.

I have tried Unity (especially since I could leverage my C# experience) but as a developer I like the 'code-first' approach rather than being tied down in a graphical tool with scripting capabilities (especially for 2D games?).


I wouldn't dive into Rust right now if you're trying to get into game development at the same time - leave that for when you're a bit more used to general game dev.

Using Lua inside a development environment/engine could be a good start - there's a couple of Lua-based development frameworks that might be worth checking out. Start with 2D games as they're much simpler to get going with and get used to how a game engine will look.

After that, it depends what you want to do. If you mostly want to create some interesting games and care more about the gameplay design side, using a pre-existing engine/framework is best. If you are interested in the development and high-performance side of things as I am, then learn C (not C++) and OpenGL. This is not as hard as it might seem. The best OpenGL tutorial I have found is http://duriansoftware.com/joe/An-intro-to-modern-OpenGL.-Cha... - this will get you writing graphics code from scratch and starting to understand what is going on. You don't need to become a graphics expert, but actually understanding how this side of things works will be invaluable.

If you have questions, my email is in my profile and I can be reached there. Happy to answer individual queries. I'm not currently working in the Games Industry (day job is Scala in Finance) but my side project of the last 2 years is a cross platform engine + game written in C and Lua.


> sometimes with a small embedded scripting language, mainly Lua, which uses GC

Note that Lua 5.1 and later use an incremental GC, which is friendlier for latency-sensitive workloads.


"most high level AAA games have to be written to be very highly performant"

On non-custom hardware like x86 and x86_64, Azul's continuous GC (as of when I last checked) requires a software write barrier, which is pretty expensive: a 20% or greater penalty.


I, too, think Rust is going to be revolutionary in the video game industry. Were I a game programmer I would start learning now. In two years at least 51% of all new video game code will be in Rust.


I think you may be overestimating the pace of change. C++ is quite entrenched for industrial-strength game projects. The network effect is strong, the tooling is mature, and the projects are largely driven by C++ experts. Even if we see Rust 1.0 by the end of the year, it will take a lot of time for that kind of change to occur in the industry, if it ever does.

I think you're much more likely to see 51% of new game code written in C# for Unity. I think the median game budget is shrinking. More and more games are being made as small projects by small teams. Steam Greenlight, Kickstarter, Humble Bundle, and other avenues are enabling curation and funding of smaller games. Unity is far more suited for this kind of development than C++. If you look at smaller studios' job listings, a lot of them prefer experience with Unity. The trend is in full swing at this point.

I have no sources to cite for the above information. Take it with a grain of salt.


In addition to your point, I'd like to stress that it is the game engines that are C++. Many engines provide a higher-level interface for developers to actually make games. You probably won't hook the game developers on Rust. If you want to pitch Rust to the engine guys, you're going to be battling against their toolchains that have been developed for decades and have some of the best static analysis tools under the sun. Not to mention, you'll also be battling against the traction of their current codebase, which they'll probably be hesitant to rewrite in a new language. From a business perspective, I don't think management in some of the larger companies would let that decision fly, either.


All good points but there is one strong argument in favor of Rust: memory safety. It seems to be the norm for games these days to CTD (crash to desktop) sooner or later. I find that unacceptable to the point that it makes me angry.

C++ makes it way too easy to corrupt random sections of memory, which leads to random CTDs. Rust was designed from the ground up to prevent these dreadful bugs.

AAA games are overwhelmingly written by relatively inexperienced, overworked cowboy coders on a tight schedule who are expected to write optimized code. Expecting C++ code without memory corruption bugs in that situation is simply unrealistic.

As a customer I truly hope the industry adopts Rust or another language where memory corruption is impossible or only possible if you explicitly ask for it. I am so sick of CTDs.


> If you want to pitch Rust to the engine guys, you're going to be battling against their toolchains that have been developed for decades and have some of the best static analysis tools under the sun.

I've never used static analysis tools for C++, but from what I've heard, the reason they need to be so advanced is because C++ is so difficult to analyse. Rust, on the other hand, with its strong, static type system, may not even require extra static analysis tools to be used effectively.


This is a specific instance of a general argument that applies to all new languages: there is an existing infrastructure and people won't want to switch. Yet new languages are arriving at a pace faster than ever on the server side, despite massive investment in server stacks for existing languages.

If Rust succeeds as a language suitable for games and gains traction there (and that is of course an if), I think the truth will be in the middle: there will be a huge amount of code still written in C++, and that will continue to be maintained and work. But new code might be written in Rust. Rust is designed to integrate well with C and C++ code, so mixed projects are quite feasible: in fact, both rustc (because of LLVM) and Servo (because of SpiderMonkey) are such mixed projects.


Eventually something will indeed replace C++ for gaming. You can look at the emergence of all the new server side languages in two ways: either one will "come out on top" as something akin to what C/C++ is now or there will be much more specialization in what we now call "systems programming". If the future is the latter, engines might always be done in C/C++ simply because it gives them the ability to easily make bindings to the majority of languages (ie Rust). I believe this will be the middle ground of which you speak. It really would've been cooler if there was more incentive to build languages specially for games, especially since the domain has some interesting demands such as the close coordination with GPUs and memory allocation in general.


  > engines might always be done in C/C++ simply because it 
  > gives them the ability to easily make bindings to the 
  > majority of languages
Just like C++, it will be possible to expose a C-compatible interface to Rust code that will allow any language that can call into C to call into a library written in Rust. See http://brson.github.io/2013/03/10/embedding-rust-in-ruby/ for a rather dated proof-of-concept, or http://bluishcoder.co.nz/2013/08/08/linking_and_calling_rust... for something more recent.
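
The shape of it is roughly this (a sketch; the function name is made up):

    // `#[no_mangle]` keeps the symbol name predictable and `extern "C"` selects
    // the C calling convention, so anything with a C FFI can link against this.
    #[no_mangle]
    pub extern "C" fn add_scores(a: i32, b: i32) -> i32 {
        a + b
    }

On the other side you declare the matching C prototype (int32_t add_scores(int32_t, int32_t);) in a header and link against the compiled Rust library, much as you'd consume a C++ library through its extern "C" surface.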


Didn't realize Rust had support for this. At the end of your recent example, it pretty much says doing so sacrifices the runtime entirely. I know Go doesn't support calling from C for reasons similar to this. Where does further development on this functionality sit on the priorities list for Rust?


Indeed, being able to forgo the runtime entirely is what makes it possible to rewrite the Rust runtime in Rust (as per the OP). Wouldn't be very useful to have a runtime that itself required a separate runtime. :)

As for "further development", what would you like to see? Currently the most prominent capabilities of the runtime are the lightweight task system and garbage collection. It might be theoretically possible to allow uses of the task system to gracefully degrade to using system threads, and likewise it might be possible for uses of the GC to degrade to refcounting (without cycle collection).


Being able to run runtime-less and still use the language is quite high on the priorities, and I believe moving GC to pluggable library types and cleaning up the standard library feed into that.


You don't actually have to sacrifice the runtime entirely anymore; since the runtime is written in Rust you can just start it manually if you want. Making this smoother is a priority. (Brian mentions this in the email, regarding #[start].)


That's very cool. I definitely need to check things out. Thanks everyone for the interesting discussion.


I think it may be about 2 years before the games industry even starts to pick up Rust (or possibly a post-Rust language instead), but I think it will happen eventually.

As someone else mentioned, C# is a big deal now and comes with a degree of safety over C++. I don't expect to see Rust in the pole position on the whole, but it could easily displace C++ at id Software, since Carmack has been shopping for an alternative, and elsewhere (especially if he runs with it first).

But I think C++'s share is losing ground, and Unity is an impressive platform. The asm.js buzz seems to whisper promises about ES and browsers becoming a serious AAA gaming platform. If FFOS gets anywhere it may supply the necessary voltage. So the lay of the land is anyone's guess, but all the variables eat into C++'s % share even if only by expanding the market around it.


You forget a few points:

* Games require portability. Not many platforms will have a rust compiler (Not to mention that current C++ compilers are very mature and highly optimizing).

* Current C++ libraries. They are probably heavily template-based and will be difficult to port.


> * Games require portability. Not many platforms will have a rust compiler (Not to mention that current C++ compilers are very mature and highly optimizing).

Rust uses LLVM as the backend, so any platform that Clang supports, Rust can too. (And also, it has the optimisations built in.)

In fact, there's already support in the compiler for x86, x86-64, arm, and mips. (I'm not sure if mips actually works, but arm definitely does.)


Really the portability that matters in this case (games) is probably Windows, which LLVM and Clang do not target very well at this point.


We've been focused on first class Windows support since day one. The only major issue for LLVM itself that I'm aware of is the lack of PDB debug info, which is less of a problem for Rust because the system debuggers don't debug Rust in the first place. Most of the clang problems for Windows that I'm aware of relate to all the MSVC extensions in windows.h, which is not a problem for Rust as it doesn't use system header files.


I don't know if you'll see this, but I've been wanting to contribute to Rust for a while now. I've checked through the Github issues, but it's not really clear to me what I should do if there's one I think I can do. How should I approach it?


Thanks! There are "E-easy" and "A-an-interesting-project" tags on GitHub that you can check out if you're interested.


I would think the harder platforms are the consoles: they often come with their own compilers that only support C and C++.


Wasn't that in part due to PPC or custom silicon? The Xbox One and PS4 are now on x86-64; it should be possible for them to use a more sensible toolchain.

Also, even before that there were people using "non-standard" toolchains, e.g. http://en.wikipedia.org/wiki/Game_Oriented_Assembly_Lisp


The PS3 and Xbox360 won't go away in the next 5 years. The issue is not so much PPC (which should get support anyway), but various other silly things, like needing to rely on certain interactions between the compiler and runtime library. This is not a huge issue, but any extra bump will deter people from trying.

And indeed it's possible to have an alternate toolchain, but that's a lot of work that is best avoided most of the time.


> dogfooding-via-self-bootstrap

Very rare thing. Ok maybe not in language design. But rare otherwise.


I think it's much less common in language design than we might give it credit for. Many languages run on a VM that is written in C, including the likes of Java, JavaScript, C#, Ruby and Python (even PyPy compiles down to C). I am sure there are others besides C that are self-sustaining, but they are _very_ rare.


I think you have a somewhat inaccurate mental model of how things are done.

For one, it doesn't really make sense to talk about the CLR when you talk about bootstrapping C#. My project (the Roslyn C# compiler) is a 100% C# implementation of the C# compiler. One thing to keep in mind is that we don't target the CLR.

The C# compiler is not a compiler from C# to the CLR, it's a compiler from C# to the CIL (Common Intermediate Language). It's completely reasonable to imagine a machine which runs CIL in hardware instead of in a VM. In this case the answer to the question of whether or not the C# compiler is bootstrapped is, "Yes. Completely."

Moreover, it wouldn't make sense to ask whether or not the CLR is self-hosted -- it's called the "Common Language Runtime" for a reason. It's a language-independent virtual machine. In this sense, implementing it in C++ makes just as much sense as implementing it in machine code.

Now, my argument here was meant to be without loss of generality. To ask whether or not a language is bootstrapped shouldn't really depend on a runtime, since any language which requires a runtime cannot, by definition, have the runtime written in said language. In this sense, we should only ask whether or not the compiler is bootstrapped, not the entire environment.


We can imagine something that runs the CLR in hardware, and we can imagine something that runs the JVM in hardware ( actually those things exist , but they don't and both the CLR and JVM are incapable of creating programs that _can_ run on hardware without C shims.

Rust is in almost the same boat since they are using LLVM, but LLVM differs from the JVM and CLR since it can generate stuff to run on hardware without a shim.


  > actually those things exist , but they don't 
Not sure what you were trying to say here; one "Java in hardware": http://en.wikipedia.org/wiki/PicoJava


Really big formatting fail. There should be a close parenthesis after the "exist". I know the JVM-in-hardware implementations "exist", but they aren't real implementations since they lack many of the OS-like features (threads, I/O, ...) that the JVM gives you. Even though we can run bytecode in hardware, we can't just copy a class file to a processor and execute it. It still needs a runtime and an OS to make it full spec.


It can be possible to write the runtime for a language that requires one in the language itself, as long as the runtime is only necessary for part of the language. Then, the runtime would merely have to restrict itself to the parts that don't require a runtime.


Of course, but my emphasis on "requires" was meant to deal with those cases, as it can be argued that that would constitute two languages -- one managed and one native.


Writing the runtime for a language in that language is fairly unusual, but writing compilers in the language they compile is a similar idea and has always been a popular activity.


I haven't written anything in Rust yet, but one of its most interesting aspects is its support for linear types. I expect learning it will have the same kind of perspective-enriching effect as learning logic, array or functional paradigms. While it's certainly not the first language to support substructural types, it looks like the only one with a decent chance of developing a meaningful ecosystem.

The linear logic of J.-Y. Girard suggests a new type system for functional languages, one which supports operations that "change the world". Values belonging to a linear type must be used exactly once: like the world, they cannot be duplicated or destroyed. Such values require no reference counting or garbage collection, and safely admit destructive array update.

http://homepages.inf.ed.ac.uk/wadler/topics/linear-logic.htm...

An interesting bit of trivia about linear types is that they are in some sense the closest thing to programming a quantum computer this side of qubits (where the no-cloning theorem dictates that qubit variables can only be used once in a function term).
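
Strictly speaking Rust's types are affine rather than linear (values must be used at most once, not exactly once), but the "can't duplicate" half is easy to see. A small sketch, with made-up names:

    struct FileHandle { fd: i32 }

    // Taking the handle by value consumes it.
    fn close(h: FileHandle) { let _ = h; }

    fn main() {
        let h = FileHandle { fd: 3 };
        close(h);      // ownership of `h` moves into `close`
        // close(h);   // uncommenting this is a compile error: use of moved
                       //   value `h` -- the double-close is rejected statically
    }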


I believe Clean also uses linear types.


Here’s a properly formatted version of the link: https://gist.github.com/roryokane/6189765

(The original message is in Markdown; I just put it in a Gist where it would be rendered as such.)


I am no language designer, but I wonder why you'd use libuv and have to worry about implementing a scheduler and all the other components of a runtime loop in your language when the kernel will do this for you. I think it would make more sense to provide a better interface to existing kernel structures than to leverage a third-party library and then reimplement kernel functions around it. (I am mainly thinking about the paragraph about the current scheduler implementation and how basic it is.)


Three reasons. First, we don't control the kernel and we don't want to make assumptions about thread spawning being cheap on every OS. Second, it lets us implement work stealing, which is a proven method for dynamic parallelism. Third, it lets us do some operations such as RPC from task to task entirely in userspace with no trip through the scheduler or OS kernel.
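
For anyone unfamiliar with work stealing, the idea in miniature is: every worker has its own queue and only goes looking in other workers' queues when its own runs dry, so the common case involves no contention. A toy sketch of the concept follows (this is just to illustrate the idea, not how the Rust scheduler is actually implemented):

    use std::collections::VecDeque;
    use std::sync::{Arc, Mutex};
    use std::thread;

    type Task = Box<dyn FnOnce() + Send>;

    fn run(queues: Arc<Vec<Mutex<VecDeque<Task>>>>, me: usize) {
        loop {
            // 1. Pop from our own queue first (the lock is released before stealing).
            let mut task = queues[me].lock().unwrap().pop_front();
            // 2. If it's empty, try to steal from the back of someone else's queue.
            if task.is_none() {
                task = queues.iter().enumerate()
                    .filter(|&(i, _)| i != me)
                    .find_map(|(_, q)| q.lock().unwrap().pop_back());
            }
            match task {
                Some(t) => t(),
                None => return, // nothing left anywhere; a real scheduler parks instead
            }
        }
    }

    fn main() {
        let queues: Arc<Vec<Mutex<VecDeque<Task>>>> =
            Arc::new((0..4).map(|_| Mutex::new(VecDeque::new())).collect());
        // Pile all the work onto worker 0 so the other three have to steal.
        for n in 0..16 {
            let t: Task = Box::new(move || println!("task {}", n));
            queues[0].lock().unwrap().push_back(t);
        }
        let workers: Vec<_> = (0..4)
            .map(|i| { let q = Arc::clone(&queues); thread::spawn(move || run(q, i)) })
            .collect();
        for w in workers { w.join().unwrap(); }
    }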


Good points. A counterpoint for #3 is that you could do kernel bypass RPC by installing a driver, but Rust developers probably don't want to write all those drivers, and Rust users wouldn't want to install them.

Work (task) stealing is very compelling, and a little paradigm shift is no bad thing. If Rust or any new systems language stands a chance, it should aim high and not too close to the past.


And then drivers would be required for each OS creating an additional porting burden.


What if someone wants to write a kernel with Rust? This might be a very naive question taking into account my ignorance of language and kernel design.


I've done some very preliminary (and bumbling) work with that at https://github.com/ldunn/kern/

It's rudimentary, but it does compile and it does run, and I'm not aware of any particular obstacles to a more featureful kernel.


Look at https://github.com/pcwalton/zero.rs . It's not very far yet, but it proves that it is viable to run Rust without a runtime.


I actually don't think we even need zero.rs anymore. Its purpose was to provide noop implementations for all the "lang items" that the compiler expects to find in the stdlib. However, post-0.7 the compiler no longer requires a lang item to be provided unless you actually use a feature that requires it.

For an example, see the code at https://github.com/doublec/rust-from-c-example , which is fully runtimeless.


zero.rs (or something like it) is still required, because there's certain lang items that are required/useful (e.g. #[start], destructors, failure, vector bounds checks).

(e.g. https://github.com/huonw/rust-malloc/blob/master/zero.rs)


Someone posted https://news.ycombinator.com/item?id=5771276 a few months ago. I'm not sure we could call that a kernel, but it's a first step.


> it does implement TCP and UDP on both IPv4 and IPv6

I thought this sort of thing was usually handled by the OS. Can you even get raw sockets on most systems? Or by "implemented TCP", do they mean they can give you back a Unix TCP/UDP socket?


By "implement" it means "integrated into the scheduler".


I really wish I had space in my TODO list to start a project in Rust. I don't think there's another dev tool I'm more excited about.


I started porting one of my side projects (written in C++11) to Rust because I thought it looked promising - I still think it does, but I filed two bug reports before I even got argument parsing working, and the documentation is still rather sparse. I don't want to dissuade you, it's a great project, and I hope it's the next C++, but it's very much not production-ready, so don't build anything critical in it yet.


I'm feeling the same way, Rust looks fantastic. There are a few things in the language I'm not a huge fan of; I don't like overly subtle things in a language. For example, some of the functionality around semicolons seems like it will be a common source of stupid programmer bugs that are difficult to track down. Perhaps the compiler will catch that stuff.

Go kinda ruined other languages for me with multiple return values; it's something I miss when working in every other language now. And reading through the docs I kept hoping I would stumble across that, though that would create a lot of problems when interfacing with C. Type inference is a huge win, the standard library looks solid, I love the potential around marking variables as mutable, and any language that has no null values makes me happy.

Overall, I wish I had more time too.


> For example, some of the functionality around semicolons seems like it will be a common source of stupid programmer bugs that are difficult to track down. Perhaps the compiler will catch that stuff.

It will, because the semicolon in Rust actually has a semantic impact: `a` has type T(a) but `a;` has type unit (~void).
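
Concretely, something like:

    // Without the trailing semicolon, a block's final expression is its value.
    fn abs(x: i32) -> i32 {
        if x < 0 { -x } else { x }      // expression: this is the return value
    }

    fn noisy_abs(x: i32) -> i32 {
        println!("called with {}", x);  // semicolon: value discarded, type is ()
        abs(x)                          // no semicolon: this is what the fn returns
    }

    fn main() {
        println!("{}", noisy_abs(-3));  // prints 3
    }

So if you leave a semicolon off (or add a stray one) where it matters, you generally get a type error rather than a silent behaviour change.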

> Go kinda ruined other languages for me with multiple return values

That's sad, because Go has one of the worst MRV implementations out there: it's a special case of the language itself.

In most languages with MRV — and that includes Rust, but also MLs, Haskell, Erlang, Python or Ruby — MRV is simply a natural consequence of being able to unpack or pattern match containers (tuples and/or lists depending on the typing discipline).

Anyway, Rust has multiple return values, so don't worry about that.


I have limited experience with Rust, but the semicolon stuff you're describing has been one of the most surprisingly positive aspects of it for me. I thought it sounded kind of like a dumb gimmick that just adds subtlety (read: removes simplicity) to something for no reason. However, in practice I've found it's really great for making the intent of your code more visible (less cluttered). I miss it when I'm doing C#.

Also, I've never come across a situation with too few/extra semicolons causing any kind of logic errors. The compiler will complain at you if you get it wrong.


I had the same experience; having briefly touched Matlab and loathed its difference between semicolon (don't print result) and no semicolon (print result) I was very dubious of it—why not just put `return` there? But when you combine it with the almost-everything-is-an-expression way of doing things, it actually works really well. Makes some forms of state machine exceptionally elegant, for example. So much so that Python has lost some of its charm for me.

Also, the type checker ensures that you'll hear about it if you lack a semicolon and emit a value other than unit from a block, without then using it.
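
On the state machine point, for the curious: because `match` is an expression, "compute the next state" is just the value of the block, and the compiler insists every state is handled. A small made-up sketch:

    #[derive(Debug)]
    enum Light { Red, Green, Yellow }

    fn next(state: Light) -> Light {
        match state {
            Light::Red    => Light::Green,
            Light::Green  => Light::Yellow,
            Light::Yellow => Light::Red,
        }
    }

    fn main() {
        let mut s = Light::Red;
        for _ in 0..4 {
            println!("{:?}", s);
            s = next(s);
        }
    }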


Go is not the only language with multiple return values. Lua for example had them before Go. In Lua it would look like:

  addsub = function(a, b) return a + b, a - b end

  a, b = addsub(32, 44)
And if you run that code with LuaJIT it compiles down to a few machine code instructions, no object creation at all, no function call.

In contrast I think that, at least in Python, the tuple-based solution would be horribly inefficient because it allocates/deallocates a new object every time just to pass values. I don't know if the Rust compiler is smart enough to optimize the tuple creation away... but I doubt it.


  > I don't know if the Rust compiler is smart enough to 
  > optimize the tuple creation away
I'm sure it can. LLVM is a really, really good backend.


Multiple return values are possible with n-tuples.

  fn addsub(a: int, b: int) -> (int, int) {
      (a + b, a - b)
  }

  ...

  let (a, b) = addsub(32, 44);


Indeed, the behavior described here is just a consequence of allowing pattern-matching when assigning variables. Say you wanted to do the Python trick of swapping two values:

    let a = 1;
    let b = 9;
    let (b, a) = (a, b);
    printf!("a: %i, b: %i", a, b);  // a: 9, b: 1
...or say you just wanted to grab a single item out of a tuple:

    let x = (1, 2);
    let (_, y) = x;  // the underscore is the pattern for "ignore this"
    printf!("y: %i", y);  // y: 2


Just for comparison:

  b, a = a, b 
is actually valid code in Lua (and works as expected).

Grabbing only one value looks like this

  _, y = returnsTwoValues() -- grab only the second 

  y = returnsTwoValues() -- grab only the first


This works similarly in Ruby:

    a,_ = two_values()
    _,b = two_values()


Ah, very interesting. Thanks for posting that. Looks like I have one more reason to dig into Rust.


Rust does have a lot of potential, but don't underestimate how much change it has undergone lately, and will likely be undergoing in the future.

The large amount and rapidity of change makes it difficult to use it seriously. It's nowhere near as stable as other newer languages like Go and Scala are, for instance.

Some experimental programs I wrote a mere 8 months ago are now basically unusable with recent versions of Rust due to language, syntax and standard library changes.

Unless you can constantly track Rust's development on a daily basis, and update your code accordingly, I'd be very hesitant to suggest using it for anything but throw-away code at this point.


This is true, though I do believe it is a lot more stable than about 12 months ago(?)

It reminds me of Go a couple of years ago, but that had "go fix" which would mostly update your source files to the newest revisions. That made early adoption a lot nicer. I'd love to see a "rust fix"


> tasks are now migrated across threads by the scheduler, whereas in the old scheduler a single task was always run in the same thread.

Is that done by having a single task queue with multiple schedulers, or through work-stealing by schedulers with no ready tasks in their queue?

Would this open the possibility of configuring schedulers (including individually)? E.g. ensuring a given task stays pinned on a specific scheduler and said scheduler accepts no more tasks, that kind of thing?


The implementation on master currently uses a single queue, but very shortly it will be converted to work stealing.

Tasks can be 'pinned' to their own scheduler (i.e. thread) with `spawn_sched(SingleThreaded)`, and this is very important for tasks that call foreign code that blocks.

That's about the extent of the configurability at the moment, but I anticipate at least one other 'mode' in the future for coping with blocking tasks that don't want to be pinned to a specific thread.


> Would this open the possibility of configuring schedulers ...

I've been assured that it's the plan. I have no idea how much of it works right now though. I've only done one build with the new rt and it doesn't involve tasks.


> I've been assured that it's the plan.

Excellent [twirls mustache]

> I have no idea how much of it works right now though.

Yeah I don't expect it to work at this point, but knowing it's one of the end-goals is good.


> Is that done by having a single task queue with multiple schedulers, or through work-stealing by schedulers with no ready tasks in their queue?

Right now it's the former, but Aaron Todd has a pull request to switch it to the latter.


So they are still using libuv after the rewrite?


The rewrite was essentially to facilitate using libuv more. To paraphrase Brian Anderson: "we needed to rewrite io, and we've just taken a detour to port the runtime."


Yes (as the Windows IOCP support is invaluable). But we will probably need to add threading support to it.


What do you mean add threading support? Last time I used libev (I thought libuv was just libev plus Windows) it allowed creating multiple loops for use in multiple threads just fine. Do you mean adding the ability to move file descriptors across threads into another loop?


As a sidenote, libuv is not using libev anymore

https://github.com/joyent/libuv/issues/485


Interesting, thanks!


Yeah, that's what I mean.


I gave Rust a try to build some stuff, but my project required HTTP and there's no easy SSL solution in place right now. Hope it comes along. I don't have time to contribute much, otherwise I would.


To quote Brian Anderson (the OP), one of the next steps in the I/O rewrite is "Implementing a new HTTP client on top of rt::io, possibly using Chris Morgan's HTTP code, for use in Servo". So hopefully within the year we'll see the beginnings of a robust HTTP lib that's worthy of a Mozilla-brand browser engine.


Note that that is HTTP; SSL is likely to come quite a bit further down the track.


That's awesome!


Use FFI to use some C SSL library?

http://static.rust-lang.org/doc/tutorial-ffi.html
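
A rough sketch of what hand-binding a couple of OpenSSL calls looks like (these are the 1.0-era entry points; double-check the signatures against whatever headers you're actually linking, and the real work of wrapping this in a safe API is left out):

    #![allow(non_camel_case_types)]

    use std::os::raw::c_int;

    // Opaque types: we only ever handle pointers to them.
    #[repr(C)] struct SSL_METHOD { _private: [u8; 0] }
    #[repr(C)] struct SSL_CTX { _private: [u8; 0] }

    #[link(name = "ssl")]
    extern "C" {
        fn SSL_library_init() -> c_int;
        fn SSLv23_method() -> *const SSL_METHOD;
        fn SSL_CTX_new(method: *const SSL_METHOD) -> *mut SSL_CTX;
    }

    fn main() {
        unsafe {
            SSL_library_init();
            let ctx = SSL_CTX_new(SSLv23_method());
            assert!(!ctx.is_null());
            // ... from here you'd wrap the raw calls in safe Rust types ...
        }
    }

It's still a fair amount of glue compared to having a native HTTP + SSL stack, which is why the rt::io work mentioned elsewhere in the thread matters.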



