
Sorry if I'm missing something obvious, but how do these locks stop a deadlock from happening? To explain, say that I have some code that locks A, then locks B, then does something, and another bit of code that locks B, then locks A, then does its thing. There's a race condition where one thread ends up holding A while the other holds B, and each then waits forever for the lock the other has.
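
A minimal Rust sketch of that scenario (the data and thread bodies here are made up purely for illustration): if the scheduler interleaves the two threads the wrong way, each ends up blocked on the lock the other holds.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        let a = Arc::new(Mutex::new(0));
        let b = Arc::new(Mutex::new(0));

        let (a1, b1) = (Arc::clone(&a), Arc::clone(&b));
        let t1 = thread::spawn(move || {
            let _ga = a1.lock().unwrap(); // thread 1 takes A first...
            let _gb = b1.lock().unwrap(); // ...then blocks waiting for B
        });

        let (a2, b2) = (Arc::clone(&a), Arc::clone(&b));
        let t2 = thread::spawn(move || {
            let _gb = b2.lock().unwrap(); // thread 2 takes B first...
            let _ga = a2.lock().unwrap(); // ...then blocks waiting for A: deadlock
        });

        t1.join().unwrap();
        t2.join().unwrap();
    }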

This has bitten me so much that I avoid fine-grained locking when possible and instead use a single global lock everywhere, have new threads start in the locked position, and then only selectively unlock the bits of code that I can prove are thread-safe, adding fine-grained locks only where benchmarking shows them to be absolutely necessary.
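
A sketch of that coarse-grained strategy, assuming a single process-wide lock (the GLOBAL name is invented here; a Mutex in a static needs a const Mutex::new, i.e. a reasonably recent compiler, or a lazy-initialization helper):

    use std::sync::Mutex;

    // One lock for everything; code runs under it by default, and sections are
    // only moved out from under it once they're shown to be safe (or benchmarks
    // justify a dedicated fine-grained lock).
    static GLOBAL: Mutex<()> = Mutex::new(());

    fn do_work() {
        let _guard = GLOBAL.lock().unwrap();
        // ... everything here runs under the single global lock ...
    }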




> Sorry if I'm missing something obvious, but how do these locks stop a deadlock from happening?

This is the difference between a data race and a race condition. We can't generally prevent deadlocks; I would imagine that (like all race conditions) it's an unsolvable problem at the language level. Maybe some PhD will prove me right or wrong, though...


Preventing deadlocks is not an unsolvable problem. Linear-session-typed process calculi are guaranteed deadlock-free and race-free. (The connection with Rust's ownership type-system makes me wonder if there might be some way to backport this guarantee to Rust, but I suspect the connection doesn't go far enough.) This comes at a price, however: certain process topologies are impossible to construct, and there's no way to ask "give me the first message to arrive on either of these two channels" (no Unix `select()` analogue). The latter in particular bugs me, because `select()` is a necessity in practice.

In general, problems of "guarantee property X" can almost always be solved with a sufficiently advanced type system, but at the cost of (a) inventing and using a sufficiently advanced type system and (b) disallowing some perfectly good programs.


> Linear-session-typed process calculi are guaranteed deadlock-free and race-free

While this may be true for certain restricted systems, this claim is misleading. Session types, as invented by K. Honda and refined by numerous others, guarantee deadlock and race freedom only within any given session. Session initiation (basically messages to replicated inputs) can deadlock and race in many (most?) session typing systems. The decision not to guarantee the absence of deadlock at session initiation points is what makes session types so simple.

It is possible to restrict session types so that they are completely deadlock-free and race-free, but -- as far as we know -- that guarantee comes at the heavy price of making the typing system completely inexpressive, so you cannot type many natural forms of concurrent programming. Alternatively you can give up on simplicity and make the typing system very complicated (e.g. the system pioneered in "Strong Normalisation in the Pi-Calculus" by Honda et al, and its successors). The former is a real problem, as witnessed by the continuing inability of the session typing crowd to get a handle on message passing in Erlang.

Finding a system that is expressive, simple and guarantees deadlock-freedom is a major open research problem (and I suspect infeasible).


I mostly know about the linear logic side of session types; I'm not super familiar with the older line of research starting with Honda. I'll have to read up on it!

I'm not sure what the equivalent in my world of "session initiation" is. If by "replicated inputs" you mean what in linear logic is expressed by the "!" connective (often called "replication"), it's definitely possible to have ! without deadlocks or any observable nondeterminism. See Caires & Pfenning, "Session Types as Intuitionistic Linear Propositions", for example.

I already listed a few of the limitations of this approach (limited process topologies, and no `select()` analogue). I'm not sure whether these qualify as "completely inexpressive", but I do think the lack of any `select()` analogue is pretty limiting in practice. I'm not sure how hard it might be to remove that limitation, but it wouldn't surprise me if that were a major sticking point. Still, overall I think I'm more optimistic about the possibility of capturing most reasonable forms of concurrent programming in a deadlock & race-free manner.


Honda's pioneering papers on session types (and on everything else for that matter) are always worth reading.

Yes, replicated input is !x(v1).P in pi-calculus, and Milner was directly inspired by linear logic's replication.

> Still, overall I think I'm more optimistic about the possibility of capturing most reasonable forms of concurrent programming in a deadlock & race-free manner.

I don't think you can, or indeed would want to, excise all forms of race condition from concurrent programs, for if you did there would be little concurrency left. You could only do deterministic concurrency, which is not what you want in many cases, e.g. when you use concurrency to mask latency. For example, how would you define a lock or mutex if all races are forbidden? You need race conditions, but you want them in homeopathic doses: constrained and tightly controlled.

I also doubt that concurrent programming patterns like dynamic deadlock detection and removal are amenable to a type-theoretic analysis in linear-logic based systems such as Caires & Pfenning's.


The safe subset of Rust (which is more-or-less linearly typed with elided drops) is indeed deadlock-free. Features like mutexes are written using the unsafe sublanguage because shared memory concurrency is useful. Because potential deadlocks are opt-in rather than opt-out, it's not unreasonable to imagine that future extensions to Rust (or some other language in the same mold) could bridge this gap in a practical way.


This is a bit of a misunderstanding of what Rust's safety guarantees are and what 'unsafe' actually means in Rust.

Rust safety guarantees are based around a few rules involving references, ownership, lifetimes and borrowing. None of them forbid the creation of mutexes or anything else which you will find in Rust.

'unsafe' Rust code shouldn't break these rules any more than 'safe' code does; the only difference is that the compiler lets you do something which it can't verify is safe. It's up to you to design the code such that it is.

If code marked as 'unsafe' does something which actually violates one of the assumptions on which the Rust compiler works, then the safety of the whole program is jeopardised.

(There was actually a bit of bike-shedding about renaming the 'unsafe' keyword to better reflect these facts, with suggestions like 'trustme' and 'yolo', but a satisfactory term was never found.)


You are correct. When I refer to the safe sublanguage, I mean the language without any unsafe blocks. Since so much of Rust is defined in libraries, precisely what is allowed in those unsafe blocks is a fairly important detail; they can violate semantic guarantees that "pure" safe Rust provides. In this case, the Rust core team made the deliberate choice (which they have discussed on occasion) not to consider deadlocks unsafe behavior, because they were not able to get it to work to their satisfaction.

As another example, there are fairly subtle implications for concurrency around the fact that Rust provides (using unsafe code) a swap function for mutable references. Swap cannot be written in the pure safe sublanguage.

Yet another example: in pure, safe Rust, memory leaks are not possible (except via premature abort). This is not true of Rc, hence Rust does not guarantee freedom from memory leaks.
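
For instance, a reference cycle built from Rc and RefCell leaks with no unsafe code in sight (a standard illustration, not anything specific to the discussion above):

    use std::cell::RefCell;
    use std::rc::Rc;

    struct Node {
        next: RefCell<Option<Rc<Node>>>,
    }

    fn main() {
        let a = Rc::new(Node { next: RefCell::new(None) });
        let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
        // Close the cycle: a -> b -> a. Each node keeps the other's strong count
        // above zero, so neither destructor ever runs and both allocations leak.
        *a.next.borrow_mut() = Some(Rc::clone(&b));
    }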


FWIW, transitively-safe Rust is quite uninteresting as an actual programming language, since you can essentially only use some arithmetic and simple user-defined data types. There's no IO and no core data structures like Vec: `unsafe` is designed to be used as a building block to create safe interfaces and the distinction between safe things implemented in the language itself and safe things implemented in libraries (especially the standard library) is fuzzy... and even then, the compiler can be regarded as one huge `unsafe` block.

Of course, what you say is perfectly true: the Rust language itself has no threads or concurrency, and hence is automatically deadlock-free.


Yes, it's somewhat banally true, but it's an important subset of the language because the guarantees of transitively-safe Rust represent a baseline for what you can ultimately guarantee in the language. Work like Patina (ftp://ftp.cs.washington.edu/tr/2015/03/UW-CSE-15-03-02.pdf) is useful for that reason. Transitively-safe Rust may also independently fulfill a useful role in the language; e.g., it is a candidate for the subset acceptable in constant expressions.


Ah, thanks for that, I'll look into it.

I guess what I was trying to get at is that 'race conditions' can fall under the same kinds of errors as logic errors. This is the way that people usually attack the phrase "if it compiles, it works."


I'm not sure what you mean when you say that race conditions can be the same kind of error as logic errors.

I would define a race condition as "an error due to nondeterministic concurrency". If you ban nondeterminism, for example, race conditions are impossible. In practice, nobody wants to ban nondeterminism, so races creep in. There might be smarter ways to eliminate races (ways of only allowing benign nondeterminism), though. So I'm not sure that practical languages without race conditions are an unachievable goal, just a difficult one. Give it another 20 years and we'll see where we're at. :)


I think I'm just being imprecise with my terms, since this is just a comment. I've seen people describe race conditions in practice as "most of the time, message A comes in, then B, but I didn't write it correctly, so in certain cases, B comes in, and then A." Which I would casually describe as racy, but as you point out, would be more accurately described as a pure logic error.


Deadlocks can be statically prevented by ensuring that all code paths acquire locks in the same order. However, Rust's type system is not capable of enforcing that by itself (except maybe in some very limited cases). Some other substructural type systems (e.g. http://en.wikipedia.org/wiki/Substructural_type_system#Order...) might be up to the task, but I'm far from an expert in this area.
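
For example, if every code path agrees to take A before B (a convention the compiler does not check for you; the helper below is just illustrative), the circular wait described at the top of the thread can't arise:

    use std::sync::{Mutex, MutexGuard};

    // Convention: A has the lower rank, so it is always locked before B,
    // no matter which code path needs both.
    fn lock_both<'a>(
        a: &'a Mutex<i32>,
        b: &'a Mutex<i32>,
    ) -> (MutexGuard<'a, i32>, MutexGuard<'a, i32>) {
        let ga = a.lock().unwrap(); // always A first...
        let gb = b.lock().unwrap(); // ...then B, on every path
        (ga, gb)
    }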


Ah thank you! This is super neat.


That said, the ergonomics of the mutex design and the ownership system would probably eliminate most of the race conditions I've seen. Lock ordering can be effectively established by nesting mutexes: e.g. `Mutex<(X, Mutex<Y>)>` allows you to establish that any locking of Y has already locked X.
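
A quick sketch of that pattern (the X and Y types are placeholders): the inner Mutex<Y> is only reachable through the guard of the outer lock, so X's lock is necessarily acquired before Y's.

    use std::sync::Mutex;

    struct X(u32);
    struct Y(String);

    fn use_both(outer: &Mutex<(X, Mutex<Y>)>) {
        let outer_guard = outer.lock().unwrap(); // X's lock must be taken first...
        let (x, y_mutex) = &*outer_guard;
        let y_guard = y_mutex.lock().unwrap();   // ...before Y's lock can even be reached
        println!("x = {}, y = {}", x.0, y_guard.0);
    }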


Yeah, I've thought about this approach in the past. One problem is that it requires you to acquire the same locks in the same order. That is, if you had Mutex<(X, Mutex<(Y, Mutex<Z>)>)>, you couldn't acquire X, then Z, without first acquiring Y, but it would be perfectly safe to do so.

(Although, come to think of it, you can probably work around that with some cleverness; e.g., you could provide the option to access either the data protected by the Mutex or the next Mutex in the sequence through &mut; whichever you chose, you couldn't access the other until you were through with the children. You still have to keep the mutexes structurally in a linked list, though, which is obviously not ideal for all situations. And you still need to deal with the fact that the reference to the first Mutex must be an &-reference, hence can be replicated indefinitely, which allows you to subvert the ordering guarantees; but this was already a problem, and I think there are some clever ways around this using rank-2 lifetimes).
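
One possible reading of that workaround, as a very rough sketch (the Level/LevelGuard names and API are invented here, not anything Rust provides): once a level is locked, you borrow either its data or the next level through &mut, and the borrow checker keeps you from touching this level's data while anything derived from the levels below is still alive.

    use std::sync::{Mutex, MutexGuard};

    // Hypothetical lock-ordering wrapper: each level pairs some data with the
    // next level down.
    struct Level<T, Next> {
        inner: Mutex<(T, Next)>,
    }

    struct LevelGuard<'a, T, Next> {
        guard: MutexGuard<'a, (T, Next)>,
    }

    impl<T, Next> Level<T, Next> {
        fn lock(&self) -> LevelGuard<'_, T, Next> {
            LevelGuard { guard: self.inner.lock().unwrap() }
        }
    }

    impl<'a, T, Next> LevelGuard<'a, T, Next> {
        // Borrow this level's data; while this &mut is live, `next` can't be called.
        fn data(&mut self) -> &mut T {
            let (data, _) = &mut *self.guard;
            data
        }
        // Descend to the next level; while that borrow is live, `data` can't be called.
        fn next(&mut self) -> &mut Next {
            let (_, next) = &mut *self.guard;
            next
        }
    }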




