
Atomics and Adders aren't part of the threading mechanism. They are lower-level abstractions around CAS operations that you mostly only need to care about during concurrent coding. I haven't looked at the C# async implementation, but I'd bet it uses CAS operations. Further, C# most certainly has similar abstractions around CAS. Akka is also written using CAS abstractions.

Just because there are libraries that provide higher-level abstractions around concurrent programming doesn't mean that lower-level primitives aren't necessary. In fact, on the JVM, due to its abstraction away from the underlying machine, these sorts of primitives are all the more necessary.
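
To make the lower-level primitive concrete, here's a minimal sketch (class and method names are mine) of the classic CAS retry loop on the JVM. AtomicLong.getAndIncrement does essentially this for you under the hood:

  import java.util.concurrent.atomic.AtomicLong;

  class CasLoop {
      // Hand-rolled increment built on compareAndSet: read the current
      // value, try to swap in current + 1, and retry if another thread
      // won the race in between.
      static long increment(AtomicLong counter) {
          while (true) {
              long current = counter.get();
              if (counter.compareAndSet(current, current + 1)) {
                  return current + 1;
              }
          }
      }
  }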




> I haven't looked at the C# async implementation, but I'd bet it uses CAS operations.

Yup, the relevant C# class is called Interlocked and it provides a variety of atomic operations. C# also has ref, which allows variables to be passed by reference, which greatly increases the power of the API.

I find Interlocked absolutely essential for getting maximum performance -- like in implementing lock-free data structures.
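
For the JVM folks following along, here's a minimal sketch of a lock-free (Treiber) stack built on AtomicReference.compareAndSet, which plays roughly the role Interlocked.CompareExchange plays in C#. Names are mine and it's deliberately stripped down (no size tracking, pop returns null when empty):

  import java.util.concurrent.atomic.AtomicReference;

  // Every mutation is a single CAS on the head pointer, retried on
  // contention. Safe from ABA here only because Java's GC never
  // recycles a reachable Node.
  class LockFreeStack<T> {
      private static final class Node<T> {
          final T value;
          final Node<T> next;
          Node(T value, Node<T> next) { this.value = value; this.next = next; }
      }

      private final AtomicReference<Node<T>> head = new AtomicReference<>();

      void push(T value) {
          Node<T> oldHead;
          Node<T> newHead;
          do {
              oldHead = head.get();
              newHead = new Node<>(value, oldHead);
          } while (!head.compareAndSet(oldHead, newHead));
      }

      T pop() {
          Node<T> oldHead;
          do {
              oldHead = head.get();
              if (oldHead == null) return null;
          } while (!head.compareAndSet(oldHead, oldHead.next));
          return oldHead.value;
      }
  }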


If you are doing concurrent programming right, you don't need compare-and-swap synchronization, because you aren't sharing memory across different threads. In practice, though, you must use atomics in Java quite a bit, for example if you want a sane counter. Sharing memory across threads is currently unavoidable in Java, even with great libraries around: any kind of UI, audio, file reading and writing, or timer work requires dealing with low-level threads.
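
A quick sketch of what "sane" means for the counter case, assuming a simple hit counter:

  import java.util.concurrent.atomic.AtomicLong;

  class Counter {
      long plain;                                   // plain++ loses updates across threads
      final AtomicLong atomic = new AtomicLong();

      void hit() {
          plain++;                    // read-modify-write, not atomic: a data race
          atomic.incrementAndGet();   // one atomic CAS-based update: the sane version
      }
  }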


Every concurrent system in the world has shared state. If nothing else, being able to signal that you are done (or yielding) is shared. Many common concurrency patterns sidestep a lot of the logic problems in concurrency by not sharing mutable state, typically in the form of message-passing patterns. But how do you suppose those messages are passed? Via shared state, of course.

That is when having good primitives around compare and swap becomes important. Adding these primitives makes it possible to implement those higher-level abstractions on the JVM without having to implement the JVM itself, that is, as libraries.
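
For example, here's a minimal (deliberately naive) spinlock, a sketch of the kind of library-level abstraction you can only build because the JVM exposes compareAndSet:

  import java.util.concurrent.atomic.AtomicBoolean;

  // Acquire by CAS-ing false -> true, release by storing false. Real
  // lock libraries add backoff, fairness, queueing, etc., but the
  // enabling primitive is the same compareAndSet.
  class SpinLock {
      private final AtomicBoolean locked = new AtomicBoolean(false);

      void lock() {
          while (!locked.compareAndSet(false, true)) {
              Thread.onSpinWait();   // hint to the CPU while we retry
          }
      }

      void unlock() {
          locked.set(false);
      }
  }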


> If you are doing concurrent programming right you don't need compare and sweep synchronization because you aren't sharing memory across different threads.

By definition, you aren't doing concurrent programming if you aren't doing shared writes.


> By definition, you aren't doing concurrent programming if you aren't doing shared writes.

I don't understand what you mean, but I like Rob Pike's take on concurrency:

"concurrency is the composition of independently executing processes"[1]

As a specific example, if you don't want code to block while you're waiting for HTTP requests to finish, you're going to be writing concurrent code. I don't understand how that involves "shared writes", but maybe you can explain further? I can write concurrent code that shares no memory, and I can print log statements that show it executing concurrently, so I think you may be mistaken.

[1] http://blog.golang.org/concurrency-is-not-parallelism


> "concurrency is the composition of independently executing processes"[1]

Yes, and that implies shared writes.

> As a specific example if you don't want code to block while you're waiting for HTTP requests to finish you're going to be writing concurrent code. I don't understand how that involves "shared writes" but maybe you can explain further?

That's not really concurrent code, because there's a clear happens-before relationship between making the request and executing the callback "onComplete", so the original request and the ensuing onComplete continuation are serial.

However, this particular example uses concurrency under the hood in order to work. You don't know when the request will be ready, and you need to execute this onComplete and any further onComplete callbacks (if multiple are specified), so under the hood you need a shared atomic reference and synchronization by means of one or more CAS instructions, which also imply memory barriers and so on.
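
A sketch of that shape, not any particular library's implementation, assuming a single callback and no error channel: the pending/completed state lives in one AtomicReference, and registration and completion race on it via CAS:

  import java.util.concurrent.atomic.AtomicReference;
  import java.util.function.Consumer;

  // state is either null (pending), the registered callback, or the
  // result. Whichever of onComplete/complete loses the race sees the
  // other side's write and runs the callback itself.
  class TinyPromise<T> {
      private final AtomicReference<Object> state = new AtomicReference<>();

      @SuppressWarnings("unchecked")
      void onComplete(Consumer<T> callback) {
          if (!state.compareAndSet(null, callback)) {
              callback.accept((T) state.get());      // already completed
          }
      }

      @SuppressWarnings("unchecked")
      void complete(T result) {
          Object prev = state.getAndSet(result);
          if (prev instanceof Consumer) {
              ((Consumer<T>) prev).accept(result);   // callback registered first
          }
      }
  }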


I agree with bad_user's points, but I would like to add my own phrasing in support of them.

The "independently executing processes" in concurrency are not strictly independent. There must be some communication between the processes, otherwise they cannot coordinate to achieve the same task. A typical mechanism is to have a shared queue between the processes as the only point of communication; use of that queue will involve "shared writes".

People build abstractions on top of such mechanisms which hide these shared writes, but they are still there. And that is part of bad_user's point: even though you, yourself, are not actually writing the code for a "shared write", you must call code that eventually performs one.
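
A minimal sketch of that: two threads whose only point of contact is a shared queue. The application code "shares no memory", yet every put and take is a shared write hidden inside ArrayBlockingQueue:

  import java.util.concurrent.ArrayBlockingQueue;
  import java.util.concurrent.BlockingQueue;

  class QueueHandoff {
      public static void main(String[] args) throws InterruptedException {
          // The queue is the single shared, mutable point of contact.
          BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);

          Thread producer = new Thread(() -> {
              try {
                  queue.put("hello");   // shared write, hidden in the queue
              } catch (InterruptedException e) {
                  Thread.currentThread().interrupt();
              }
          });

          producer.start();
          System.out.println(queue.take());  // shared read on the other side
          producer.join();
      }
  }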


kasey_junk's point was that you need CAS primitives (and other low-level threading operations) in order to implement higher-level concurrency abstractions like Akka on the JVM.



