Java 8 Features (infoq.com)
196 points by ancatrusca on May 30, 2014 | 122 comments



The new Date API will be a big win for new developers. I think it was created by the same person that made Joda-Time, so there's some real world experience behind the new API.

I also like the easy parallelization functions, though as the article indicates they're not suitable for every use case.
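
For anyone who hasn't seen it yet, a small sketch of the java.time API added in Java 8 (class name and values are made up, not from the article):

    import java.time.LocalDate;
    import java.time.ZoneId;
    import java.time.ZonedDateTime;
    import java.time.format.DateTimeFormatter;

    public class DateExample {
        public static void main(String[] args) {
            LocalDate release = LocalDate.of(2014, 3, 18);   // immutable, no time zone
            LocalDate oneMonthLater = release.plusMonths(1); // returns a new instance

            ZonedDateTime nowInTokyo = ZonedDateTime.now(ZoneId.of("Asia/Tokyo"));

            System.out.println(release.format(DateTimeFormatter.ISO_DATE));
            System.out.println(oneMonthLater);
            System.out.println(nowInTokyo);
        }
    }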


Absolutely, good thing they've learned from the java logging disaster. I nerdrage so hard every time I think about it - how can one mess up something so basic, yet so important? Maybe some sunny day, they'll fix that too.


I'm curious (it's a long time since I programmed in Java) - what is the java logging disaster?


Beyond actual capabilities of log4j vs JDK logging, it's not clear that anyone considered how adoption would work. At the time, log4j supported several back versions of the JDK, while JDK logging obviously required you to be on the newest version, v1.4. As a result, if a library wanted to support any older JDKs, they couldn't switch to JDK logging, or had to support two parallel logging frameworks for little gain. And if your libraries are sticking with log4j, what's the advantage to your program of using the JDK logging?

slf4j is the current solution for this kind of problem, a common interface to various different logging solutions.
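
For those unfamiliar, a minimal sketch of coding against the SLF4J facade; the class and method names here are made up, and the backend (Logback, Log4j, java.util.logging, ...) is chosen by whichever binding is on the classpath rather than by this code:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class PaymentService {
        // The facade hands back whatever implementation is bound at runtime.
        private static final Logger log = LoggerFactory.getLogger(PaymentService.class);

        public void charge(String account, long cents) {
            // Parameterized messages avoid string concatenation when the level is off.
            log.info("Charging {} cents to account {}", cents, account);
        }
    }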


Logback (next gen log4j): http://logback.qos.ch/


Log4j 2 (next gen logback): http://logging.apache.org/log4j/2.x/


Thanks for sharing this. Curious where you get your java framework/cool api news from? I hadn't heard about this.


Curious, why is Ceki Gülcü (Log4j author) no longer on the team?


Ceki left Log4J to develop Logback, I think because he felt like he had lost control of the project.

"For me, starting a new project was a lot worse than just "disheartening". The SLF4J vote was just the straw that broke the camel's back. After putting many many hours of work into log4j, it became increasingly painful to waste time in arguments, where opinions got asserted by the one writing the longest email. Not fun."

- http://mail-archives.apache.org/mod_mbox/logging-log4j-dev/2...


Just about every project used Log4j. When they added logging to the JDK they shipped logging support which was worse than what everyone was already using (log4j), so adoption was very limited.


Which in turn gave rise to the horrors of commons-logging (clogging) attempting to wrap both APIs and magically detect your logging framework with nasty classloader tricks. While slf4j is a very sane and useful wrapper, it shouldn't be necessary -- JDK logging should have been the wrapper API providing an SPI to plugin implementations (e.g. log4j, logback, etc).


Right - fixing it in that case means "do the same thing as with Joda, but with slf4j and logging instead".


At this point, it is attempting to close the barn door after the horses have run halfway around the world. Too many systems have been built on clogging and SLF4J. "Fixing" it now by just adding a third wrapper to the mix would not be a Good Thing(tm), IMHO. Sadly, it's as fixed as it is ever going to get at this point.


The best part is that these classes are available for use in other JVM languages. I'm more likely to use these from Scala than I am from Java.


I was just thinking the same thing. I'm more likely to use them in Clojure. Pretty cool.


I can't wait to do code reviews where people are sprinkling parallelSort() all over the place, not understanding the consequences.

Serious question: isn't this something the JVM could abstract away?


Good point. There is definitely a challenge of knowing when a single-threaded collection or stream operation may be preferable to the parallel option. When my colleague wrote his summary of Java 8 [1], he wrote:

Returning to the concept of parallel streams, it's important to note that parallelism is not free. It's not free from a performance standpoint, and you can't simply swap out a sequential stream for a parallel one and expect the results to be identical without further thought. There are properties to consider about your stream, its operations, and the destination for its data before you can (or should) parallelize a stream. For instance: Does encounter order matter to me? Are my functions stateless? Is my stream large enough and are my operations complex enough to make parallelism worthwhile?

The author of the linked InfoQ article (OP) cites that same dilemma by explaining the potential for context-switching overhead to counter the advantage of splitting the work.

Abstracting it away with rough heuristics might be possible, but doing so with consistent success could be challenging. In other words, you could elect to use the serial algorithm for small collections, or when the CPU contention at the start of the sort operation is low. But if the comparison operator is expensive, CPU contention is volatile, or if operating on a stream of unknown length, the abstraction may choose poorly. Ultimately, I like the option to choose for myself, but like you, I wouldn't mind having a third option that defers that choice to some heuristic.

[1] http://www.techempower.com/blog/2013/03/26/everything-about-...
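
For concreteness, a rough sketch (not from the linked article) contrasting the two forms; whether the parallel version wins depends on stream size, per-element cost, and how busy the common ForkJoinPool already is:

    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.LongStream;

    public class StreamComparison {
        public static void main(String[] args) {
            List<Long> numbers = LongStream.rangeClosed(1, 1_000_000)
                                           .boxed()
                                           .collect(Collectors.toList());

            // Sequential: one thread, encounter order preserved throughout.
            long seqSum = numbers.stream().mapToLong(Long::longValue).sum();

            // Parallel: work is split across the common ForkJoinPool; only safe
            // if the operations are stateless and don't depend on encounter order.
            long parSum = numbers.parallelStream().mapToLong(Long::longValue).sum();

            System.out.println(seqSum == parSum); // true: a sum is order-independent
        }
    }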


Good point. Something like

    Arrays.sort(arr, Concurrency.LEVEL)
might have been more flexible. Do I want to use the GPU if available? With which priority should it run, max. performance or more as a background task?


GPU won't handle the Comparable interface at all. And the memory transfer CPU<=>GPU would be a true killer. It may work on direct ByteBuffers only, but that's entirely a different subject.


Is that still true with the new unified memory architecture (hUMA) that AMD introduced? I agree GPUs probably can't handle more complicated comparators well, but Arrays.sort() could optimize at least for the primitive types.


hUMA is brand new with no actual support yet, and in order to work properly the CPU has to stall waiting for the GPU.

GPU should be interruptible the same way the CPU is, so if the GC decides to move the memory it can actually do so. The memory can be pinned instead, though. The latter poses some side effects with the GC. If the GPU is not on the same die it will have to virtually copy the array as the L1/L2 caches won't be accessible.

Arrays.sort(somePrimitive[]) would be too much of an edge case to optimize for. Overall it's a hard nut to crack. Java 8 streams and direct buffers, however, could be a good starting point for performing various operations via the GPU.

Disclaimer: I am really not well versed in the GPU tech.


Reminds me of a funny comment on Stack Overflow:

> This continual creation/termination/destruction of threads is done so often that I wonder where the idea came from. I presume some poisonous textbook is responsible. Sometimes, it seems that the whole SO is riddled with threads that add two integers and then terminate, just so that the 'main' thread can wait with 'join'. God help us :(

I hope parallelSort() just calls sort() if the array is smaller than some threshold.


But if the array is that small it won't matter. The sort() will be quicker, but it will be a trivial difference.


It reuses the ForkJoin common pool and, from what I can see, doesn't have a way for you to specify the thread pool, factory, or anything else. This is where implicit execution contexts in Scala really help, as much as people hate implicits. Of course you have to use the tasksupport setter in Scala for parallel collections instead, but at least it's configurable.


It's configurable!

    ForkJoinPool forkJoinPool = new ForkJoinPool(2);
    forkJoinPool.submit(() -> {
        // write your parallel query here
    }).get();
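Spelled out a bit more, assuming the (undocumented but widely relied-on) behavior that a parallel stream started from inside a ForkJoinPool task runs on that pool rather than the common pool:

    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.ForkJoinPool;

    public class CustomPoolExample {
        public static void main(String[] args) throws Exception {
            List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8);

            // Submitting the stream pipeline as a task to our own pool keeps its
            // work off the shared common pool.
            ForkJoinPool pool = new ForkJoinPool(2);
            int sum = pool.submit(() ->
                    numbers.parallelStream().mapToInt(Integer::intValue).sum()
            ).get();
            pool.shutdown();

            System.out.println(sum); // 36
        }
    }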


I think we can all agree that nobody wants to write code this way.


Not an answer, but I was surprised to note that Erlang also requires you to explicitly choose the parallel version of a function. Might be interesting to see what Guy Steele did with this issue in Fortress.

If you did it often, I guess the JIT could work out what was best the last few times. Trial and error. :-)


Equally serious question: Didn't you just answer your own question before you asked it?


No, I didn't. If, for even an instant, you ever buy the argument that a JIT is better for real-world application tuning than static code analysis, you see my point.


Not without knowledge of which functions are pure and which effects are composable, which Java-the-language really isn't set up to provide.


Wish Android would move in this direction. We're stuck in a 1.6 API for the most part.


I was confused by this comment, because the last time I heard Android was on a 1.5 equivalent. But sure enough as of ADT 22.6.0 (March 2014) some Java 1.7 features were added: http://developer.android.com/tools/sdk/eclipse-adt.html


Android is 1.6 equivalent with some of the syntactic sugar of Java 1.7, but none of the Java 7 or 8 APIs (NIO, new date library, optionals, lambdas, etc).

Unfortunately the legal fight between Oracle & Google has really made me doubt that the cool new language features will be coming any time soon to Android.


Google is not going to try and advance Java development. They should just deprecate it and switch to Go. But right now they seem stuck in the middle, not doing either.


> They should just deprecate it and switch to Go

That would be a step backward; we really need generics and reasonable support for exceptions on Android (or any large-scale system, for that matter).

Go is a fine language at the system level or to replace Ruby/Python scripts but that's about it.


I hope they don't. Go seems like a mediocre language at best: in no way obviously better than Java. Given that they already forked the JVM and added lots of custom class libraries, I'd rather they just forked Java properly and renamed it instead, or switched to another JVM compatible language. Go would have no benefits at all.


I really hope they don't. Go is really good for some things, but true object-oriented languages are great for designing and controlling UIs.


Hm, I think the way Go uses interfaces would actually be ideal for manipulation of UI.


Go also has true closures and easy concurrency support, both of which are very handy in UI development.

My beef with Go is weak web-programming and prototyping support and immature libraries. I'm surprised there's been so much uptake in the Ruby/Python communities, which are very strong in those areas, but I guess Ruby/Python are now being used outside their original domains and those use-cases are pretty ripe for a language like Go.


The process control changes might not seem so exciting, but they fix some serious pain points; I'm very excited to integrate these into my code.

Optional is another nice addition, I've been creating one myself in all Java projects I've created since I first encountered it in Rust.


My problem with Optional<T> is that it basically makes overloaded Java methods impossible because of erasure. I'm worried that it is going to start getting overused; the JDK developers really only intended it for use in the Streams API.


I was wondering why Optional support, which I was so interested in, seemed so hastily inserted and poorly thought out.

There's no equivalent of Scala's orElse (returning a second Optional if the first one is empty), and since default methods, while useful, don't allow adding things to types you don't own, you can't use method-like notation for it; the best you can get is static importing and wrapping a function around it. ofNullable (creating an empty Optional from a null instead of throwing) isn't the default; of, which throws, is. Also, Optional isn't Iterable, which is disappointing, since using it with a for loop can be useful if you want to avoid nested closures.

But worst of all, they haven't started slowly deprecating most of the standard library (anything that it makes sense to pass null to, or that may return null in some cases according to the API). Option types are arguably the best solution for the null problem, at least for statically typed languages, but having null still be possible while also having Option is arguably the worst of both worlds. In my understanding, Scala deals with this by almost never using null except for Java compatibility, and tries to create Options as soon as possible when calling a Java function that may return null.

Basically, I think they need to make some sort of standard Nullable and NotNullable annotations, add them to everything in the standard library, and have some sort of package annotation that tells IDEs to check every call to a Nullable and bug you to wrap it in Optional.ofNullable. Then deprecate those methods with Nullables by Java 10 or 11, with strong warnings for those who call them (or possibly forcing a special compiler flag/package annotation to use them).

Even without doing that, they are still adding methods that return null (such as Hashtable::computeIfAbsent) instead of Optional. Why, for fck's sake?


Could you explain that a little bit more? I'm used to Maybe from Haskell and am curious what might be missing from Java's Optional.


Basically, because of type erasure you cannot have overloaded methods using different Optional types. So test(Optional<A> a); is the same method as test(Optional<B> b);

I think you can define the types to get around it, but that is messy and a PITA. For example, class OptionalA extends Optional<A>.
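
A minimal sketch of the clash (hypothetical class and method names):

    import java.util.Optional;

    // After erasure both methods become test(Optional), so declaring both is a
    // compile-time error ("name clash: both methods have the same erasure").
    class OverloadClash {
        void test(Optional<String> s) { }

        // Uncommenting this second overload makes the class fail to compile:
        // void test(Optional<Integer> i) { }
    }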


That doesn't seem so bad to my eye, though. In Haskell terms it just means that any overloaded operators on Optional have to respect parametricity. If you need something more specifically typed then you can just use a new name.

Is there a particular use case?


Wow that's a real pain point. Why would they even do that, is there a use case?


Backward compatibility - you could run an application that uses generics, introduced in Java 1.5, on a Java 1.4 VM. At the time that was pretty amazing. These days, with so many backward compatibility-breaking changes being introduced, it doesn't feel so great.


That was the original plan, but it never worked in my experience. The compiler as shipped required that -source 1.5 use -target 1.5. So hobbling the language this way served no benefit in the end.


Btw, type erasure is only a problem for Java because it supports runtime type information (which is incomplete, because of type erasure), and overloading (which I have no idea why the compiler can't handle). For languages like ML and Haskell, type erasure is no problem, because you can't access the types at runtime anyways.


In my opinion, no it really isn't. You can just create a method with a different name or a static constructing method if you need to overload constructors.

The major pain point with generics is that they don't work with primitives.


Because of type erasure, all Optional<T> are the same type at runtime, so Optional<Integer> can't be overloaded with Optional<Boolean>, because at runtime it will be impossible to tell which method should be called.


I still prefer using JSR-305 annotations/tools (like Findbugs) to do static analysis. Reification should help Optional<?> become more useful, but why would you ask for run-time checking when in most cases compile-time checking identifies the problems?


Guava has a nice explanation: https://code.google.com/p/guava-libraries/wiki/UsingAndAvoid.... Essentially, it forces you to actively think about the "absent" case when programming.


For those still on pre Java 8, Google's Guava also adds Optional and other useful things like immutable collections.


And now frameworks can finally supply Optionals, since before this it only led to multiple competing implementations.


But is it really better than @Nullable?


I would say in most cases: no.

Due to backwards compatibility concerns, no existing Java APIs that may currently return null can ever be changed to return Optionals instead, so nulls need to be dealt with regardless. Also, there's technically nothing stopping an Optional value from actually being null itself, or from developers calling Optional.get() without checking isPresent(), just like they might currently use a reference without checking for null.

I personally really wish JSR-305 had been adopted in Java 8, and the existing APIs retrofitted with these annotations.

http://www.oracle.com/technetwork/articles/java/java8-option... describes Optional, and compares it to how in Groovy, the safe navigation and elvis operators allow you to write code like this:

   String version = computer?.getSoundcard()?.getUSB()?.getVersion() ?: "UNKNOWN";
With Java 8 Optionals, this instead becomes:

   String version = computer.flatMap(Computer::getSoundcard)
                            .flatMap(Soundcard::getUSB)
                            .map(USB::getVersion)
                            .orElse("UNKNOWN");
which is not only much longer and arguably uglier, but also forces developers to suddenly have to worry about applying lambdas using flatmap vs map, where conceptually you're really just dealing with method calls.

New languages like Kotlin solve this much better in my opinion by introducing nullable vs nonnullable types. While Java can unfortunately never adopt this due to backwards compatibility issues, @Nullable and @Nonnull will do most of the time, and hopefully we'll see operators like ?. and ?: in future versions of Java.


While I've enjoyed the Elvis operator in Groovy, the point of optional is to make null go away altogether. In Groovy, you are never guaranteed anything isn't null, so the end result is that you will Elvis everything. While this is easier than the good old if statements, you are still paying for the penalty of a billion ifs in bytecode.

With option, you clearly declare what is safe, and what isn't, and the compiler won't let you screw up. You can decide which layer of your code handles the empty case, and lots of unnecessary ifs go away from the bytecode itself, so the app will run faster, and you know when you have to even consider the null case.

Now, doing that in Groovy is rather silly, because your language is dynamic and your types are optional, so all of this compile time safety would not provide any value anyway. Option is just the way you'd solve the problem in the strongly typed way. It's how you handle it in Scala, for instance, and how Haskell would deal with it.

As far as using flatmaps and maps to work with optionals, yes, it's a chore. I'd argue it's borderline abuse of the stream operators, as treating Option as a collection is ugly. That said, putting yourself in a situation where you chain 3 optional getters is also object design from hell, so one should avoid putting themselves in that position altogether.

In Scala, instead of flatmapping single optionals, we often getOrElse(), or use pattern matching as an extractor. Now that's a feature I would like to see Java 'steal' from the Scalas and Erlangs of the world, but I do not see that happening.


> in Groovy, the safe navigation and elvis operators

The elvis op is called the null coalescing op in other languages. [1] Groovy's promoter chose the elvis name to fit in with marketing the Groovy and G-String names.

PHP also uses the ?: symbol but other languages use different ones, e.g. C# uses ?? and Perl uses //

[1] http://en.wikipedia.org/wiki/Null_coalescing_operator


Completely different concepts. @Nullable expresses a precondition indicating whether or not something can be null. One example of its use is by static analyzers such as Findbugs to find misuse of APIs. In contrast, Optional encapsulates the behavior when something is not present (or null) -- very similar to the Null Object pattern [1]. Used together, you can build an API that encapsulates the present/not-present behavior and ensures that it is not passed a null reference for the Optional instance.

[1]: http://en.wikipedia.org/wiki/Null_Object_pattern


It is a pity that Qt is not mentioned in that article. AFAIK, it is the only widely-used C++ library that implements null objects consistently. Ever wondered why the Qt framework doesn't use any exceptions or C-style error codes? It is solely because of proper use of null objects.


Given how few tools understand @PolyNull, yes. Better to have one type system rather than two.


Of course it's better than @Nullable. @Nullable-annotated code still compiles your NullPointerException-throwing code; using Optional, it won't.


Assuming that nobody screws up and assigns null to an Optional return result/variable instead of Optional.empty(), or calls Optional.get() without remembering to check ... in other words, it's very much like @Nullable when using a compiler that understands it, except it causes problems with overloading and reflection as noted above due to generics type erasure.

If you want code that won't compile due to @Nullable violations I think the Checker framework can give you that, or you can use an IDE that flags violations like IntelliJ and just treat any static analysis warning in that category like a compile failure for your own purposes. The nice thing about nullity annotations is that the newest JVM languages like Ceylon and Kotlin are building nullity into their type systems, so if you annotate an API in this way, code written in these new languages will know automatically that the reference can be null and the compiler won't let you access it at all until you tested or asserted away the nullness. The upgrade path for Kotlin especially is looking like it could be quite strong, so I think I'll be sticking with @Nullable for now in the hope that later on we get "real" type system integration via newer languages.


I don't think @NotNull/@Nullable go away completely, but the use of Optional makes dealing with nullables much easier. I think this is a much cleaner way to deal with possible nulls:

    public ImmutableMap<String, Long> getSessionTime(HttpServletRequest request) {
      return Optional.ofNullable(request.getSession(false))
        .map(s -> ImmutableMap.of("lastAccessedTime", s.getLastAccessedTime()))
        .orElseThrow(IllegalStateException::new);
    }
If/When HttpServletRequest is updated to support Optional it can return it directly instead of the caller having to do it, and that is when Java will really see the upside.


Hmm, if I understood that code it could be written like this:

    HttpSession s;
    if ((s = request.getSession(false)) != null)
      return ImmutableMap.of(....);
    else
      throw new IllegalStateException();
Maybe I'm weird but I find the old fashioned version far easier to read than the new form.


Shifting the null checking to the type system is the biggest win. So, assuming the getSession() returned an optional:

    public ImmutableMap<String, Long> getSessionTime(HttpServletRequest request) {
      return request.getSession(false)
        .map(s -> ImmutableMap.of("lastAccessedTime", s.getLastAccessedTime()))
    }
The above would not work because the call is returning another optional. The null is forced to be dealt with instead of allowing it to lead to programmer error[1]. The construct makes the programmer call either orElse(), orElseGet(), or orElseThrow(). The programmer could also just return the optional and let the caller deal with it.

Of course this is a trivial example where the programmer is likely expecting null, but many times null can be returned and it is not always clear.

[1] https://code.google.com/p/guava-libraries/wiki/UsingAndAvoid...


StampedLocks have been talked about extensively, at least on the jsr-166 mailing list, and they required adding a load/load barrier in sun.misc.Unsafe. That was the prime reason it needed Java 8, as previously it required a no-op CAS on the load path.

For most people StampedLock will remain quite a mystery, as it is harder to use and the vast majority of Java developers don't actually write (or even use) low-level concurrency primitives.
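
For the curious, a minimal sketch of the optimistic-read pattern StampedLock enables; the Point class here is just the usual illustrative example, not something from the article:

    import java.util.concurrent.locks.StampedLock;

    class Point {
        private final StampedLock lock = new StampedLock();
        private double x, y;

        void move(double dx, double dy) {
            long stamp = lock.writeLock();       // exclusive write lock
            try {
                x += dx;
                y += dy;
            } finally {
                lock.unlockWrite(stamp);
            }
        }

        double distanceFromOrigin() {
            long stamp = lock.tryOptimisticRead(); // no CAS on the read path
            double curX = x, curY = y;
            if (!lock.validate(stamp)) {           // a writer intervened; fall back
                stamp = lock.readLock();
                try {
                    curX = x;
                    curY = y;
                } finally {
                    lock.unlockRead(stamp);
                }
            }
            return Math.sqrt(curX * curX + curY * curY);
        }
    }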


"StampedLocks have been talked about extensively at least on jsr-166 mailing list"

Isn't that the jsr that designed StampedLock and decided to add it to Java? If so, I would hope it was talked about extensively, but I wouldn't see such talk as a counterexample.


That's the thing: people who are interested in them (StampedLocks) were already subscribed to the list. Like I've mentioned - outside it there is close to no interest in low-level primitives, down to CPU memory barriers.


It's a nice performance improvement over ReadWriteLock, as anyone who's ever tried it will know that RWL is pretty slow.


Yes, RW-locks have an update (CAS) on the read path, which prevents them from scaling.


But for many frameworks StampedLock is a big step forward.


Which companies are running Java 8 already on production?


It's too early for it to already be in production, but there are definitely companies that have it on their near-future roadmap. I know a couple of them.


Why do you think it's too early?

My company is running it on production with no issues.


My company will soon


Wouldn't Mandatory<MyType> (or NonNullable<MyType>) be better for Java, since all reference types are nullable (aka optional) already...? Haskell's Maybe monad doesn't meet java's specific needs.

It could check for null in its setValue method, at write time - more useful than discovering it at read time (though I'm not sure it's worth the abstraction).
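
Roughly what such a wrapper might look like - sketched here with construction rather than a setter, and purely hypothetical:

    import java.util.Objects;

    // The null check happens at write time (construction), not read time.
    public final class NonNullable<T> {
        private final T value;

        private NonNullable(T value) {
            this.value = Objects.requireNonNull(value, "value must not be null");
        }

        public static <T> NonNullable<T> of(T value) {
            return new NonNullable<>(value);
        }

        public T get() {
            return value; // never null by construction
        }
    }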


In Scala Option[T] works great, as in libraries people stopped using nulls to represent optional values. And it actually works out when enough people are using it. Plus, if you're unsure about a value you receive (whether it's nullable or not), you can always assume the worst and wrap it in an Optional.



Secure random generation is exciting, especially for endpoints.


Java the good parts?


Already been written, though not updated for Java 8:

http://www.amazon.com/Effective-Java-Edition-Joshua-Bloch/dp...


No one sees the irony? :)


How so?

Is it that no one is talking about new Java features because Java isn't the hot new web stack or JS library? Or because most big Java users move to new things with the speed of continental drift?


We're a big Java user but apparently faster - our current environment is Java SE 7 and Java EE 6. We'll move to Java EE 7 when there is commercial support for an application server (probably JBoss EAP 7.0.0) and will likely allow Java SE 8 at the same time.

We're quite a bit slower at migrating existing applications (except for security issues), but because we code defensively, we are generally backwards compatible.


Re Optional<T>:

If I get this right, the source code of the signature of many (many!) methods will now be twice the size of before, unless the names of your types and variables were already considerably bigger than "Optional<...>" (10 chars).

I hope it's worth this cost.


Yeah, I hope I am interpreting this the wrong way. But instead of doing

  X.getA().getB().getC().getD()
we have to

   X.flatMap(X::getA)
    .flatMap(X::getB)
    .map(X::getC)
    .orElse("UNKNOWN");  //[1]
Not sure why this is going to be better?

[1] http://www.oracle.com/technetwork/articles/java/java8-option...


That's not a fair comparison, as the flatMaps are eating nulls (really Optional.empty()s) while the naked calls will throw NPEs. The equivalent code for the first case is:

  String result = "UNKNOWN";
  T a = X.getA();
  if (a != null) {
    T b = a.getB();
    if (b != null) {
      String c = b.getC();
      if (c != null) {
        result = c;
      }
    }
  }
Optional (like Scala's Option or Haskell's Maybe) is useful because it forces you to reason about "empty" cases. Whereas in typical Java code there is no way for the compiler to force you to handle the possibility of nulls (thus leading to NPEs), wrapping a type in Optional forces you to deal with the case where the result is not present, leading to more correct programs. Plus, it gives you handy tools for dealing with non-present values, like allowing a chain of computation--any step of which may fail--without needing failure checks on every step.


> The difference between this and the old Atomics is that here, when a CAS fails due to contention, instead of spinning the CPU, the Adder will store the delta in an internal cell object allocated for that thread.

Wow man, Java's still all in with its terrible threading mechanism, all the while C# has had async, await and parallelism for years, Go is well past version 1 to great acclaim, node.js and libuv are taking over, Akka has gained enormous popularity, and yet real coroutines in Java aren't even on the horizon. Even Python 3 is getting coroutines.

Low level threads in Java are so goddamned awful and unavoidable that it makes me want to pull my hair out. And I otherwise love Java. Java would be so awesome with real, native coroutines. Like that should be the only thing they should be working on right now.

Edit:

I'm guessing the people downvoting this have no idea what I'm talking about and don't know what coroutines are.

You know I'm starting to think there's maybe an entire generation of Java engineers who have no idea how much easier it is to write concurrent code with other tools and Java has ruined them. I love Java, but expand your horizons a bit.


> C# has had async, await and parallelism for years

For Scala developers, there's Scala Async, which is exactly what C# "async" is, implemented as a library: https://github.com/scala/async - plus Scala's `Future[T]` has a better design than C#'s Task.

> node.js and libuv are taking over

And both suck in comparison with Java's NIO and the libraries that have been built on top of it.

> Low level threads in Java are so goddamned awful and unavoidable that it makes me want to pull my hair out

Low level concurrency primitives are necessary for building higher-level abstractions on top. For example I need low level concurrency primitives for implementing a Reactive Extensions (Rx) implementation that does back-pressure: https://github.com/alexandru/monifu/

> the people downvoting this have no idea what I'm talking about and don't know what coroutines are

You're assuming too much.


> plus Scala's `Future[T]` has a better design than C#'s Task

Why? The one issue I have with Task is the aggravating SynchronizationContext/continueOnCapturedContext. ConfigureAwait(false) really should have been the default. Otherwise, my experience with both seems about equivalent.


You're getting downvoted because your post is "middlebrow dismissal" (https://news.ycombinator.com/item?id=5072224) which is also unnecessarily aggressive. It's quite possible for someone to know exactly what you are talking about, and still downvote it.

Regarding the technical points, having low-level mechanisms for fine-grained synchronization between threads (such as these) does not prevent you from also providing high-level mechanisms for parallelism and concurrency. In fact, it enables others to develop such abstractions in that language in question, without relying on the language itself to provide them.


> unnecessarily aggressive

Aggression that's really the result of thread frustration in Java. I hadn't read about middlebrow dismissal before; I can see how my post came off wrong. I'm glad I posted what I said, however, because the responses taught me new things.


I think you can be aggressive (heck, look at Linus) but it has to be substantiated by in-depth knowledge and arguments. If you are aggressive and then make wrong statements (co-routines not sharing memory), well, you'll get downvoted. I guess take that as life advice in general.


> If you are aggressive and then make wrong statements (co-routines not sharing memory), well, you'll get downvoted.

My point was that a programmer shouldn't need to share memory to write concurrent code. That statement is not wrong.


I think what people are taking exception to is the difference between abstraction users (programmers writing concurrent code), who should be able to use high-level concurrency idioms to hide their shared state, and abstraction creators (programmers writing high-level concurrency idioms), who need concurrency primitives to make their abstractions efficient.

If we didn't have concurrency primitives, the only high level concurrency idioms we would have would be the ones that made it into the JVM, which is a very slow process.


> That statement is not wrong.

Yes it is. Unless you use Erlang or OS processes you are sharing memory. Or rather, you're not sharing it any more or less than with all the other technologies you listed.

But I agree with your sentiment in general that shared memory and concurrency do not work well together. That is why it is worthwhile learning Erlang if you want to build reliable, fault-tolerant concurrent systems.

Also, nothing stops you from using queues and threads so data is local to each thread and gets copied over the queue.
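
For example, a minimal sketch of that queue-and-threads style using a BlockingQueue; the class and message names are made up:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Each thread keeps its own state and only hands over immutable messages.
    public class QueueExample {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);

            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 5; i++) {
                        queue.put("message-" + i);   // blocks if the queue is full
                    }
                    queue.put("DONE");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread consumer = new Thread(() -> {
                try {
                    String msg;
                    while (!(msg = queue.take()).equals("DONE")) {
                        System.out.println("got " + msg);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }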


> Unless you use Erlang or OS processes you are sharing memory.

Actually, even actor-based systems share memory. If two actors A and B send a message to an actor C and expect a response from it, they are sharing memory: what's in C's state. Which can be different depending on whether C received A's message first or not.


> If two actors A and B send a message to an actor C and expect a response from it, they are sharing memory: what's in C's state.

Ok, in that respect there is just one big pile of shared memory in the whole world, isn't there (maybe except for military air-gapped systems)? It is the equivalent of saying that if A makes an HTTP POST to server C, then it shares memory. Well ok, I am not sure what you mean by "shared memory"; usually it means living in the same heap, so you can access it via a pointer or reference.


Atomics vs Adders aren't part of the threading mechanism. They are lower-level abstractions around CAS operations that you mostly only need to care about during concurrent coding. I haven't looked at the C# async implementation, but I'd bet it uses CAS operations. Further, C# most certainly has similar abstractions around CAS. Akka also is written using CAS abstractions.

Just because there are libraries that provide higher-level abstractions around concurrent programming doesn't mean that lower-level primitives aren't necessary. In fact, on the JVM, due to its abstraction away from the underlying machine, these sorts of primitives are even more needed.
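
As an illustration, a minimal sketch of the kind of CAS loop such primitives make possible - a simplified lock-free (Treiber) stack push; the class is hypothetical:

    import java.util.concurrent.atomic.AtomicReference;

    public class LockFreeStack<T> {
        private static class Node<T> {
            final T value;
            final Node<T> next;
            Node(T value, Node<T> next) { this.value = value; this.next = next; }
        }

        private final AtomicReference<Node<T>> head = new AtomicReference<>();

        public void push(T value) {
            Node<T> oldHead;
            Node<T> newHead;
            do {
                oldHead = head.get();
                newHead = new Node<>(value, oldHead);
                // retry if another thread changed head between the read and the CAS
            } while (!head.compareAndSet(oldHead, newHead));
        }
    }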


> I haven't looked at the C# async implementation, but I'd bet it uses CAS operations.

Yup, the C# library is called Interlocked and it provides a variety of atomic operations. C# also has ref, which allows variables to be passed by reference, which greatly increases the power of the library.

I find Interlocked absolutely essential for getting maximum performance -- like in implementing lock-free data structures.


If you are doing concurrent programming right you don't need compare-and-swap synchronization because you aren't sharing memory across different threads. One must use atomics in Java actually quite a bit, like if you wanted a sane counter. Sharing memory across threads in Java is currently unavoidable, even when there are great libraries around. For example, any kind of UI, audio, file reading and writing, or timers require having to deal with low-level threads.


Every concurrent system in the world has shared state. If nothing else being able to signal that you are done (or yielding) is shared. Many common concurrency patterns get around a lot of the logic problems in concurrency by not sharing mutable state, typically in the form of message passing patterns. But how do you suppose those messages are passed? Via shared state of course.

That is when having good primitives around compare and swap becomes important. Adding these primitives makes implementing those higher level abstractions on the JVM possible for people who are not implementing the JVM, that is as libraries.


> If you are doing concurrent programming right you don't need compare-and-swap synchronization because you aren't sharing memory across different threads.

By definition, you aren't doing concurrent programming if you aren't doing shared writes.


> By definition, you aren't doing concurrent programming if you aren't doing shared writes.

I don't understand what you mean but I like Robert Pike's take on concurrency:

"concurrency is the composition of independently executing processes"[1]

As a specific example if you don't want code to block while you're waiting for HTTP requests to finish you're going to be writing concurrent code. I don't understand how that involves "shared writes" but maybe you can explain further? I can write concurrent code that shares no memory, and I can print log statements that show me it's executing concurrently, so I think you may be mistaken.

[1] http://blog.golang.org/concurrency-is-not-parallelism


> "concurrency is the composition of independently executing processes"[1]

Yes, and that implies shared writes.

> As a specific example if you don't want code to block while you're waiting for HTTP requests to finish you're going to be writing concurrent code. I don't understand how that involves "shared writes" but maybe you can explain further?

That's not really concurrent code, because there's a clear happens-before relationship between making the request and executing the callback "onComplete", so the original request and the ensuing onComplete continuation are serial.

However, this particular example uses concurrency under the hood to work. You don't know when the request will be ready and you need to execute this onComplete and the next specified onComplete (if multiple callbacks are specified), so under the hood you need a shared atomic reference and synchronization by means of one or multiple CAS instructions, which also imply memory barriers and so on.


I agree with bad_user's points, but I would like to add my own phrasing in support of his points.

Concurrency with "independently executing processes" does not mean the processes are strictly independent. There must be some communication between the processes, otherwise they cannot coordinate to achieve the same task. A typical mechanism is to have a shared queue between the processes as the only point of communication - use of that queue will involve "shared writes".

People build abstractions on top of such mechanisms which hide these shared writes, but they are still there. And that is part of bad_user's point: even though you, yourself, are not actually writing the code for a "shared write", you must call code that eventually performs one.


kasey_junk's point was that you need CAS primitives (and other low-level threading operations) in order to implement higher-level concurrency abstractions like Akka on the JVM.


You have libraries such as https://github.com/kilim/kilim or http://docs.paralleluniverse.co/quasar/ for Java lightweight threads.


Yeah there are some nice libraries, my preference is Akka, but I think Java needs real native coroutines:

https://en.wikipedia.org/wiki/Coroutine

https://en.wikipedia.org/wiki/Coroutine#Implementations_for_...


Akka doesn't have native coroutines / lightweight threads; it relies on regular JVM threads. It does reuse them for actors, but it doesn't do the same things as Quasar (user-level threads via bytecode instrumentation). I'm a big Scala and Akka fan, but thought it's worth noting.


Quasar also relies on JVM threads. Akka's lightweight actors are just as native as Quasar's lightweight threads.


(main Quasar author): Akka actors are nothing like Quasar fibers (or even Quasar actors). Quasar provides true lightweight threads - though they're not "native" to the JVM, they're as real as Erlang's processes or Go's goroutines - while Akka actors are a formal way to organize asynchronous callbacks.


Akka is a library, not an extension of the JVM, so it would not be able to provide native features to Java. I didn't know that Quasar did more. One is able to write much better concurrent code with Akka than with vanilla Java, but I think you can still run into threading issues where Java makes this unavoidable, like when playing with audio.


Adders were easily implementable and readily available pre-Java 8; basically they need the new JMM from Java 5 on (i.e. 10+ years). Hence, not really a hot topic, yet very nice to have. However, any serious project already had something of the sort.

On the other topic: async/coroutines/friends etc. have nothing to do with "adders". The latter are low-level concurrency primitives that enable fast counters and the like. Feel challenged to write a good, simple counter (like a page-hit counter) with co-routines or sync.
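
For reference, a minimal sketch of the kind of counter LongAdder (new in Java 8) is meant for; the class name is hypothetical:

    import java.util.concurrent.atomic.LongAdder;

    public class HitCounter {
        private final LongAdder hits = new LongAdder();

        public void recordHit() {
            // Under contention this updates a per-thread cell instead of
            // spinning on a single CAS, unlike AtomicLong.incrementAndGet().
            hits.increment();
        }

        public long total() {
            // sum() is a snapshot, not an atomic read of all cells.
            return hits.sum();
        }
    }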


First, Java has fibers[1]. Second, Adder has nothing to do with that. It's a low-level concurrency construct, that many languages would be happy to have. It doesn't matter if you use kernel threads or fibers, concurrent data structures are a great tool, and Java leads the way in that area.

[1]: http://blog.paralleluniverse.co/2014/05/01/modern-java/


>node.js and libuv are taking over, and Akka has gained enormous popularity and yet real coroutines in Java aren't even on the horizon

There is something like that for Java

http://docs.paralleluniverse.co/quasar/

Android has async-like abstractions in its API. But I like threads better anyway.

> Wow man Java's still all in with it's terrible threading mechanism all the while C# has had async

What is so bad about threads? Why do you need co-routines?


> What is so bad about threads? Why do you need co-routines?

You aren't forced into sharing memory across parts of your program that have no business sharing memory. In Java you can easily end up in a situation where you look at a program and don't know what thread one line is executing in versus another line in the same file. The same instance of a class may be referencing one of its properties in one thread or another, and that's sharing memory across threads. A simple if statement may fail you when you share memory across threads, because after the if statement evaluates the underlying value of that shared property, it may have changed before the next lines execute.

Coroutines as they exist in Go or C# help programmers write code that doesn't share memory across threads. Akka does this; try it, it's amazing. Java needs native support. Threads are shit. Sharing memory across threads is a nightmare.
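
To illustrate the check-then-act hazard described above, a minimal sketch (the Wallet class is hypothetical, not from the thread):

    public class Wallet {
        private int balance = 100;

        // Broken without synchronization: two threads can both pass the check
        // and both withdraw, leaving the balance negative.
        public void withdraw(int amount) {
            if (balance >= amount) {
                // another thread may have reduced balance right here
                balance -= amount;
            }
        }

        // One conventional fix: make the check and the update atomic together.
        public synchronized void withdrawSafely(int amount) {
            if (balance >= amount) {
                balance -= amount;
            }
        }
    }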

Edit: If you're going to downvote, explain why I'm wrong.


> You aren't forced into sharing memory across parts of your program that have no business sharing memory.

I think you are confused (but I didn't downvote you, someone did and then also downvoted my post, was it you ;-) )

Anyway. Unless you use Erlang or fork OS processes you will be sharing memory between your concurrency units.

So learn Erlang; it will do you good.

http://learnyousomeerlang.com/content

You can combine Java threads with queues and that works fine. You can shoot yourself in the foot with Go or node.js (probably more so).


You can also have separate heaps per thread in Java if you use a JVM that supports it, such as Avian: http://oss.readytalk.com/avian/


Coroutines also share process space; they're just deterministic, so you don't need locks as frequently. The real benefit is lower overhead in terms of memory use, creation, and task switching.



