Features of Project Loom incorporated in JDK 21 (java.net)
222 points by philonoist on Aug 15, 2023 | hide | past | favorite | 120 comments



Congrats to Ron Pressler (pron on HN) and the team, who have worked on this for so long, ever since the Quasar library.


Nice. Project Loom is often summarised as just its main feature, Virtual Threads, but it will also add a whole bunch of other features and improvements for doing tasks concurrently without having to worry about blocking IO or race conditions like you would have in the past.

Some useful code examples are here to play around with https://github.com/nipafx/loom-lab
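For a quick taste, here's a minimal self-contained sketch (class and method names are mine, not from the linked repo) of what blocking code on virtual threads looks like in JDK 21:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadsDemo {
    static String greet() throws Exception {
        // Each submitted task runs on its own cheap virtual thread, so
        // blocking inside the task does not tie up an OS carrier thread.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            return executor.submit(() -> {
                Thread.sleep(100); // plain old blocking call, no callbacks
                return "hello from a virtual thread: "
                        + Thread.currentThread().isVirtual();
            }).get();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(greet()); // prints "hello from a virtual thread: true"
    }
}
```

The try-with-resources works because ExecutorService is AutoCloseable in recent JDKs; close() waits for submitted tasks to finish.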


I never really "took" to Java, but are all projects this boilerplate-heavy / nested?

`src/main/java/dev/nipafx/lab/loom/disk` for example seems like a lot of directory just for getting some source files organized.


Yes, they all are.

First you have `src/main/java` (compare `src/test/kotlin`), and then the package is encoded as nested directories, and Java has a convention of using the domain name as the root of your packages. So, if your company domain is "sun.com", all your packages start with "com.sun", and then you have more specifiers under that.

But all IDEs and github condense empty directories together, so it's not that bothersome.
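To make the mapping concrete (the company and class names here are hypothetical), a class declared in package com.sun.tools must live under a matching directory path relative to the source root:

```java
// File: src/main/java/com/sun/tools/Frobnicator.java (hypothetical example)
// The package name "com.sun.tools" dictates the com/sun/tools/ path;
// the build tool only contributes the src/main/java prefix.
package com.sun.tools;

public class Frobnicator {
    public static void main(String[] args) {
        // Reflection reports the declared package, which the compiler
        // requires to match the directory the file lives in.
        System.out.println(Frobnicator.class.getPackageName());
    }
}
```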


> But all IDEs and github condense empty directories together, so it's not that bothersome.

Yes, in most tools that's quite transparent. Every once in a while, when I use the GitHub mobile app (which doesn't do that), I'm reminded how bothersome it is.


src/main/java is a convention that started with Maven; it's simple to change in the project build file (pom.xml). Otherwise, it's due to the fact that a package namespace in Java must correspond to a directory tree.


You'd think after all the engineering that went into Java they would have solved this and allowed arbitrary directories and such.

I don't know why, but this bothers me for some reason. To the point that I (perhaps irrationally, I admit) couldn't get "into" learning the language.

Same deal with $GOPATH in Go, until vendoring became properly supported I just hated it.


> allow arbitrary directories and such

OH PLEASE NO.

You generally don't write Java using bash/cd/vim; the IDE presents classes and package paths and puts files where they need to be. "Where is my code" is not something you need to explicitly think about.

In contrast, javascript/typescript files are often poorly organized and it's frequently necessary to look for code via grep-like tools. Yes sure, someone could organize js/ts files well, but it actually takes mental effort and coordination among the whole team.

Java was heavily inspired by Smalltalk, which IIRC didn't give you a "filesystem" view of your project - it was an integrated environment with a proprietary storage format. Also IIRC early versions of IBM's VisualAge for Java maintained this paradigm. I think the "everything is individual files, but the IDE maintains them" approach was actually a pretty good compromise.


Java is not alone in this; it turns out some conventions are great in large-scale projects.


Another way of looking at it is that simplifying the conventions a bit here couldn't hurt. Maybe instead of the long src directories it could be simplified to top-level app and tests directories, for example.


For a simple HelloWorld or PoC development, a flat list of .java files would work, but once you go past that (20+ files or so) a standardized directory structure is an important element of managing the complexity.

And using the Java ecosystem standard makes it easier for fellow/future developers to continue the support and expansion.


It's not about bucking conventions so much as allowing for alternate ones, is all.

There's room for a reasonable alternative that chops the directory structures into more reasonable nesting, for instance.


What is the proposed alternative exactly? I like main vs test. I like isolating java files from other languages or templates. I like the src directory bucketing sources from other root project build files and output artifacts. I have a ton of wishes for java improvements but this is functionally not an issue in day to day work.


True, however ever since the Visual Cafe, JBuilder and Visual Age days, Java IDEs have tried to provide a Smalltalk-like experience, given where the community was coming from, so the concept of a virtual image mapped onto the file system has always been present.

As such, code browsers always made this quite easy.

The fact that Eclipse provides a Smalltalk-like code browser isn't an accident; rather, it's a result of its Visual Age roots.

It is only a problem when one makes a point of navigating through the source outside of Java tooling.


C# is a very similar language with some of these restrictions loosened if you want to try that route.


Funnily enough, I did enjoy C# development once I worked around this sort of thing using package references


This is indeed supremely irritating. Besides this deeply nested directory structure, every public class/interface/enum/record has to be in its own file. This whole thing creates tons of files and directories with trivial amounts of code.


Every top-level public(ish [1]) type declaration requires its own file. It may be irritating but it also brings some clear advantages for compilation speed. You can be irritated by many small files or by longer compilation times.

When you compile a file, the compiler quickly finds all other file dependencies and compiles (or just parses) only them, i.e. given a single file, the compiler knows exactly where to find all the declarations of all the types used in it. This property may not matter for languages that don't have good separate compilation anyway -- and few languages do, which is why people coming from other languages may find the restriction weird -- but Java's separate compilation is excellent. That turns out to also greatly help in-memory compilation for an upcoming feature where we can choose to compile files lazily, on demand: https://openjdk.org/jeps/8304400

To see that that is, indeed, the reason for the restriction, notice that it is only required for public(ish) top-level types. Non top-level public types can also be easily located, and non-public top-level types that are stored in files that don't match their names can only be referenced by code in that file (more precisely, the compiler is allowed to emit an error if they're referenced elsewhere). See the bottom of §7.6 of the JLS: https://docs.oracle.com/javase/specs/jls/se20/html/jls-7.htm....

Anyway, the point is that the restriction is there for a good reason. We sometimes entertain removing that restriction (and maybe someday we will), but the compilation benefits have so far turned out to be quite useful. We like our efficient separate compilation -- I don't think any other mainstream language does it as well as Java -- and may be able to get even more out of it.

[1]: Really, any top-level class that is referenced by name from other files.
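A small sketch of the rule (hypothetical types): only the public top-level type must match the file name, while auxiliary package-private types can share the file, at the cost of not being reliably referenceable from other files:

```java
// File: Order.java -- the public type's name must match the file name,
// so the compiler can locate the file from the type reference alone.
public class Order {
    private final LineItem[] items;

    public Order(LineItem... items) { this.items = items; }

    public int total() {
        int sum = 0;
        for (LineItem item : items) sum += item.price;
        return sum;
    }
}

// A non-public top-level type may live in a file with a different name;
// per JLS §7.6, the compiler may reject references to it from other files.
class LineItem {
    final int price;
    LineItem(int price) { this.price = price; }
}
```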


those are features

not for the compiler, but for the poor souls who won't have to hunt for where the hell that structure is defined


I like the Java conventions with all the files belonging to a package in individual nested directories of their own. It makes things systematised and easier to sort in my head when dealing with large projects with thousands of source files.


You can put your classes in the default package and source files in the root dir ("/Main.java"), but as it stands, most of the industry and tooling follows conventions that make life easier for the people who need to maintain the code.

But yes, namespaces must match the directory structure.


There are some fairly major Java backend libraries that are founded almost entirely on async-type APIs, e.g. Vert.x, Quarkus, Micronaut, etc.

I am interested to see how a widespread move to virtual threads, which for them is somewhat existential, affects their futures (pun intended). Also what backend libs that are founded on virtual threads emerge and supplant them.


In principle, reactive and callback-style libraries could eventually use Virtual Threads as backend as well, but I expect for them to fall out of fashion. After all, they were introduced because the existing threading support in Java did not adequately support massively-concurrent applications.

There will surely be new frameworks that take advantage of Virtual Threads and will eventually require the first Java LTS version with Virtual Threads support. Other projects might offer Virtual Threads as a backend, maybe even by default, but will continue to support traditional backends for the foreseeable future, e.g., Apache Tomcat.


Not sure about now, but a few years back the company I worked for was heavily invested in Finagle [1], using Future<T> pools. I'm sure virtual threads would only enhance this framework. Also, Spring and its reactive WebFlux would probably benefit as well [2].

[1] https://twitter.github.io/finagle/

[2] https://docs.spring.io/spring-framework/reference/web/webflu...


Just to add some context, Micronaut 4 supports VTs (via the blocking scheduler), while Quarkus runs on Vert.x, which in turn runs on Netty. There’s also Helidon, in which Nima has become an alternative to Netty as a server.

This topic was also discussed by the Spring/VMware guys behind Reactor and Spring WebFlux, which also runs on Netty rather than Tomcat. So in this scope it's interesting to see how JDBC and Tomcat adapt to VTs. In the end, asynchrony and fibers are different perspectives on concurrency management. What I hope is that most people use them as complementary approaches.

Lots of companies have used blocking Tomcat underneath and were content with it. With VTs they just got a free opt-in performance upgrade. Reactive streams have been perceived by many community members as something overly complicated. So VTs will most likely be warmly welcomed as soon as they trickle through to the production code bases. Most of the team leads I spoke with recently expressed a happy grin when talking about VTs in the context of their blocking codebases.


I actually used Jetty for a recent project because they were one of the first to integrate support for running handlers on virtual threads.


It is often very difficult for old, established frameworks and libraries to move to new or improved ways of using a programming language. Besides the usual reasons of downstream breakage, technical challenges, timing, etc., they critically reflect their project founders' self-image. So most projects find it more worthwhile to justify current choices at any cost rather than try new things with an open mind.


Quarkus is built on Vert.x, so once that gets upgraded the two will support it. Vert.x is testing it: https://github.com/vert-x3/vertx-virtual-threads-incubator



This was just the proposal, lots of things have changed since then!

To know what Loom is about, read the official Java Magazine post about it (from 2021 though, so not sure how much has changed without being updated): https://blogs.oracle.com/javamagazine/post/going-inside-java...

Or just the current JEP: https://openjdk.org/jeps/425

More resources on the Project page: https://wiki.openjdk.org/display/loom


Correction: That's the first preview, here's the latest JEP on Virtual Threads: https://openjdk.org/jeps/444


Could you get the Project Page (https://wiki.openjdk.org/display/loom) to update the link to the JEP? It was listing the one I posted above just now...


We aggregate/organize content from the Java team here: https://inside.java/tag/loom


Does anybody know what this will mean for the already existing green threads implementations in e.g. Kotlin and Scala Cats? Will these benefit from project Loom, or are they somehow incompatible?


One of the main authors of ZIO (another effect runtime in Scala) gave a talk about ZIO + Loom.

https://youtu.be/ygOmwze5ETk

For some use cases there seems to be a performance advantage to using Loom, if the library properly supports it. (Mainly it gains from using plain old boring blocking IO instead of the async NIO stuff, which has overhead: https://youtu.be/ygOmwze5ETk?t=2396)

And at least according to JDG, cats-effect is so heavily tied to their Async type that they can't work around the async overhead as easily as ZIO can. I don't know if this is actually true or not; it would need input from actual cats-effect maintainers.


Daniel Spiewak has posted a perspective before. The TL;DR is that Loom makes the implementation of CE/ZIO more straightforward, but it probably won't replace the effect systems themselves since they offer a lot more than just "light threads":

https://www.reddit.com/r/scala/comments/sa927v/comment/htsoy...


FWIW, the cats/zio split has historically seen a lot of contentiousness and hostile feelings both ways, for reasons unrelated to their technical differences, so I would take anything negative one side says about the other with a bigger dose of skepticism than usual.


I don't know about Scala, but there are benefits for Kotlin indeed. For example, a special dispatcher could be made to schedule each coroutine on its own virtual thread. This would be very handy for coroutines that suspend on IO, and it would be a better alternative to Dispatchers.IO, which currently uses a threadpool-like strategy. It's also possible that debugging and profiling are improved, given that each virtual thread has its own stack. And you can still take advantage of all the nice things in Kotlin for structured concurrency, flows, etc.
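On the Java side, the backing for such a dispatcher might look like this (the executor setup and thread-name prefix are my own choices; Kotlin could then wrap the executor with its existing Executor.asCoroutineDispatcher() extension):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

public class VirtualDispatcherBacking {
    // One fresh virtual thread per task; a coroutine that blocks on IO
    // would then occupy only a cheap virtual thread, not a pool thread.
    static ExecutorService newBackingExecutor() {
        ThreadFactory factory = Thread.ofVirtual().name("io-", 0).factory();
        return Executors.newThreadPerTaskExecutor(factory);
    }

    public static void main(String[] args) throws Exception {
        try (ExecutorService executor = newBackingExecutor()) {
            // Each submitted task reports a name like "io-0", "io-1", ...
            System.out.println(executor.submit(
                    () -> Thread.currentThread().getName()).get());
        }
    }
}
```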

Kotlin, however, has already committed to a colored-function approach, and that will likely remain. Java, on the other hand, will not need that, which is nice.


Yeah, with this I'm a bit unsure if using coroutines in Kotlin will be worth it?

Coroutines will probably benefit from Loom, but will it be worth writing stuff using coroutines vs just using fibers/loom and write it the "old" way? Or are there other benefits to be had with coroutines?


It's pretty hard to use Kotlin without co-routines these days. Most modern Kotlin frameworks tend to use them. And they work outside the JVM as well.

Mostly we're talking about Java projects focusing on doing things the Java way that may or may not be using Kotlin for that. Kotlin mobile is unaffected; it won't get Loom any time soon (as far as I know) and things just don't work the same way there. Kotlin (and coroutines) are the go to solution there. I don't see that changing.

And from having used coroutines with Spring extensively, I kind of like it. Works great. Makes everything easier. Especially the unholy mess that is web flux.

There aren't a lot of things that Loom does that Kotlin doesn't do already. Green threads? Check. Structured concurrency? Check. At best it will do them a little faster with Loom in place, which isn't a bad thing of course. But it doesn't really fix a problem I have. Being able to use threaded and blocking IO things in a non-blocking way without dedicating OS threads is going to be nice, though. Great if you are stuck with a lot of legacy stuff.

Roman Elizarov addressed a few things regarding Loom at the last KotlinConf: https://www.youtube.com/watch?v=zluKcazgkV4


> It's pretty hard to use Kotlin without co-routines these days.

Hmm, what for? GUI, Android? I've mostly used Kotlin as a java-alternative in huge orgs, where we're married with Spring and lots of IO blocking stuff. Seen no issues with coroutine-less Kotlin code.


Android, iOS (with e.g. KMM, or Compose Multiplatform), browser (with kotlin-js and React, fritz2, kweb, or several other popular frameworks). Soon WebAssembly. And there are of course lots of JVM-based server projects that use some sort of asynchronous framework (Spring WebFlux, Vert.x, Netty, etc.) that you can use with coroutines as well.

Basically everything except what you are doing with it. Nothing wrong with that, I've been on such projects too.

Anyway, non blocking IO is pretty common in the jvm world these days. And most of that works great with co-routines. And of course most Kotlin users are still found on Android. Doing modern Android without co-routines is not a thing.


I guess we have to wait to see proper benchmarking. But I suspect that Kotlin coroutines will still have an advantage in scenarios where a ton of suspending/resuming is needed, because I think this is probably cheaper with Kotlin coroutines. One example is GUI applications. On the other hand, for applications such as network services, where tasks mostly suspend on IO, virtual threads will probably be superior. But as I said, nothing prevents Kotlin coroutines from using them behind the scenes as well.


Structured concurrency isn't finalized yet and I think that's the one big benefit of coroutines in the interim.

Another benefit is that I see no reason why using coroutines today won't benefit from loom tomorrow. It should be pretty simple to retrofit loom stuff onto coroutines.


One high-level benefit of Kotlin Coroutines is that they're multiplatform, so one could write common Kotlin code that could compile to JVM, JS, or Native targets. Virtual threads are only for JVM.


Dunno if anything has changed since last year but this was a very interesting post by one of the authors of Cats Effect - https://www.reddit.com/r/scala/comments/sa927v/how_will_loom...

TL;DR: it didn't look like it would impact effect systems much at all, and most of the benefits (initially) are to make imperative code more performant. So I guess it closes the performance gap somewhat, which is good, but it won't be a big deal to anyone already using their own concurrency construct.


Scala can just use the virtual thread pool as its default execution context, and it's beautiful: you can just call whatever you want in a Future.


With this initial release there will be some benefits but maybe not full benefits from using loom. That is to say it's easy enough to swap a lot of what's being done for kotlin coroutines or cats with just launching a virtual thread.

Where things will get interesting is when the continuation API finalizes. That will allow those APIs REALLY deep integration with the JVM.


Like any guest language, it is up to them to eventually adopt platform features, or not.


Mostly those will just adapt to it. I can't speak for Scala but with Kotlin, the whole Loom thing is just yet another solution that it can abstract from (there are many already; on and off the jvm) with some extension functions and adapter code.

And since Loom deliberately reuses the existing APIs for things like Threads and ThreadPools, creating a Loom-capable coroutine scope should actually just work without changes. Kotlin coroutines come with extension functions to create coroutine scopes from thread pools, and this is just another thread pool. Of course coroutines won't map 1 to 1 to green threads without further work, but that's more of an optimization than a functional problem.

For that there will likely be some changes over time to the JVM backend to make use of Loom when it is available, and maybe even some API changes. Also, if you are using Spring with some blocking things like JDBC, your life will get a bit easier: you can switch from the default threaded IO coroutine scope to a Loom-capable one for those things and use fewer threads. All good stuff. But it won't massively change what that code looks like. I expect that this stuff will start happening around Java 21, the LTS version where this becomes more widely supported and used.

People forget that kotlin's co-routines work outside the JVM as well. It's one of the multi platform libraries that works on all platforms that kotlin compiles to (jvm, android, js, native, and soon wasm). Of course Android while it uses Java doesn't use a jvm as its runtime and instead has its own runtime with ahead of time compilation and a lot of Android specific libraries and frameworks. So no Loom. I'm not sure if that's ever going to be addressed; or even if there's a big need to do so.

On each platform, Kotlin coroutines integrate with whatever is there. Same APIs, same code, completely different backends. And via extension functions you can also interface with existing frameworks like you would find on iOS or Android, or indeed the JVM, which has quite a few of these. Native of course has many compilation targets for different OS and processor-architecture combinations: Mac, Windows, Linux, iOS (which is several platforms actually), Android, etc. Android native is actually a supported target in addition to regular Android.

We use kotlin-js with co-routines in the browser on top of javascript's promises. It's actually quite nice for that. And it mostly works exactly the same as in the jvm on our spring server. Great for asynchronously calling some APIs and then updating some state. Or launching some background co-routine that does stuff.

The impact of this is limited to server side jvm usage with things like Spring and Quarkus. And mostly it's a positive impact. Makes it a bit easier to deal with some of the older Java libraries out there that are still depending on blocking IO. The rest of the Kotlin ecosystem outside server side jvm is not affected by this at all.


Language-level coroutines derive no direct benefit from Loom but it also doesn't hurt them. The benefit is you can just stop using those language features, or if you already have them in wide use, you can just wrap all your code in runBlocking{} and virtual threads. Also, no work is needed to start using virtual threads because they're a library/runtime feature not something a language has to support. One of the benefits of being a JVM language - free upgrades.

Exception: GUI code. Kotlin Coroutines is deeply integrated into Jetpack Compose. In theory parts of the framework are thread safe but it's not well documented and all the examples / tutorials want you to learn coroutines.


> The benefit is you can just stop using those language features

Not at all. This is like saying "now the runtime supports plus and minus natively, so you don't have to use a math library anymore".

Libraries like ZIO, Cats Effect or Arrow-KT offer a vast amount of functionality that you would still have to rebuild on top of Loom. Only the foundations are different.


Besides cooperative threading, they are also adding a more versatile switch, with some form of pattern matching: https://openjdk.org/jeps/441

But it still falls short of deconstruction patterns of Scala, though that is something that they want to do in the future (see 'Future Work' section)

I think that Scala is driving a lot of this progress (for example, Java adopted streams, probably inspired by Scala).
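For what it's worth, JDK 21 does finalize record deconstruction in switch via record patterns (JEP 440) alongside JEP 441; the "Future Work" concerns deconstruction of arbitrary, non-record classes. A small sketch with hypothetical types:

```java
// Pattern matching for switch (JEP 441) plus record patterns (JEP 440),
// both final in JDK 21.
sealed interface Shape permits Circle, Rect {}
record Circle(double radius) implements Shape {}
record Rect(double w, double h) implements Shape {}

public class ShapeDemo {
    static double area(Shape shape) {
        // The case labels deconstruct the records directly; no default
        // branch is needed because Shape is sealed, so the compiler can
        // check the switch for exhaustiveness.
        return switch (shape) {
            case Circle(double r) -> Math.PI * r * r;
            case Rect(double w, double h) -> w * h;
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Rect(3, 4))); // prints 12.0
    }
}
```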


It's great to see this, but it's not going to make me want to go back to Java from Go. In Go, everything is built with this sort of concurrency from the ground up. This makes everything fit together very nicely without the cognitive overhead and dissonance you have to work through with the Java libraries of various vintages and approaches.


Thread per request programming is alive and well in Java. You usually have to go out of your way for callback-oriented versions of things. This will simply make the existing naive code more scalable, and obviate the need for async style programming in new projects that are targeting scale.
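A sketch of what that naive-but-scalable style can look like (the port, response body, and helper names are my own): accept connections in a plain blocking loop and hand each one to a new virtual thread.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPerRequestServer {
    // The same blocking accept loop as classic thread-per-request code,
    // except each connection now costs a virtual thread rather than an
    // OS thread, so large connection counts stay cheap.
    static void serve(ServerSocket server, int maxConnections) throws IOException {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < maxConnections; i++) {
                Socket socket = server.accept();
                executor.submit(() -> handle(socket));
            }
        }
    }

    static void handle(Socket socket) {
        try (socket) {
            // Blocking write; no callbacks or futures in sight.
            socket.getOutputStream().write("HTTP/1.1 200 OK\r\n\r\nok".getBytes());
        } catch (IOException e) {
            // hypothetical: log and drop the connection
        }
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(8080)) {
            serve(server, Integer.MAX_VALUE); // in practice: run until shutdown
        }
    }
}
```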


Me, on the other hand, I only code like it's 1996 if I have to.


At first I misread and wondered what Project Loon[1] could possibly contribute to JDK 21.

[1]https://en.m.wikipedia.org/wiki/Loon_LLC


Balloon factory classes ;-)


Any metrics to compare this to, say, streams or any other OS-based parallelisation?


[flagged]


> People still use Java?

In the Fortune 500? Yes, pretty much everywhere.

In startups? It's Kotlin or Scala mostly.

The JVM is far from being unpopular, but for sure it's not as sexy as ['React','Deno'].push(...)


You wouldn't say so if you only read HN


As it is continuously in the top 3 most popular languages, yes.


Well, it's currently 4th and trending down.


Source?


the TIOBE index


That’s the worst possible answer to my question - unless you believe that visual basic is ahead of javascript, as TIOBE once proudly showed.

Honestly, just forget about it entirely, random.org reordering a list of PLs would be more accurate than that.

If you are interested, here is a ranking that has some relevance to the real world: https://redmonk.com/sogrady/2023/05/16/language-rankings-1-2...


if by the real world, you mean the web (lol), then sure


Yes, it's a fairly popular language


Companies do


A good step forward and nice that it's in the next (Oracle) LTS version. But is it too late? NodeJS started cleaning up by pushing async-for-all over a decade ago. Then Go came along and did the same thing but with a simple statically typed language. The Java world developed async server frameworks + GraalVM native image in response and now will be able to incrementally phase them out, but Java lost significant ground server-side because it took so long to identify and respond to these moves by competitors.

The question is now whether Java will regain some of that lost ground, or whether people came for the async but stay for the "isomorphism" (same language in both browser and server) / AOT compilation model.

My gut says it's not too late for them and devs do engage in a kind of continuous rolling evaluation of platforms. Once Java 21 launches people might start to re-evaluate Java vs Node and once Loom is fully supported in native images, you might start to see people re-evaluate Java vs Go at least. Where by "Java vs" I really mean "JVM languages vs" like with Kotlin, Scala and Clojure as well. Or maybe even Loom will be compatible with GraalJS and then you can do fully blocking programming with Node-style efficiency but on the JVM.


I think your view of history is a bit distorted. In my part of the JVM world (Scala) we've been async for well over a decade. The JVM has supported async / non-blocking IO since nio circa 2003 (https://www.jcp.org/en/jsr/detail?id=51). It was definitely not developed in response to Node or Go.

Straightforward implementations of async are not very ergonomic to use without first class functions, which didn't come to Java till 2014 (Java 8), so that may have held back adoption for Java programmers.

Graal Native Image is more about reducing startup time and memory usage (usually for serverless or command line applications) than async.

As a final note, it's amusing that way before nio, Java actually had user-level / green threads but dropped them: https://en.wikipedia.org/wiki/Green_thread


Yes, but was NIO really usable for most tasks? Even with things like Netty, plumbing it all the way through to web server frameworks, databases and so on took a while. Like, if you wanted to do SSL+async then a lot of manual work was required before Loom (if you wanted to use the stdlib).

Native image isn't related to async for sure. I meant more that it's a feature that people like from Go, which has both AOT/static binaries and also usermode threads.


I can't really comment on the usability of nio as I've never directly used it. I've used many libraries and frameworks that wrapped it up into something more usable. My understanding is that the basic system is quite hard to use, and I guessed at one issue that might contribute to low adoption above (lack of lambda expressions in Java, making callback or promise based libraries tedious to use.)


It's not hard to use exactly but it's low level. My point was that you can't just say "give me an async HTTPS server" or "give me an async JDBC connection", whereas because node started from scratch with this model then it's all async-capable. The new JVM Loom model is better though. No question.


What lost ground?

JetBrains makes tons of money selling IDEs written in Java, the biggest-selling mobile OS is written in a mix of Java and C++, and even Kotlin is heavily dependent on the Java ecosystem.

Alongside two other major IDEs, equally written in Java.

Java is so relevant that even Microsoft, despite its past history with Java, has bought jClarity and become a very relevant OpenJDK contributor; Java is the only language with day-1 feature parity with .NET when new Azure capabilities are rolled out.

Most enterprise shops, when not doing Java, are mostly doing .NET, not nodejs, except for the Web frontend teams.


He said server side; IDEs and Android are not server side. Java dropped the ball, and MS, Node, and Google ate their lunch.


Meh.

Only for non-heavy or non-critical tasks.

No one's going to seriously propose using Node or MS for HFT. Or even use javascript to make something as heavy as a videogame streaming server. It would be too slow. Even if you did do something so quixotic as to build an HFT server in Node, no hedge fund or investment bank is going to switch to a Node based HFT communication system. They'd just be agreeing to be noncompetitive.

I guess what I'm saying is that where it counts, I almost always recommend real-time libraries in Java or C++. Rust is not there yet. Hospitals can't risk a patient's life on it yet by having the PACS system, or God forbid, the modalities themselves, dependent on the newest Rust release. Or even worse, Node. It's way too risky.

Most people on HN don't know or hear about work in these kinds of fields. So we think Node is a great and effective tool for solving server side problems. Because it solves most of the problems we see. It's not. It's a great easy tool for solving server side problems. And we'll all make far better CTO's in the future if we recognize the difference, and the revenue opportunity extant, between easy and effective.


I'd love to hear more on this, what are some problems we don't see? And what do you mean by effective?


Hello, dreamworld. Google and Microsoft output more Java code per day than you can ever imagine.

Microsoft is again a Java vendor, which I mentioned, and you failed to read.

Android is not Java™, but it's still Java the language, and it depends on the Java ecosystem.

Server-side code, regardless of the language, is written in Java-powered IDEs that many devs gladly pay money for.


My experience is that most Java people didn't switch to other languages nor to async abstractions. So this will not have people revert lots of work, just improve their existing way of working.

If one's only reading HN it may be easy to think of Java on the decline and almost dead. But I've done lots of greenfield java projects (or, mostly Kotlin, but still on the JVM) various places until recently (when I switched job).


Node is taking the entirely opposite approach to Java and Go, so not sure why you're putting them in the same boat.

Basically what Java is doing now, and what Go did when it came out, is to turn the clock back on (explicit) async/await or promises/futures, and return to simpler threading+blocking IO APIs, while retaining the performance benefits of async code. This is achieved by integrating the blocking IO implementations with the thread scheduler, and by using user-space threads instead of expensive OS threads with their large stacks.

Of course, Java has other threading constructs than Go, but the core idea with Project Loom will be similar: you can just do something like

  int[] xs = new int[N];
  Thread[] ts = new Thread[N];
  for (int i = 0; i < N; i++) {
    final int idx = i; // lambdas may only capture (effectively) final locals
    ts[i] = Thread.ofVirtual().start(
      () -> { xs[idx] = blockingRead(); }
    );
  }
  for (int i = 0; i < N; i++) {
    ts[i].join();
  }
Or something close to it, and you will get efficient threading even for large N.


> because it took so long to identify and respond to these moves by competitors

This is and has been Java's explicit strategy since the Gosling days. They watch and wait and implement changes at a glacial pace. They let other languages figure out what works, and implement changes only after they've been demonstrated useful and caveats and pitfalls have been thoroughly explored.

This conservative and deliberate change process permits much better backward compatibility and a level of stability that is very rare among programming languages.


And yet, they keep copying Go's (and their own!) mistakes while learning nothing.


Which ones are you talking about?


Loom's focus on fibers/green threads, trying to reuse the Thread API which is as defective as ever.


Please expand on your thoughts. The only defective part of the thread API is the thread group, which nobody ever used anyway.


What?!?


When Java was new it was pretty radical though. At least in the category of mainstream corporate languages. Who else was offering things like an advanced speculating VM, stable/documented bytecode, a huge stdlib, a cleaned up C++ like memory safe language with reflection, sandboxing, applets etc? It was sufficiently radical that MS had nothing like it and felt a need to respond with their own Java impl and later .NET, the UNIX world had nothing like it either and that led Miguel to try and implement .NET for GNOME.

If you compare Java to obscure languages that hardly anyone used outside of the MIT AI Lab then you could argue it was conservative. But that wasn't the space it was playing in.

It feels like this Java as slow-follower idea is actually relatively "new" and more of a response to it slipping from being a relatively futuristic platform in the 1990s to being unable/unwilling to really try new things now.


Java was always conservative on the language front, but much more experimental/state-of-the-art on the runtime one, and that is still true today (e.g. GC-wise no other language is even close to Java’s, and practically all GC research benchmarks against it)


> Java was always conservative on the language front, but much more experimental/state-of-the-art on the runtime one

These two things might not be unrelated; a state-of-the art runtime might be necessary to compensate for weaknesses of the language.

> and that is still true today (e.g. GC-wise no other language is even close to Java’s, and practically all GC research benchmarks against it)

For instance, a language which generates less garbage on the heap (instead of boxing nearly everything) would have less need for an advanced GC.


The other way to see this is that a powerful runtime allows to write more straight-forward code.

Allocation in the heap allows for much more powerful programming constructs, as an object owns its own memory layout. Back in 1990 the cost of pointer indirection was virtually free. The need for stack allocation came up only somewhat recently. And the value-object model Oracle has come up with is infinitely more powerful than what comparable languages are offering.

The trend in performance is allocating more jobs to the runtime. In HPC, this is already the de-facto standard.


Having to brake less thanks to regenerative braking in electric cars doesn’t make advancements in braking technology obsolete. Surely, Java will get value types sooner or later, which will decrease the amount of allocations, but most programs make extensive use of the GC fundamentally, so both advancements will improve the language.


Smalltalk.

Which was hardly experimental in 1996, with IBM VisualAge series of IDEs being written in it, and being the ".NET" of OS/2.

There is a reason why hotspot and most early IDEs trace back to Smalltalk environments, frameworks and culture.


This isn't particularly relevant. Java has been very conservative with language changes since at least 8 (2014) but I'd argue 5 (2004) so for the past decade or 2 it has been a conservative language.


I wouldn't want to work in an ever evolving always experimental language. Languages that keep changing are annoying and makes your code break. That's time and effort spent not building stuff.


There are companies that use Java and will keep using Java no matter what. A 10+ years old Java is fine for them.

An example: all the incumbent financial sector.


Yes, but there's little growth to be had there. I think the Java guys don't want to be the new COBOL, they'd like new apps to be written in Java too.


Java and, to a lesser degree, C#, seem to have joined COBOL in the list of “forever languages”. Unless financial institutions trust LLMs to rewrite them in newer languages (and finds that cost-effective), there is no way that humongous corpus of business-critical software will ever be ported.


Why not? COBOL has great job security.


I'm told COBOL itself doesn't actually. It's not hard to learn and lots of cheap workers in India will do it for you. The job security is knowing the mainframe tech stack in general, and the specific ways it's used in specific institutions that don't have good internal docs.


And keeps being updated to modern tooling and paradigms.

https://www.microfocus.com/en-us/products/visual-cobol/overv...

COBOL 2023 - https://www.iso.org/standard/74527.html


Late? For what? Lightweight threads weren’t a reason to move to Node or Go. The Loom fibers are very special in that the blocking syntax doesn’t change, there’s no apparent coloring, and you can easily adapt your blocking calls to the lightweight dispatched non-blocking style now. Asynchrony is different from fibers. The .NET team has been pondering introducing fibers when it became obvious that Loom gained significant traction. I don’t think there’s any “lost ground” but simply a healthy competition. It’s not all about the merits of a particular language I think but more about the people behind the ecosystems, the compute platforms and the job market.

As for phasing out reactive streams and Graal, I don’t see why one would ever tread that path.

Why do you think that “isomorphism” is important? Besides, Kotlin Multiplatform, ScalaJS, Vaadin all give you that option.

As for your third paragraph, I think it’s again more of a cultural and job market issue than a purely merits-based technological one. The Node and Go ecosystems gained enough traction to pull in many production apps. The agility story is different with them too. Experienced engineers are very reluctant to changing languages, relearning and rewriting their production apps. Management might prefer to follow whatever they perceive as the present trend but whether they can do their due diligence and justify such a decision financially is a different story. Juniors might be curious enough to rewrite their apps on a different stack. But established software follows different standards. And for the majority of Java software you don’t need the most recent features, it works just fine. Since Java has always been so huge, there’s been a lot of badly designed software (in absolute figures), whose maintainers might be tempted to “rewrite” it in a different stack. But it’s rarely a sane decision. With the new feature set one can now simply opt into certain enhancements, but it’s not an ultimate solution to questionable architectural and design decisions either.

But what do you mean by “Node-style efficiency”? It’s not very performant and is quite a resource hungry platform. I know a couple companies that decided to move from a legacy Spring stack to a modern Node stack. Their argument was like “Java is old and boring, we want to rejuvenate our software by using TypeScript, and one of our principals decided that’s the right call.” Not kidding. In reality though they over-hired JS devs for their frontends and now believe they can profit from that “isomorphy”.

There’s so much expertise behind Java and its ecosystem, most problems solved long ago, something all other platforms can only dream of.


I think we're in agreement :-) I'm talking about how those languages/platforms pitched themselves to get users. In the early days, Node pushed itself as about performance. Write in a high level scripting language like Python or Ruby that you already know, but with a really advanced VM behind you, and with fully async everything so you can handle a bazillion requests simultaneously. That was a winning strategy for them.

Isomorphism was then the other selling point of Node as other platforms caught up. Have one team that can write frontend and backend code, easily share code, easily hire. Yes nowadays you can do that with Kotlin but that's a very new capability (and note: not Java, not Loom). As you say yourself, real companies factor this into their decision making.

Loom solves the performance argument and without losing usability, which is a significant win. But it doesn't help with the other factors and companies like the ones you mention already made their decisions, so, this is what I mean by "late". Those companies probably aren't going to migrate from TypeScript back to Spring are they.


Oh, I wasn’t aware Node had originally pitched performance. Was it only in the context of single-threaded IO?

Yes, from the hiring perspective that “isomorphism” makes a lot of sense on paper.

Pardon me, in your third paragraph, what “other factors” are you referring to?

As for the companies that made such a move, well, it’s their choice. There’s always a reason, it boils down in the end just to whether it was well justified at the moment. Time will tell.

A reasonable argument in my decision making was Spring’s huge memory footprint in a microservices architecture deployed to public cloud. Go’s profile was nigh of 100m peak RSS for a given payload, while Spring’s was 1.5g, per instance. I could just take one commodity nano VM at $5 a month instead of multiple 1g-4g VMs at $50-150 a month each. FinOps. It’s a common theme, suppose you have a service mesh with many tiny services, it does sound right to have them utilize just as few resources and be spawnable instantaneously on-demand. So a mixture of Java and Go services comes up quite frequently.

But then you need to factor in a lot of other costs and the fact that Go’s GC will hiccup anyway and potential goroutine leaks could bring down the entire cluster. And so on. That’s just to say that—on paper—it might look assertive, while in reality you would get to realize that there’s so much opaque minute detail behind the Spring or Quarkus APIs that once you realize that it’s missing in your Go or Node code base and get to reintroduce it (wasting hours), you end up with something that need not perform just as well. The more stuff you plug into your Go or Node lightweight frameworks, the more integrational complexity is on you and the less agile you get. It’s all their choice.

There are ways to minimize JVM resource utilization. And you don’t need to always use Spring, and you can have instantaneous start-up times with persisted app state. To me, it’s getting harder and harder to justify Go or Node on the backend.


> Go’s profile was nigh of 100m peak RSS for a given payload, while Spring’s was 1.5g, per instance.

It's not clear from the context if Java actually needed 1.5G to run, or if it simply just saw there was available memory on the instance and made use of it?

In my experience Java tends to make use of a larger portion of memory than is actually required. This is a good thing in my opinion, as it reduces the work of the GC. You could try running it on smaller instances to see how it performs, if you have not done so already.


No, it was real RSS resource utilization. Basically due to many extra beans being loaded by Spring and extra allocations due to the typical abstractions. If you run it on a smaller instance, your Java process goes out of memory and will have to be restarted. I usually profile all projects thoroughly. You can’t really run a Spring app on less than 512m virtual memory. You can run Java on a few megs, the overhead is negligible over compiled binaries. You can run a lightweight Netty based framework (anything tapir supports essentially) on just 64m heaps. But not Spring. Spring does substantially more work behind the scenes.


Aka green threads.


JDK 1.1 Green threads were quite different than the new Loom fibers.


The current specifications explicitly avoid calling Virtual Threads fibers exactly to avoid confusion with older implementations of the idea.


Basically you ignore a bunch of the work done for async, callbacks, etc. and go back to making a new "thread" and handling a request.


or you watch and learn, understand the strengths and weaknesses, and then make an informed decision. async/await is no panacea. Quoting the Loom overview[0]:

"Project Loom's mission is to make it easier to write, debug, profile and maintain concurrent applications meeting today's requirements. Threads, provided by Java from its first day, are a natural and convenient concurrency construct (putting aside the separate question of communication among threads) which is being supplanted by less convenient abstractions because their current implementation as OS kernel threads is insufficient for meeting modern demands, and wasteful in computing resources that are particularly valuable in the cloud. Project Loom will introduce fibers as lightweight, efficient threads managed by the Java Virtual Machine, that let developers use the same simple abstraction but with better performance and lower footprint. We want to make concurrency simple(r) again! A fiber is made of two components — a continuation and a scheduler. As Java already has an excellent scheduler in the form of ForkJoinPool, fibers will be implemented by adding continuations to the JVM."

-- [0]: https://cr.openjdk.org/~rpressler/loom/Loom-Proposal.html


> putting aside the separate question of communication among threads

But that's.. 99% of the problem. If you haven't solved that then you haven't solved anything. Solving that is the point of Future systems, the m:n scheduling is just a nice cherry on top.


99% of which problem, though? IMO the whole point of Loom, and what I'm looking forward to, is code that looks like this:

  fooer.onCompletion(() -> System.out.println("yay"));
  fooer.onError(e -> System.out.println("o no"));
  fooer.foo(); // foo is some long operation

can once again simply be:

  try {
    foo();
    S.o.p("yay");
  } catch (Exception e) {
    S.o.p("o no");
  }

just like any college kid would've written 20 years ago in their CS 101 class in the chapter on exceptions.

Many multi-threaded programs today don't even have a "communication among threads" problem: they avoid it by simply not communicating between threads. They precompute work splits and then farm it out, and then gather up all the results. Consider HTTP servers or RPC handlers. A request is farmed out to them, they do a bunch of IO stuff to answer it, and then release their thread back to a pool. That's a huge use-case that gets materially improved right there. Look at what goroutines did for Go programming.
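That farm-out-and-gather pattern can be sketched with JDK 21's Executors.newVirtualThreadPerTaskExecutor; handle here is a made-up stand-in for per-request IO work:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RequestPerThread {
    // Hypothetical per-request work (e.g. a blocking backend call).
    static String handle(int id) throws InterruptedException {
        Thread.sleep(5); // blocking is fine: only the virtual thread parks
        return "resp-" + id;
    }

    // One cheap virtual thread per request; no pool sizing, no tuning.
    static List<String> serve(int requests) throws Exception {
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<String>> futures = new ArrayList<>();
            for (int i = 0; i < requests; i++) {
                final int id = i;
                futures.add(pool.submit(() -> handle(id)));
            }
            List<String> out = new ArrayList<>();
            for (Future<String> f : futures) {
                out.add(f.get()); // gather results in order
            }
            return out;
        } // close() waits for any remaining tasks (ExecutorService is AutoCloseable)
    }

    public static void main(String[] args) throws Exception {
        System.out.println(serve(50).get(49)); // prints "resp-49"
    }
}
```

Each request gets its own throwaway virtual thread, so plain blocking code scales the way callback-based code used to.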


These features become orthogonal when the right primitives are there, and so there's no need for a "system" (other than the language/platform itself). Some of the communication problem is already addressed by other Loom features (https://openjdk.org/jeps/453), and others will be addressed by future ones (like channels).


BlockingQueue is a kind of channel already, what are you thinking to add?

And to the main point, of course, Java has plenty of thread communication possibilities, Loom just makes threads cheaper.


> BlockingQueue is a kind of channel already, what are you thinking to add?

The main difference between a channel and a BlockingQueue is that a channel can be closed, i.e. it allows one or both sides to signal the other that they're done producing/consuming.
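Until a closeable channel lands, the usual BlockingQueue workaround is a sentinel ("poison pill"). A minimal sketch, where DONE is a made-up marker value standing in for "channel closed":

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PoisonPill {
    static final String DONE = "<done>"; // hand-rolled "close" signal

    // Consume until the sentinel arrives; a real channel would just close.
    static List<String> drain(BlockingQueue<String> q) throws InterruptedException {
        List<String> out = new ArrayList<>();
        for (String item; !(item = q.take()).equals(DONE); ) {
            out.add(item);
        }
        return out;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        Thread producer = Thread.startVirtualThread(() -> {
            try {
                for (int i = 0; i < 3; i++) {
                    queue.put("item-" + i);
                }
                queue.put(DONE); // signal "no more items"
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        System.out.println(drain(queue)); // prints [item-0, item-1, item-2]
        producer.join();
    }
}
```

The sentinel trick works for one producer and one consumer, but it gets awkward with many of either, which is exactly the gap a first-class closeable channel would fill.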

> And to the main point, of course, Java has plenty of thread communication possibilities, Loom just makes threads cheaper.

Definitely.


It’s a different paradigm than asynchrony and reactive streams. It’s not like you could just ignore anything. And with thread pinning you still should control what you’re doing.


I got off the java train at 8. 21 seems like a good place to get back on.


There’s been a lot of good improvements since 8 for sure. Nothing individually mind blowing, but together they make the language a lot more agile and enjoyable to work with:

- Records (immutable objects, basically named tuples)

- Switch expressions

- Pattern matching and record patterns (coming in Java 21)

- Text blocks (multiline strings)

- “var” for variable creation instead of the full type name

- Virtual threads and other Project Loom enhancements

- Improved tooling, including jshell, a REPL built into the JDK
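A small sketch pulling a few of these together (records, sealed types, switch expressions with record patterns, and var), as they stand in Java 21:

```java
public class NewJavaFeatures {
    // A record: an immutable, named tuple with equals/hashCode/toString for free.
    record Point(int x, int y) {}

    // Sealed hierarchy: the compiler knows these are the only two shapes,
    // so the switch below needs no default branch.
    sealed interface Shape permits Circle, Rect {}
    record Circle(Point center, double radius) implements Shape {}
    record Rect(Point topLeft, Point bottomRight) implements Shape {}

    static String describe(Shape s) {
        // Switch expression with nested record patterns (finalized in Java 21).
        return switch (s) {
            case Circle(Point(var x, var y), var r) -> "circle at " + x + "," + y + " r=" + r;
            case Rect r -> "rect";
        };
    }

    public static void main(String[] args) {
        var c = new Circle(new Point(1, 2), 3.0); // "var" infers the type
        System.out.println(describe(c)); // prints "circle at 1,2 r=3.0"
    }
}
```

None of it is revolutionary on its own, but destructuring a nested record inside an exhaustive switch is a big readability win over the instanceof-and-cast chains of Java 8.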


Ignoring bad technologies is a great thing. Nothing wrong with that.



