Project Loom and Structured Concurrency (javaadvent.com)
171 points by ingve on Dec 4, 2020 | 108 comments



`Async`/`await` or something like Kotlin's `suspend` are great language features for certain domains in which a developer needs to manage blocking system calls: in lower-level languages such as Rust or C, you probably don't want to pay for a lightweight "task runtime" like Go's or Erlang's. They bring not only scheduling overhead but also FFI complications.

However, for application languages that can afford a few extra niceties like garbage collection, I fail to understand why the stackless coroutine model (`suspend` in Kotlin) or `async`/`await` continues to be the developer's choice. Why do languages like Kotlin adopt these features, specifically?

Manually deciding where to yield in order to avoid blocking a kernel thread seems outside of the domain of problems that those using a _higher level_ language want to solve, surely?

The caller should decide whether to do something "in the background". And this applies to non-IO capabilities too, as sometimes pure computations are also expensive enough to warrant not blocking the current task.
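To make the point concrete, here's a minimal Java sketch of "the caller decides": the callee is an ordinary synchronous function with nothing async in its signature, and the caller chooses whether to run it inline or submit it to an executor (the names here are illustrative):

```java
import java.util.concurrent.*;

public class CallerDecides {
    // A plain synchronous function -- nothing in its signature says
    // "async"; it can be called directly or submitted to an executor.
    static long sumOfSquares(int n) {
        long total = 0;
        for (int i = 1; i <= n; i++) total += (long) i * i;
        return total;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            // Caller's choice 1: run it right here, blocking.
            long inline = sumOfSquares(1000);

            // Caller's choice 2: run the same function in the background.
            Future<Long> background = pool.submit(() -> sumOfSquares(1000));

            System.out.println(inline == background.get());  // true
        } finally {
            pool.shutdown();
        }
    }
}
```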

Go and Erlang seem to have nailed this, so I'm glad Java is following in their footsteps rather than the more questionable strategy of C# and Kotlin. (Lua's coroutines and Scheme's `call-with-current-continuation` deserve an honourable mention too.)


Kotlin runs on the JVM, so if the JVM does not support something natively, such as a task runtime, Kotlin can't have that feature.


Additionally, Kotlin also targets native. `suspend` in Kotlin was very much designed with this in mind, as it's easier to implement than something that requires an extensive runtime like Loom.

Kotlin will still support Loom on JVM and there will likely be integration with suspend/flows etc also.


That's a good point. Generally, I opt for languages that either compile away their niceties to avoid runtime hits, such as Rust being compiled to WASM, or languages that bring niceties in runtimes that they _completely own_, such as Java on the JVM.

The problem with Kotlin and the like is that they can't easily compile away their features due to inherent runtime dependencies, e.g. garbage collection, making them poorly suited to environments with a very minimal runtime like WASM, while also being at the mercy of the host language creating runtime abstractions that have mismatches with their own language's features.

Although it'd be unfair for me to say the JVM is designed only for Java; invokedynamic and non-reified generics both assist JVM targeting for non-Java languages such as Clojure.


Clojure doesn't actually use invokedynamic from what I know.


Here's the largish summary about the discussions from 9 years ago, that found the applications limited and tradeoffs painful: http://blog.fogus.me/2011/10/14/why-clojure-doesnt-need-invo...


I have a lot of experience using concurrency in Go, and for the last couple years have been at the bleeding edge of Python async. The tradeoffs between the two approaches are immense.

With the virtual thread model you have:

* No function coloring problem. This also means existing code is easier to port.

* Possibility of transparent M:N scheduling.

* Impedance mismatch with OS primitives.

* Much more sophisticated runtime.

* Problematic task cancellation.

* Lots of care still needed for non-trivial inter-task synchronization.

With the async API model you have:

* Viral asyncification (the method color problem).

* Simpler runtime.

* Obvious and safe task cancellation.

* Completely orthogonal to parallelism (actually doing more than one thing simultaneously) for good and for bad.

* Inter-task coordination is straightforward and low-overhead even for sophisticated use cases.

* Higher initial learning curve.
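For a rough Java analogue of the async-API column, CompletableFuture composition shows both sides at once: inter-task coordination is explicit and cheap, but everything along the chain has to be future-shaped (a sketch, not tied to any particular framework):

```java
import java.util.concurrent.CompletableFuture;

public class AsyncComposeDemo {
    public static void main(String[] args) {
        // Two independent async tasks.
        CompletableFuture<Integer> a = CompletableFuture.supplyAsync(() -> 2);
        CompletableFuture<Integer> b = CompletableFuture.supplyAsync(() -> 3);

        // Inter-task coordination: run when both complete, no locks needed.
        int product = a.thenCombine(b, (x, y) -> x * y).join();
        System.out.println("product=" + product);  // product=6
    }
}
```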

I'm leaning toward liking the async approach more, but that might be just because I'm deep in the middle of it. I think the biggest argument in favor of virtual threads is the automatic parallelism; that's also the biggest argument against: free running threads require more expensive synchronization and introduce nondeterminism.


* Java offers both user-mode and kernel threads. You pick at creation, and can even plug your own scheduler.

* Loom's virtual threads are completely scheduled in library code, written in Java.

* FFI that bypasses the JDK and interacts with native code that does either IO or OS-thread synchronization is extremely rare in Java.

* Cancellation is the same for both.

Also, IMO, coordination is simpler for threads than for async. Where they differ is in their choice of defaults: threads allow scheduling points anywhere except where explicitly excluded; async/await allows scheduling points nowhere except where explicitly allowed. Putting aside that some languages have both, resulting in few if any guarantees, threads' defaults are better for correct concurrency. The reason is that correctness relies on atomicity, or lack of scheduling points in critical sections. When you explicitly exclude them, none of your callees can break your correctness. When you explicitly allow them, any callee can become async and break its caller's logic. True, the type system will show you where the relevant callsites are, but it will not show you whether there is a reliance on atomicity or not.
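A small Java sketch of that atomicity argument: the check-then-act below is only correct because interleaving is explicitly excluded inside the critical section, and no callee can silently reintroduce a scheduling point there (the account scenario and the numbers are illustrative):

```java
import java.util.*;
import java.util.concurrent.*;

public class AtomicityDemo {
    private int balance = 100;

    // Critical section: the check-then-act must be atomic, so
    // interleaving is explicitly excluded with synchronized.
    synchronized boolean withdraw(int amount) {
        if (balance >= amount) {
            balance -= amount;   // no interleaving possible here
            return true;
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        AtomicityDemo account = new AtomicityDemo();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Boolean>> attempts = new ArrayList<>();
        for (int i = 0; i < 10; i++)
            attempts.add(pool.submit(() -> account.withdraw(30)));
        int successes = 0;
        for (Future<Boolean> f : attempts) if (f.get()) successes++;
        pool.shutdown();
        // Only 3 withdrawals of 30 can succeed from a balance of 100,
        // no matter how the 10 attempts interleave.
        System.out.println("successes=" + successes);  // successes=3
    }
}
```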

Async/await does, however, make sense for JavaScript, where all existing code already has an implicit assumption of atomicity, so breaking it would have broken the world. For languages that have both, async/await mostly adds a lot of complexity, although sometimes it is needed for implementation reasons.


One headache I see that no one seems to mention is that thread affinity seems a lot harder to manage in an implicit system. Many patterns use a single thread for synchronization, but oftentimes one thread is special. UI systems often have a UI thread that controls the GLContext or what have you. In something like C#'s async, you can easily schedule tasks off and onto that thread. I'm not sure how you could do this implicitly. Loom seems to keep around native Threads for this sort of thing?

I would be really interested in seeing a UI system written with Loom.

>FFI that bypasses the JDK and interacts with native code that does either IO or OS-thread synchronization is extremely rare in Java.

I would also argue it's rare because it's hard to do. This is a self-fulfilling argument. In C#, a very similar language, it's much more common to do that sort of thing because it's easier. C# is chosen for games and other apps because it can interop with native code more easily.


> I'm not sure how you could do this implicitly.

See https://news.ycombinator.com/item?id=25301246

> I would also argue it's rare because it's hard to do.

I would argue it's mostly because the Java ecosystem is so rich (often richer than "native" alternatives), there's hardly ever a need to do it. BTW, as of JDK 16, you'd be able to do FFI in pure Java: https://openjdk.java.net/jeps/389 It's not yet entirely convenient -- that would come later.


>>FFI that bypasses the JDK and interacts with native code that does either IO or OS-thread synchronization is extremely rare in Java.

>I would also argue it's rare because it's hard to do.

Is it? I recently had to write some simple C code to interact with WinAPI, and JNI was pretty straightforward to use.


Is it possible to override the scheduler from user code? i.e. if you wanted to control scheduling yourself for some reason.


Yes, and even on a per-thread basis.


Thanks! Is there an example "Hello World" executor that demonstrates the minimum functionality needed?

I would like an executor that runs all tasks on a single thread and schedules them deterministically for testing.


    var myThreadFactory = Thread.builder().virtual(Executors.newSingleThreadExecutor()).factory();
All threads created by this thread factory will be scheduled to the same kernel thread.


Does this example fully answer the question? Does this allow the tasks to be scheduled deterministically? If I'm very careful and write the tasks such that they make blocking calls at specific locations, then yes. Otherwise, is there any feedback to inform me that tasks are switching context at places I didn't expect? Is it possible to define a custom scheduler that can simply print debug messages at every context switch?


It answers the question with a "hello, world." To do more sophisticated stuff, like what you want, you'll need to replace or wrap the standard single-thread Executor with your own Executor. There is exactly one method you need to implement. For example:

    Executor ste = Executors.newSingleThreadExecutor();
    Executor myExecutor = task -> {
      if (task instanceof Thread.VirtualThreadTask vtt) System.out.println("Scheduling " + vtt.thread() + " on " + Thread.currentThread());
      ste.execute(task);
      if (task instanceof Thread.VirtualThreadTask vtt) System.out.println("Descheduled " + vtt.thread() + " from " + Thread.currentThread());
    };
    var myThreadFactory = Thread.builder().virtual(myExecutor).factory();


I don't think "task cancellation" is quite the major difference you think. If you model it as thread A wanting to cancel thread B: with threading, A runs and cancels B, though B may need some time to catch up; the async world has the problem of whether A gets to run at all to cancel B, if B is having exactly the kind of problem that requires cancellation. It's "obvious" and "safe" until it doesn't happen at all.

This is a pervasive problem with the async/await model. As the code size increases, the probability of something failing to yield when it should and blocking everything else continually goes up, and then the whole model (correctness, practicality, and all) just goes out the window. While the probability is small for small programs, and the scaling factor often isn't that large, it is still a thing that happens. Entire OSes used to work that way, with the OS and user processes cooperatively yielding, and what killed that model is this problem.

Also, I'm writing a lot of code lately where I can peg multiple cores at a time, with a relatively efficient (if not maximally efficient) language like Go; having to write it instead as a whole bunch of separately running OS processes because my runtime can only run on one core at a time is a non-starter, and "async/await" basically turns into a threading system if you try to run it on multiple cores in one system anyhow.

These two fatal-for-me flaws mean it's a non-starter for a lot of the work I'm doing anyhow, regardless of any other putative advantages.

(As I mentioned, I'm using Go, but if you want to see a runtime that really has the asynchronous exceptions thing figured out, go look at Erlang. Having a thread run off into never-never-land and eating a full CPU isn't fun, but being able to log in to your running system, figure out which it is using a REPL, kill just that thread, and reload its code before restarting it to fix the problem, all without taking down the rest of your system is not an experience most of you have had. But it can be done!)


Async, and cooperative multitasking in general, requires all members participate in the contract: you shall not block and you shall not go too long until yielding. Once a piece of code violates that, all bets are off. Python has explicit debugging mechanisms to help a developer detect the latter.

Cancel safety is less about killing an out-of-control task, and more about making sure the state after cancelling a task is consistent.


> you shall not block and you shall not go too long until yielding

Aren't those effectively the same?

I think the only contract is "you shall not go too long until yielding". Or if you're designing for performance, not for responsiveness, it might be: "you shall not hold onto CPU and IO resources that you're not using".


>you shall not block and you shall not go too long until yielding

Eh... you as the caller have control on how a task is run. If you want it on a different thread and non-blocking, or simply deferred, you can. You have a lot of control.

With an implicit system you have much less control. Loom seems to solve this by letting you explicitly schedule tasks anyhow?


> but if you want to see a runtime that really has the asynchronous exceptions thing figured out, go look at Erlang.

The Erlang VM does indeed have async exceptions and resource management figured out. Usually you can just kill an Erlang process and you don't have to clean up after its open sockets, file descriptors, etc.

It's also possible to hook the C FFI system to take full advantage of that: https://youtu.be/l848TOmI6LI

(Disclaimer: self promotion)


Cancellation is a little tricky. If things can cancel at any point, then it's impossible to write safe/correct code. Async has the advantage that the await calls are natural sync points, so it's (probably) safe to cancel there. But historical experience has also shown that people will forget to yield when they should, and it will lead to cooperation problems.
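In Java terms, blocking calls already behave like the "natural sync points" described above: interrupting a thread parked in a blocking call raises InterruptedException right there, which is where cleanup can run (a minimal sketch):

```java
public class InterruptPointDemo {
    public static void main(String[] args) throws Exception {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(10_000);  // a blocking call acting as a cancellation point
                System.out.println("finished normally");
            } catch (InterruptedException e) {
                // Cancellation is delivered here, at a well-defined point.
                System.out.println("cancelled at blocking call");
            }
        });
        worker.start();
        Thread.sleep(100);   // let the worker reach the blocking call
        worker.interrupt();  // request cancellation
        worker.join();
    }
}
```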

I think the best approach has to look something like Go's, but perhaps a bit more structured (dynamic scoping[1] might help perhaps with task nurseries[2]). Unless you're writing extremely low level code, you want your language runtime to intercept all syscalls and figure out the async story for you. The language should handle making sure that the M:N mapping works out, no one opens a socket the wrong way etc. Then for you as the program writer, your responsibility is just setting explicit cancellation points as part of the general error handling approach. It's still not perfect, but I think that would be the next evolution from what exists today.

[1] https://blog.merovius.de/2017/08/14/why-context-value-matter...

[2] https://vorpus.org/blog/notes-on-structured-concurrency-or-g...


If Loom isn't adding cancellation tokens to the core library, then it's going to be a major difference. That said, the Java way was already for runnables to cancel themselves with some kind of volatile cancel bool, so I expect that's all we'll get.
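The "volatile cancel bool" pattern referred to here looks roughly like this in plain Java (a sketch; real code would also interrupt the thread so blocking calls wake up):

```java
public class CancellableTask implements Runnable {
    // Written by the cancelling thread, read by the worker.
    private volatile boolean cancelled = false;

    public void cancel() { cancelled = true; }

    @Override
    public void run() {
        // Check the flag at each safe point; a blocking call in the loop
        // would additionally need Thread.interrupt() to be woken up.
        while (!cancelled && !Thread.currentThread().isInterrupted()) {
            // ... one unit of work per iteration ...
        }
        System.out.println("stopped cooperatively");
    }

    public static void main(String[] args) throws Exception {
        CancellableTask task = new CancellableTask();
        Thread t = new Thread(task);
        t.start();
        Thread.sleep(50);   // let it run briefly
        task.cancel();      // cooperative cancellation request
        t.join(1000);
        System.out.println("joined=" + !t.isAlive());  // joined=true
    }
}
```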

From the article: >Clearly the intent is also to cancel the child thread. And this is exactly what happens.

I find this to be a bold choice. In C# you can detach children and such. Should be interesting to see if this gets added later.


I've had similar experiences, but I don't much care for async Python. In particular, it's way too easy to block the event loop either by accidentally calling some function that, perhaps transitively, does blocking I/O (this could be remedied if there was no sync IO) or simply by calling a function which is unexpectedly CPU-bound. And when this happens, other requests start failing unrelated to the request that is causing the problem, so you go on this wild goose chase to debug. Sync I/O is also a much nicer, more ergonomic interface than async IMO. And then there are the type error problems--it's way too easy to forget to `await` something. Mypy could help with this, but it's still very, very immature. Lastly, last I checked the debugger couldn't cope with async syntax--this is obviously not criticizing the async approach in general, but I wanted to round out my complaining about async Python.

I don't mind working with goroutines personally--I use them sparingly, only when I really need concurrency or parallelism. This takes some discipline (e.g., not to go crazy with goroutines and/or channels) and a bit of experience (in the presence of multiple goroutines, what needs to be locked, when to use channels, etc), so if you're relatively new and very impatient or undisciplined you probably won't have a good time (which isn't to say that if you dislike goroutines you must be a novice or undisciplined!). But for me it's nearly an ideal experience.


I'm not sure I really think of function coloring as a "problem" ...

facebook is experimenting with auto differentiation for Kotlin and looks like it's adding a new "differentiable" function color -- https://ai.facebook.com/blog/paving-the-way-for-software-20-...

It looks very easy to reason about and use to me ... and i personally find async a similarly useful marker ... It's about being able to push constraints from caller arbitrarily far down the callee stack -- which is really not something that types support at all but provides for a very high confidence variety of constraint -- and high confidence constraints seem to me like they convey a ton of information.

I've been wondering actually whether "function colors" might actually just be a good way to create a whole variety of strong statically enforceable constraints for functions. It seems like they lead to very good and simple programmer mental models ...

Are there languages that offer "user definable" function colors? I can think of a lot of application domains that would be much better served by these kinds of constraints than oo or other type-centric approaches ... it would be ridiculously useful to be able to mark a function with the "MyDomainBusinessLogic" color and get assurances that such a method can only call other functions annotated with that color ... would provide an easy way to iterate on app specific abstractions provide compiler assistance for the communication of layering intent -- rather than a bunch of poorly specified words in documents that try to communicate layering intent to other developers -- in language that is either sufficiently precise as to be incomprehensible -- or sufficiently vague as to be subject to (mis)interpretation ...


There was a series of essays about esoteric language features posted here a few weeks ago, and one of them was exactly what you're talking about. It was a functional language, with the ability to mark functions as involving I/O, non-terminating, or (I think) arbitrary custom "colors."

It was an academic language, but very interesting. Sadly, I don't remember the name. Maybe somebody else can post the link.



Wow, neat. This has to be one of the most promising new languages I've seen in a while.


That’s the one! Thanks.


A couple of excellent articles on the different ways of thinking about the choice (of when to yield) vs. color (virality of choice) problem:

* Choice is bad: https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...

* Choice is good: https://glyph.twistedmatrix.com/2014/02/unyielding.html


> It does nothing for you if you have computationally intensive tasks and want to keep all processor cores busy.

I would argue this isn't concurrency at all (the job of juggling mostly independent tasks, and scheduling them to a relatively small number of processing units), but parallelism (the job of performing a single computational task faster by employing multiple processing units), and exactly the job of parallel streams.

> It doesn’t help you with user interfaces that use a single event thread.

It might. Loom allows you to plug in your own scheduler, and it is a one-liner to schedule virtual threads on the UI thread:

    var uiVirtualThreadFactory = Thread.builder().virtual(java.awt.EventQueue::invokeLater).factory();
All threads created by this factory will be virtual threads that are always "carried" by the UI OS thread. This won't currently work because each of those threads will have its own identity, and various tests in the UI code check that the current thread is actually the UI thread and not any thread that is mapped to the same OS thread. Changing these tests is something the UI team is looking into.


>All threads created by this factory will be virtual threads that are always "carried" by the UI OS thread

Does this actually solve the problem? I don't see it. We want to interweave foreground and background work. Sometimes that means blocking work will yield; sometimes it should not yield, because conceptually several tasks should retain exclusive control of that thread. You might want some IO task in the background, but you need a block of OpenGL tasks to retain control.

I just don't see how you can do this implicitly in a way that's cleaner than async/await. It seems like posting tasks to this thread factory or that will get the job done but is that an improvement?

It sounds like for now this stuff will still be using the current model of posting unyielding runnables to a thread. That's fine I guess. Loom still seems very cool, it just doesn't cover the cases I deal with a lot more often.


It does (or will do, once the checks in the UI code are fixed) exactly what you want it to do. All computation will be done on the UI OS thread. All blocking operations will release it to do other things so it remains responsive. If you want to compute something outside it -- just spawn another thread that's not mapped to it.


I feel like I must not be communicating the issue properly.

Sometimes, like with several GL calls or layout operations, you want the UI to be blocked, because that is the synchronization model OpenGL requires. Single-thread ordering is not enough. We need specific exclusive scheduling of critical sections, and I don't see how this system can understand that without just as much or more work than async/await styles.

Perhaps we will get some new way to render UI out of this. I'm excited to see what the UI team comes up with once it all works. However, my pessimistic assumption is that we will simply stick with native non-preemptive threading.


> I don't see how this system can understand that without just as much or more work as async/await styles

I didn't say it was less work; probably just as much. The benefits over async/await are elsewhere: in the tooling support and in the lack of split APIs/programming models.


For me the real advantage is not on performance but on the programming model. I have been tinkering with Loom (and clojure) and the idea of "just" calling some library without worrying about blocking is refreshing. That means that for the most of it, you can write your code without worrying too much about some kind of callbacks or async support from your library and it just works.

Of course, for those with extreme performance requirements, they will probably have their own custom scheduler and concurrency/parallelism mechanisms but for the vast majority of jvm users out there I think Loom will be a great thing. If Loom integrates with GraalVM/native-image it would be even nicer.


I think the vast majority of JVM users won't even need Loom. OS threads perform well enough for most use cases. You can go a very long way with just a ThreadPoolExecutor.
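For scale, a plain fixed-size pool really does handle a large batch of small tasks with no special machinery (a minimal sketch; the workload is illustrative):

```java
import java.util.*;
import java.util.concurrent.*;

public class PlainPoolDemo {
    public static void main(String[] args) throws Exception {
        // 8 OS threads servicing 100 small tasks -- "good enough"
        // for a great many workloads.
        ExecutorService pool = Executors.newFixedThreadPool(8);
        List<Future<Integer>> results = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            final int n = i;
            results.add(pool.submit(() -> n * n));  // a small unit of work
        }
        int sum = 0;
        for (Future<Integer> f : results) sum += f.get();
        pool.shutdown();
        System.out.println("sum=" + sum);  // sum of squares 0..99 = 328350
    }
}
```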


No one “needs” Loom; you can always write in callback oriented style. The point is it will free you from that.


For those that care about numbers: Loom targets ~200 bytes of memory overhead per virtual thread, and ~100 ns per context switch between virtual threads.


The article seems to assume you know what Project Loom is. (Not to be confused with Google's Project Loon, the balloon thing.)

From https://wiki.openjdk.java.net/display/loom/Main, it's an OpenJDK project:

> Project Loom is intended to explore, incubate and deliver Java VM features and APIs built on top of them for the purpose of supporting easy-to-use, high-throughput lightweight concurrency and new programming models on the Java platform.

A bit more history/explanation here:

http://cr.openjdk.java.net/~rpressler/loom/loom/sol1_part1.h...


To summarize: up till now, Java Threads have been 1:1 with OS threads. They’re limited to a few thousand per JVM. This project moves to an M:N threading model but retains the Thread API. It allows for millions of threads per JVM and async/await style performance of the existing synchronous Java libraries without language changes and with minimal changes to the standard library.


Originally (about 20 years ago) java threads were M:N I think. How is this different? If I had to guess, they are not opaque to the VM which has more freedom to optimize them.


This is mentioned in the article: the old green threads model was not blocking-aware; if a green thread scheduled on an OS thread issued a blocking call (say, a listen on a socket), then the whole OS thread was blocked.

With Project Loom, if a virtual thread executes a blocking call, the virtual thread is suspended and the OS thread is free to execute another virtual thread.


So how does it actually execute the system call under the hood? Using some kind of background OS thread pool?


From what I can tell, async system calls are used whenever possible. A blocking call on a socket doesn't make a blocking system call, thus permitting the carrier OS thread to go do something else. As for file I/O, things are a bit messy. Older linux kernels don't support "true" async file I/O, and the Java NIO async file channels do use thread pools to emulate async behavior in that case.
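The async file channel mentioned here is java.nio.channels.AsynchronousFileChannel; a minimal sketch of its Future-returning read path (the temp-file name and contents are illustrative):

```java
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.*;
import java.util.concurrent.Future;

public class AsyncFileDemo {
    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("loom-demo", ".txt");
        Files.writeString(tmp, "hello, async file I/O");

        try (AsynchronousFileChannel ch =
                 AsynchronousFileChannel.open(tmp, StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate(64);
            // On kernels without true async file I/O, this is serviced
            // by a hidden thread pool inside the JDK.
            Future<Integer> read = ch.read(buf, 0);  // returns immediately
            int bytes = read.get();                  // block only when we need the result
            buf.flip();
            byte[] data = new byte[bytes];
            buf.get(data);
            System.out.println(new String(data));
        } finally {
            Files.delete(tmp);
        }
    }
}
```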


Green threads were only available on Solaris. They did not support more than one CPU. They were not particularly performant either, and on top of that had issues with calling native code.


There was a before time when they weren't 1:1


Thank you, I totally clicked on this thinking how could Google Loon have anything to do with this



Timely!

Episode 8 “Project Loom” with Ron Pressler

https://inside.java/2020/11/24/podcast-008/


I don't quite get the point of the executor service with virtual threads. If they are really cheap to create then why not just create them as required? It's been a while since I programmed in Java though, am I missing something? Edit: Ah - I read the rest of the article. Using it as a synchronisation primitive makes sense I guess, if a bit clunky.


In the examples with structured concurrency, the point of using an executor service is not to reuse the threads but to control their termination. If you read Nathaniel J. Smith's primer [1] on structured concurrency, the ExecutorService in the examples acts as the nursery. Loom is just being "lazy" and reusing ExecutorService for something it wasn't originally intended to do. Earlier versions had a specific class called FiberScope [2]. Whether or not we will see more specialised classes for this in the future, I don't know.

[1] https://vorpus.org/blog/notes-on-structured-concurrency-or-g...

[2] https://www.javaadvent.com/2019/12/project-loom.html
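The nursery behavior described above can be approximated today with nothing but standard ExecutorService calls: the enclosing block refuses to exit until every child has terminated, which is the core structured-concurrency rule (a sketch; error propagation and cancellation are omitted):

```java
import java.util.concurrent.*;

public class NurseryDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService scope = Executors.newCachedThreadPool();
        StringBuffer log = new StringBuffer();  // thread-safe appends
        try {
            scope.submit(() -> log.append("child-1 done; "));
            scope.submit(() -> log.append("child-2 done; "));
        } finally {
            // The "scope exit": refuse new tasks, then wait for children.
            scope.shutdown();
            scope.awaitTermination(5, TimeUnit.SECONDS);
        }
        // Both children are guaranteed finished here.
        System.out.println(log + "parent continues");
    }
}
```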


It is an unfortunate abuse of an existing facility for something quite different.


The API is not yet final, but I wonder why you think it's "an abuse of an existing facility for something quite different," and whether this could just be a matter of a habit rather than purpose.

Take a look at the description of ExecutorService (https://docs.oracle.com/en/java/javase/15/docs/api/java.base...) and Executors (https://docs.oracle.com/en/java/javase/15/docs/api/java.base...). I think we're using them exactly as intended: those are APIs meant to manage and control task's execution and lifetime. I'm interested to hear why this feels wrong to you.


I think the common mental model of an ExecutorService for most people is something static and heavyweight and whose main task is to avoid expensive thread creation. When used with SC they are suddenly created and thrown away without a blink. And are mostly tasked with coordination. So people will probably need some time to adjust.


Some adjustment to the fact that threads are not necessarily costly resources will be required, but would putting the newThreadExecutor (and the specialised newVirtualThreadExecutor) method in a class other than Executors, where it is one among many other methods, and perhaps have it return a subclass/interface of ExecutorService make the adjustment easier? This is something we're seriously considering (especially as that particular ExecutorService might have other implications, such as how thread dumps are produced).


I appreciate the cleverness of just adding AutoCloseable to ExecutorService. But I definitely think returning a specific subclass/interface of ExecutorService would be a great help. I think, at the expense of an extra class, something like this is a lot easier for people to wrap their head around:

  try (Nursery n = Nursery.spawn()) {
    n.submit(() -> foo());
    n.submit(() -> bar());
  }

  try (Nursery n = Nursery.spawnWithDeadline(Instant)) {
    ...
    try (Nursery nx = n.spawn()) {
      nx.submit(() -> foo());
      nx.submit(() -> bar());
    }
    ...
  }
...


If I understand correctly this is to Java what Gevent/Eventlet are to Python. Asynchronous execution without having to write asynchronous code.

I've been doing this in Python for a long time and I love it, even though it can lead to some hard-to-debug issues (due to monkeypatching).


I am still not comfortable enough with this concept to answer this question myself, but will this, by default, lead to speed ups and/or reduced resource consumption in a) application server like tomcat and b) web frameworks like Spring? Assuming it's implemented...


It depends a lot on how the service handles requests. If it takes the one thread per request model, and those requests are mostly bound by blocking calls like IO, then replacing those OS level threads with virtual threads will almost certainly see a reduction in resources (as virtual threads are smaller) and potentially more consistent response times (because scheduling the correct thread is easier at the JVM level).

However if your service has been written in an async style, or you are mostly CPU bound, then you aren't likely to see a change.

Our hope is that by making simple blocking code perform better you won't have to spend your time converting code to an async style to scale your services.


Probably no, but that's not the point. These server frameworks have complex code that helps balance load across threads.

Loom may make such programs simpler to write, but will not automatically give a boost to already optimized code.


Programming challenge in golang: Create a persistent tcp client that can connect to a server, read responses… and be disconnected via context.WithCancel().


I'm not familiar enough with Go to understand why this is a challenge. What point are you trying to illustrate with the challenge? Is this easy or hard, and how does this compare to Loom and Structured Concurrency?


Many people in this thread are talking about structured concurrency in other languages. I brought up a fun one for golang that, as far as I can tell, is a 3-year-old open issue [0] that I've recently bumped into.

Maybe this is a fun challenge for you to become more familiar with golang?

[0] https://github.com/golang/go/issues/20280


Sounds like a useful building block for a better java-based actor library.


async/await, Project Loom fiber, Warehouse/Workshop Model

https://github.com/linpengcheng/PurefunctionPipelineDataflow...


Since the article goes out of its way to not mention Kotlin, I'll do it for them, since this is both lame and more than a bit disingenuous. Arguably, Kotlin co-routines (and the Flow API) provide a very nice implementation of the exact same concepts on the JVM. As far as I know, the Loom integration is already planned and probably implemented to a large degree. Mostly doing that should be straightforward, as this pretty much just maps 1 to 1 to things like suspend functions, co-routine scopes, etc.

That is a different way of saying that Oracle is doing the right things with Loom. Although bolting this onto the Thread API without cleaning that up is probably an open invitation for hordes of people to do the wrong things. That API already provides plenty of well documented ways to take shots at your feet. IMHO it's a mistake to pretend it's all the same.

The main difference with Kotlin co-routines is that the Kotlin implementation is multiplatform and also has implementations that work on IOS (native), in a browser etc. Additionally, you get to depend on nice language features like internal DSL support, the suspend keyword, etc, that make writing code a lot less tedious and error prone. But it's the same kind of code with the same kind of concepts. Finally, it also has lots of higher level primitives. Flow is a recent addition that allows for building properly reactive applications that sits on top of this.

So, to answer the obvious question will this replace/deprecate co-routines: no, this will have little to no impact as it will be trivial to support the low level primitives Loom provides just like they already work seamlessly with other implementations like rxjava, spring reactor, javascript promises, etc. They'll support it because it probably provides some performance benefits to use Loom if it's available on the platform but it should not impact how you use co-routines. The same co-routine code you write today will just work on top of Loom once that is available and implemented.


You're getting downvoted because of your snarky opening statement.

But I do think it's important/relevant to compare virtual threads to Kotlin coroutines.

I agree with your point that tacking all of this onto the existing (flawed) Thread API is a risky move. I understand the reasoning on both sides, but I'm not usually a huge "backwards compatible at all costs" or "don't make people learn new things" proponent on anything. So that's my bias.

I think you're painting the `suspend` keyword a bit rosy, though. The fact that Kotlin has colored functions is a huge pain in the ass. You have to design different APIs sometimes to account for a "suspend version" and a "non-suspend version".

The idea with Loom (like goroutines, which is the first green thread model I've used) is that async stuff is so cheap that you can almost pretend it doesn't even matter if something calls a coroutine. I'm not sure if that's the best solution, though. One advantage that colored functions do have is that you see it and "know" that the thing involves expensive and/or blocking work. With coroutines, how do you know if calling a function will slow your current thread down as it waits for the results? That's a question we could ask Go devs today, I suppose.

I agree with your prediction that Kotlin's coroutines might just sit on top of virtual threads on the JVM in the future.


> I agree with your point that tacking all of this onto the existing (flawed) Thread API is a risky move.

This is not what Loom does, though. Virtual threads are not using the thread API. They are (Java) threads; no more and no less than today's threads. Just as people don't normally use the java.lang.Thread API directly to use today's threads, there's no reason why they should use it with virtual threads.
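To make the "no new API" point concrete, here's a rough sketch using the Loom preview API (method names like `newVirtualThreadPerTaskExecutor` and `isVirtual` are from recent preview builds and may still change):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class VirtualThreadDemo {
    // Same ExecutorService interface Java code uses today; only the
    // factory method changes. The task below runs on a virtual thread,
    // but nothing about the calling code needs to know that.
    static boolean runsOnVirtualThread() throws Exception {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            Future<Boolean> task = executor.submit(() -> Thread.currentThread().isVirtual());
            return task.get();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("virtual: " + runsOnVirtualThread());
    }
}
```

The point being: the submitted lambda is ordinary blocking-style code, and you'd write it identically for a platform-thread executor.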

> One advantage that colored functions do have is that you see it and "know" that the thing involves expensive and/or blocking work.

It gives you the illusion of knowing something that you don't really. The OS, or the Java runtime, can and do pause your thread at any point for durations that are between orders of magnitude shorter and orders of magnitude longer than some blocking operations. There is no useful semantic knowledge you can extract from knowing something is "blocking" to the point that it's a meaningless designation. It does mean something in "single-threaded" languages like JavaScript, or when programming hard realtime software, but not in ordinary Java. You're wasting a syntactic "colour" on zero bits of information.


> This is not what Loom does, though. Virtual threads are not using the thread API. They are (Java) threads; no more and no less than today's threads. Just as people don't normally use the java.lang.Thread API directly to use today's threads, there's no reason why they should use it with virtual threads.

Right. That's fair. They are threads, but it's just that now you have two "kinds" of Thread, where before you had one (AFAIK). I like the analogy to "virtual memory" of an OS, but can applications ask for "real RAM" instead of "virtual RAM"? I don't think so, but I could be wrong. So having virtual threads and "raw" threads under the same class has pros and cons, IMO.

> It gives you the illusion of knowing something that you don't really. The OS, or the Java runtime, can and do pause your thread at any point for durations that are between orders of magnitude shorter and orders of magnitude longer than blocking operations. There is no useful semantic knowledge you can extract from knowing something is "blocking" to the point that it's a meaningless designation. It does mean something in "single threaded" languages like JavaScript, but not in Java.

That's true. But it's still a signal from the author of the function, the same way a type is. If a function makes a network call, you can be fairly sure that it will usually take longer to return than a function that is just flipping some bits around (even a fairly large amount of bits). Or do you still disagree? Would you say that if I'm writing JVM code, I really shouldn't worry about what functions make network and DB calls or what thread they're on because it really doesn't matter? Even if I have a UI that I'd like to keep responsive?


> So having virtual threads and "raw" threads under the same class has pros and cons, IMO.

That's one way to think about it. Another is that Java never gives you "OS threads" it always gives you Java threads, an abstraction with multiple possible implementations. One implementation is no more real or raw than the other (in fact, you could even theoretically make virtual threads the carriers for other virtual threads -- a thread is a thread, after all -- but we explicitly blocked that because it's not useful). There is no real difference between that and ArrayList and LinkedList both implementing the same List interface. They're both just as real, but they have different footprint and performance (the class hierarchy for threads is slightly different, but for uninteresting technical reasons).

> Or do you still disagree?

I still disagree. That network call might take 1ms, and that bit fiddling might trigger a GC collection that takes 10 times that or more. Moreover, neither Kotlin nor C# mark long subroutines with a different colour, and they don't even mark blocking calls with a different colour, just the flavour of them that's to be used with coroutines. The real reason that colour is necessary is because of the way the feature is implemented.

Originally, that colour meant to signify something else: nondeterminism in mostly-deterministic languages like Haskell, and it is also important in JavaScript. Trying to retroactively find a useful meaning for it in Java is an excuse.


Fair point about the abstraction level of a Java thread vs an OS thread.

In light of both things you just wrote, let me ask you this: why does Java give us any choice on thread implementation? Why not have everything be a green thread from now on?

If Java threads are not OS threads, can be paused for any amount of time, and there's no reason that network calls or db calls or file IO should be treated any differently, then I'm not sure why the old Java Threads shouldn't be deprecated.

Can you shine some light on that? After Loom drops, when would I ever want something other than a green thread?


The first part of the answer is the same as that for a similar question we've been asked about LinkedList: we don't deprecate things that are heavily used, however useless or superseded by something else, unless they are very harmful. Deprecation in Java does not mean "unrecommended" but "absolutely do not use this if you want your program to continue working on future versions." Not only is it a compile-time warning, but JDK tooling even checks for uses of deprecated APIs in binaries and warns about them. Ideally, deprecated usages should break people's builds. In other words, deprecation is a big deal and not taken lightly. We might need another standardised term for "unrecommended" or "superseded".

The second part of the answer is that there are still good use cases for heavyweight threads, that are backed by one OS thread. For example, as carriers for virtual threads or parallel streams, i.e. as approximations of CPU cores, and also for cases where FFI is used for IO or some other native interaction. This is very rare in Java, but very rare could mean "used only in tens of thousands of programs rather than tens of millions".


> The second part of the answer is that there are still good uses cases for heavyweight threads, that are backed by one OS thread.

Hold the phone. Didn't you just say a few comments up that I shouldn't think of a "Java thread" like an "OS thread"?

If I use a Thread, today, how do I know if it's corresponding to an OS thread? I understand that it will, theoretically, depend on the platform on which the JVM is running. But on a non-exotic platform (x86/ARM Windows, Linux, macOS, Android, etc), does one Java Thread map to one OS Thread or not?


I meant that a Java thread, like the List interface, is an abstraction with multiple implementations for you to choose from.

In OpenJDK (i.e. the Oracle implementation of Java) today, a Java thread is implemented as a wrapper of an OS thread; I believe OpenJ9, the IBM implementation, does the same. But the Java specification does not require it. Loom might change the specification to require it, or leave the implementation of "platform threads" (i.e. non-virtual threads) up to the specific Java implementation.


Are virtual threads preemptable? If not, then one use for OS threads is when running a bunch of compute-intensive tasks and you don't want to worry about stalling everything else. I suppose in that case you could just use a separate executor for those expensive tasks. I wonder how many devs will spawn tasks using the default executor and then wonder why things aren't working out so well? Will there be tooling to help identify such issues?


> Are virtual threads preemptable?

Yes, but the preemption operation is not currently publicly exposed. We're considering whether and how to expose it.

> I wonder how many devs will spawn tasks using the default executor and then wonder why things aren't working out so well? Will there be tooling to help identify such issues?

What those issues would actually be, in practice, is still unclear, hence our reluctance to expose forced preemption. People rely on OS time-sharing (what you call preemption) inside applications far less than they think. No scheduling algorithm can make a program that requires more processing resources than available to run well.


They are threads in the same way that Java's original green threads were threads and in exactly the same way that co-routines are user-scheduled thread-like objects. They are not proper OS threads scheduled by the OS. Kotlin's co-routine implementation does something very similar and will just use a Loom thread when that becomes available.

It will also use executors with multiple threads if you use the right dispatcher.

I'm not sure I get your point about colored functions. Under the hood the compiler generates what is basically a callback structure not unlike what you get in javascript (i.e. a promise). It gains features like callback hierarchies, cancellation, etc. Loom fixes this by forcing you to wrap things with a Thread. Same kind of mental overhead.


> I'm not sure I get your point about colored functions. Under the hood the compiler generates what is basically a callback structure not unlike what you get in javascript (i.e. a promise). It gains features like callback hierarchies, cancellation, etc. Loom fixes this by forcing you to wrap things with a Thread. Same kind of mental overhead.

The difference is that I'm not allowed to pull apart the callback structure that colored functions (in Kotlin) cause the compiler to make. Instead, I just have a colored function I have to work with. And it's incompatible (in one direction) with functions of the wrong color.

Colored functions don't compose well. The green thread approach allows you to call anything from anywhere and compose whatever you want. When you choose to make it asynchronous, then you wrap it in a green thread and fire it off. That's pretty different than having to juggle differently colored functions AND still needing to fire them off via whatever coroutine launcher.
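To illustrate the composition point with a minimal Java sketch: the blocking function itself is "colorless", and the caller alone decides whether to run it asynchronously (the function name and values here are made up for illustration):

```java
import java.util.concurrent.CompletableFuture;

public class ComposeDemo {
    // An ordinary, "colorless" blocking function: callable from anywhere,
    // no suspend/async keyword splitting the API in two.
    static int fetchPrice() {
        try {
            Thread.sleep(10); // stands in for a blocking network call
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return 42;
    }

    public static void main(String[] args) {
        // Synchronous use: just call it.
        int now = fetchPrice();

        // Asynchronous use: the *caller* decides, by wrapping the very same
        // function. Under Loom, the threads behind this could be virtual.
        CompletableFuture<Integer> later = CompletableFuture.supplyAsync(ComposeDemo::fetchPrice);

        System.out.println(now + later.join()); // 84
    }
}
```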


It's like saying the article goes out of its way not to mention Scala/Haskell's IO type. Syntactic coroutines, monadic IO, and threads are different constructs, although they are different ways to address a similar problem -- expressing sequential (and, in contrast, parallel) composition. Virtual threads are Java threads; there's nothing "bolted". Syntactic coroutines are a kind of syntactic code-unit similar to subroutines.

Which one you prefer as a coding style is a matter of taste, but threads have some objective advantages over syntactic coroutines that go beyond syntax. For one, they don't require a split API (C# and Kotlin have two copies of their synchronization and IO APIs that do the same thing but are intended for different kinds of units, subroutines or coroutines); for another, they seamlessly integrate with the platform and its tooling, allowing use of exceptions, debuggers and profilers with little or no change to those tools. The Java platform -- the standard library, the VM, and its built-in profiling and debugging mechanisms -- has been designed around threads.

BTW, Java's strategy for targeting platforms like iOS and, later, the browser, is through AOT compilation of Java bytecode using things like Native Image (e.g. https://gluonhq.com/java-on-ios-for-real/). This allows you to employ the standard library as well. Kotlin's approach is different, and requires different libraries when targeting different platforms.


> Scala/Haskell's IO type

Since you've brought out Haskell's IO and Scala libraries' IOs (Monix, ZIO, Cats Effect), or F#'s Async, I think it's worthwhile to point out how they're different from the async-await approach that's in C#/Kotlin/Rust/etc.

They both require "special" handling -- in C# it's the `async`/`await` syntax, in Scala it's the flatMap function or for-comprehension -- there they are similar. But their meaning is different. IO/Task in Scala doesn't represent a possibly started and under-way computation; it represents a "dead" program, yet to be started, a mere value. And it has all the advantages that mere values have, like refactoring (extract variable, ...) or restarting in case of failure and so on. Pass it into a method, return it from a method, store it in a data structure/collection, create `IO[IO[X]]`, whatever you want, just like you would with `Option[X]` or `List[X]`. In Scala, you have to differentiate between `A => B` and `A => IO[B]`, because `B` and `IO[B]` are different, but both are still values. Your program then ends up being this one big IO/Task value, which is then executed "at the end of the world". These pictures illustrate it quite well:

https://twitter.com/impurepics/status/1182946618280153094

https://twitter.com/impurepics/status/1180064851219144704

On the other hand, async-await has none of the benefits, only downsides. You get functions of two colors, but no benefit in return. It's justifiable in Rust, because Rust aims for zero-cost abstractions. But for Kotlin/C#, it's a sad choice.

The Loom approach for Java is a reasonable one. No async-await shenanigans, no funny FP/Haskell/IO business. You just use threads for concurrency as God intended them and you can have gazillions of them, because they are M:N. And I respect that, even though I'm partial to IO/Task for the reasons outlined above.
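A sketch of the "gazillions of threads" style, assuming the Loom preview API (`Thread.ofVirtual()` is the preview name and may still change):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class ManyThreadsDemo {
    // Spawn n tasks, one virtual thread each, and wait for all of them.
    // With M:N scheduling, each thread costs roughly a small heap object
    // rather than an OS thread's stack, so large n is feasible.
    static int runAll(int n) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(n);
        AtomicInteger finished = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            Thread.ofVirtual().start(() -> {
                finished.incrementAndGet();
                done.countDown();
            });
        }
        done.await();
        return finished.get();
    }

    public static void main(String[] args) throws InterruptedException {
        int n = 100_000; // spawning this many OS threads would be a bad idea
        System.out.println(runAll(n) + " virtual threads finished");
    }
}
```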


> Loom approach for Java is a reasonable one. No async-await shenanigans, no funny FP/Haskell/IO business.

There's another important piece of the puzzle (putting aside any debate over the value of values): software is much more than syntax. When I think of some software construct, I don't just think how I can express and manipulate it in code, but also how I can express it and manipulate it while it is running and after it's run, i.e. present it in a profiler or assist troubleshooting when something goes wrong. In other words, how to make it a traceable, contextual entity.

The "process" construct on the Java platform -- regardless of the frontend language used to write the program -- is not that of an IO type, nor is it a syntactic coroutine; it is the thread. The JVM constructs stack traces for threads; it emits profiling, monitoring and debugging events for threads; its semantics of single-stepping follow threads; its GC heuristics are or can be designed around threads and even the JIT compilers perform optimisations that are implicitly based on threads [1]. In other words, the syntactic construct is just part of the problem, and the goal was not just to find a good fit for the language, but also for all of the other important aspects of software. Adopting any other alternative would have required introducing that new concept into all layers of the platform as well. We're building a platform, not just a language.

Just to give you an example of a design issue we're struggling with now because even the mere number and duration of threads changes some of their runtime aspects: how do we perform thread dumps in a way that would tell the user what they want to know, i.e. what the different parts of their application are currently doing? Merely dumping a million stack traces probably wouldn't do the job, even if we grouped and deduped them. If we were to introduce a new kind of process, such "program dumps" would be the least of our worries; just the coordination with debugger/profilers/APM vendors would be a multi-year project.

> https://twitter.com/impurepics/status/1180064851219144704

OT, but, as someone who's interested in analytic philosophy and formal languages, such incorrect usages of "referential transparency" are a pet-peeve of mine. Java is more referentially transparent than Scala (at least Scala 2.x) because it doesn't have macros. It does not mean "an expression can be replaced by its value without changing [the value of an enclosing expression]" but "an expression (term) can be replaced by its reference (aka denotation) without changing the reference of its enclosing expression;" hence referential transparency -- a syntactic term is a transparent representation of its reference, something that isn't true once you have macros. It's just that if your language is a pure-functional one, i.e. one with value semantics, i.e. one whose term references are values in the object domain of the language, then a reference is a value. That incorrect usage of referential transparency is nothing but a synonym for being pure functional (or of "value semantics") rather than a feature of it.

> It's justifiable in Rust, because Rust aims for zero-cost abstractions.

It's not that async/await is inherently more "zero-cost" than user-mode threads; they're just like that in Rust given its peculiarities. A different syntactic construct allows them to restrict recursion and virtual dispatch that would make stack size non-deterministic and interfere with their chosen memory-management strategy.

[1]: The compiler inlines calls (which are on the same thread); it doesn't inline multiple monadic stages (which often entail some megamorphic callsite).


It's more than that. Java developers are switching to Kotlin; notably on Android and Spring code bases (well over 50% of backend Java projects at this point apparently).

This is an article about structured concurrency and how Loom implements something that Kotlin has shipped that both Android and Spring heavily integrate with already.

So, there's more than a casual relation between Kotlin co-routines and Loom. I'd go as far as to say that they apparently took a really good look at it and somehow ended up with something that essentially implements almost 1 to 1 the same kind of things.

Roman Elizarov (tech lead for co-routines, and recently the whole language) has been talking about structured concurrency a lot. I don't think he invented the notion but it definitely is what co-routines is designed to do.

So, when I see an article talking about how great Loom and structured concurrency is mentioning several languages but yet somehow glossing over Kotlin, I call it out as lame. Oracle no doubt has good reasons to not want to talk about Kotlin. But to me it is clear they consider it a threat. It's not the first time that they celebrate a few features new to Java where they mention other languages as influence and yet not mention Kotlin. Recent introduction of Records is a good example.

The Threading APIs date back to the late nineties and are full of complex stuff that you need to be aware of if you go near them and care about avoiding all sorts of interesting categories of bugs, issues, and pitfalls. So much so that I'd treat the occurrence of import java.lang.Thread as a giant red flag in a code review. Typically, it's a rookie mistake to use that. You shouldn't have to; there are better APIs.

Kotlin, btw. treats threads just as a special case of co-routines. You can have co-routine dispatchers that represent a particular executor. This is how you separate slow blocking IO things from e.g. CPU heavy stuff, or UI event handling. So, there is no API split in Kotlin. Some co-routines use a threadpool, some don't.


> notably on Android and Spring code bases

Of course they are. On Android, Google is doing everything they can to push Kotlin down developer throats; most Java developers end up moving to Kotlin when the alternative is a half-baked implementation of Java, called Android Java, stuck in Java 8 when we are already on Java 15. And Google is unwilling to move Java support any further than they are forced to due to IntelliJ's own changes.

Spring, well they only care about marketing and getting Spring sales, I still remember when Groovy was supposed to take over the Java world and also had first class support in Spring, just like they are selling Kotlin nowadays.

I got some nice consulting gigs porting those projects back to Java, and the same will happen with Kotlin projects when Google gets fed up with Android and moves on to Fuchsia/Flutter.


> So, there's more than a casual relation between Kotlin co-routines and Loom.

Not beyond the fact it was one of the things we looked at and decided to go in a completely different direction. We looked at Python, Go, JavaScript, C#, Kotlin, C++, Rust, Zig, Haskell, Pony, Scheme, Erlang, Céu, Scala and Clojure, and decided not to go down the C#/Kotlin path, which is why we ended up with a solution that is nothing alike. The positive influences were Erlang, Go, and Scheme (with a glance to Céu). That's why virtual threads share a lot of similarities with Erlang processes and Go goroutines, and borrow ideas from Scheme's (and OCaml's) multi-prompt delimited continuations, but are not at all like C#/Kotlin's syntactic coroutines.

There are certainly things about Kotlin we love, like nullability types, and that we'd like to see in the Java language some day. When we do, Kotlin would have been the influence. Syntactic coroutines, on the other hand, were something to avoid.

> Java developers are switching to Kotlin

Just as they had to Scala in the past, many developers are switching to Kotlin, which has reached ~2-4% on the Java Platform, and might even reach 5-7% some day. That is huge, perhaps unprecedented, market penetration for a Java Platform language, but let's not get carried away.

There are certainly languages we consider serious competitors, but Kotlin is still an order of magnitude in size away from being one of them. Java is so big that you can still be 20-50x smaller and still be a very popular language, yet not quite a competitor.

> Recent introduction of Records is a good example.

Nope. First of all, Kotlin doesn't have records. Kotlin-like data classes were something we looked at (we look at all languages) and said, we don't want that, we want records. Second, the inspiration for records was ML. So records are actually yet another example where we decided not to go in the same direction as Kotlin. This isn't to say Kotlin did something worse or better, but it did do something decidedly different.

> Typically, it's a rookie mistake to use that. You shouldn't have to; there are better APIs.

The same APIs Java users use today (Executors) are the ones they'll use when Loom lands. There is still no need to use the Thread API directly.

> Kotlin, btw. treats threads just as a special case of co-routines.

Perhaps in your desire to see Kotlin influences everywhere (and, BTW, syntactic stackless coroutines were done in C# first, at least among the well-known languages) you misunderstand what Loom is. Virtual threads are threads, period.

Anyway, it's perfectly fine to prefer Kotlin over Java (the language) just as it is to have the opposite preference. But I think you misjudge the influence those languages have and draw from.


> Syntactic coroutines, on the other hand, were something to avoid.

Could you articulate or point me to some of the arguments or thoughts that led to the conclusion here? I know and understand the term "colored functions", but I'd love to read a real analysis of the pros and cons of colored functions, because I personally go back and forth on whether I think they are bad or good. On the one hand, having a function that is explicitly marked as "this thing needs to be treated specially because it may block for a long time" is actually kind of nice. On the other hand, it's hard to write generic functions/tools when you have to handle different colored closures, for example. Also on that hand is the fact that I can still write a function as the "wrong color" if I'm inept.

Or was that decision based on something other than language semantics?

> There are certainly languages we consider serious competitors, but Kotlin is still an order of magnitude in size away from being one of them.

Just out of curiosity, what are those languages? C#? PHP?

Also, this statement makes you sound like Goliath. I don't disagree with you that Kotlin, in particular, is probably not a huge "threat" to Java. But you're slinging numbers like a politician or a PR person: "Kotlin, which has reached ~2-4% on the Java Platform, and might even reach 5-7% some day". Java has 25 years of legacy: are you seriously saying that picking up 5% of JVM code in < 10 years from the HOST LANGUAGE OF THE PLATFORM is not a little unnerving? Also, do you only care about the JVM? What if everyone switched from Java to C#? Would you still brag that "90% of code on the JVM is Java" even if 0 new projects started choosing Java? I mean, those are unlikely events, but I'm just saying "there's lies, damn lies, and statistics".

The truth is that until pretty recently, Java was really lagging behind and many devs were cursing their fate that they were still working on Java projects. Java 8 was a huge leap forward, and Java 15 is another sizeable leap. So I suspect that Java will stop the bleeding. But it was not at all guaranteed to stay a behemoth, IMO.

> Nope. First of all, Kotlin doesn't have records. Kotlin-like data classes were something we looked at (we look at all languages) and said, we don't want that, we want records. Second, the inspiration for records was ML. So records are actually yet another example where we decided not to go in the same direction as Kotlin. This isn't to say Kotlin did something worse or better, but it did do something decidedly different.

I haven't used them yet, but I think that records look better than Kotlin's data classes. But the only difference I see is that they don't auto-generate a `copy()` method, like Kotlin's data classes do. I believe that was a big mistake on Kotlin's part. Is there some other way that Java records are different than data classes? Because if that's the only difference, it sounds really disingenuous to suggest that records are not inspired by data classes. Like, what are the odds that it took until 201x for you (all) to decide to copy records from ML? You certainly didn't do it before Kotlin for some reason.


> Could you articulate or point me to some of the arguments or thoughts that lead to the conclusion here?

I gave a talk about exactly that recently at Code Mesh. I expect them to post the video soon. https://codesync.global/speaker/ron-pressler/#745why-user-mo...

> Also, this statement makes you sound like Goliath.

Maybe, but it's not just a matter of size but also trajectory. And, as you say, our competition is mostly off the Java platform. My point wasn't to brag, but to put matters in perspective. There are dozens of nascent languages, the vast majority of which will never make it to the top ten, let alone the top five. And while a few of them will, no doubt, one day unseat the incumbents, we obviously don't think of all of them as "threats."

> what are those languages?

The obvious ones. It's not that we don't know that some smaller languages will one day become big, some will even surprise us all, but extrapolating early growth to long-term success is certainly not a good model.

> Like, what are the odds that it took until 201x for you (all) to decide to copy records from ML? You certainly didn't do it before Kotlin for some reason.

As I wrote in another comment, Kotlin's arrival and Java's "boost" are both a response to the same event: Java's decline in Sun's dying days. Once Oracle increased investment, it was mostly a matter of prioritising which features to do first. Records were seen as less urgent than lambdas, so they came later.


You didn't answer a few of the interspersed questions, so I'll press you on the records one.

What about Java records are different than Kotlin's data classes besides forgoing the auto-generated `copy()` method? I understand there are some implementation details, such as inheriting from a Record base class, and how it handles serialization. But I mean as a user.

Can records implement interfaces? Can records be variants in a sealed class hierarchy? Can records have a private primary constructor? Can I customize the getters (to e.g., make defensive copies)?

For other people reading, the answers for Kotlin data classes are: Yes, Yes, Not Really, No.

As an aside, I understand you are an Oracle employee. When you post under this username are you acting in any kind of official capacity for Oracle? Like, is it part of your job, so to speak, to have a social media presence? I'm not suggesting anything negative: it's totally fair to invest in outreach, to answer questions, clarify things, etc. I was just curious if I'm talking to "a guy who loves the project he works on" or "a representative of a company".


> What about Java records are different than Kotlin's data classes

Java records, like enums (another feature that is philosophically very similar to records) aim not to reduce the boilerplate of certain operations, but to designate a subset of classes with particular semantic properties (and make those easy to express). In the case of enums, that subset is classes with a well-known, fixed set of instances; for records that is nominal tuples, i.e. immutable, unencapsulated data aggregates, similar to ML's product types. So users, the compilers, and libraries can make certain assumptions about records. For example, their semantic properties (that they are no more than a product of their component types) allow a much better serialization story for them and, indeed, record classes are serialized differently from non-record classes: Instead of invoking a no-arg constructor, their canonical constructor is invoked on deserialization. Their immutability also makes automatic implementations of equality, deconstruction and pattern matching clear and correct.
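A minimal sketch of what those semantic guarantees look like in code (the `Point` record is a made-up example):

```java
public class RecordDemo {
    // A record is a nominal tuple: its state is exactly its components, and
    // equals/hashCode/toString and the accessors are all derived from them.
    record Point(int x, int y) { }

    public static void main(String[] args) {
        Point p = new Point(1, 2);
        System.out.println(p);                         // Point[x=1, y=2]
        System.out.println(p.equals(new Point(1, 2))); // true: structural equality
        System.out.println(p.x());                     // 1: an accessor, not a getX() getter
    }
}
```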

Just like Kotlin couldn't do user-mode threads efficiently because they have no control over the platform, there was also little point in doing records, because that language's goal was to make it easier, syntactically, to work with the existing Java ecosystem, and, prior to records, Java programmers worked with JavaBean-like classes, so Kotlin tried to make those operations more syntactically pleasant. Java's designers, however, can have an impact on what that ecosystem does and can change its direction.

> As an aside, I understand you are an Oracle employee.

Yes. I work on OpenJDK.

> When you post under this username are you acting in any kind of official capacity for Oracle?

Absolutely not. I speak only for myself. I'm the technical lead for Project Loom, and I want to see what kind of reactions people have to it on social media (as well as at conferences, customer meetings, surveys, etc.). I guess I see it indirectly as part of my job, at least as far as Loom goes, as these interactions help inform how we explain the capabilities, what features people want, etc. It's nothing official, though. It's also a harmful personal addiction.


> "a guy who loves the project he works on"

Yes, he is Ron Pressler, project lead of Loom - https://twitter.com/pressron


> Can records implement interfaces?

Yes.

> Can records be variants in a sealed class hierarchy?

Yes.

> Can records have a private primary constructor?

The accessibility of the constructor is roughly at least that of the record class itself. If the record class is public, then the canonical constructor must be public. However, thanks to the positive answers to the two previous questions, you can have a readable-though-not-publicly-constructible record -- make it a private implementation of a sealed public interface.
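A sketch of that pattern (hypothetical `Shapes`/`Circle` names; assumes Java 17+ sealed types; the factory validation is added for illustration). Since both types sit in the same file, the `permits` clause can be inferred:

```java
// The sealed interface is public, so callers can read and pass Shape
// values; the record implementing it is private, so outside code cannot
// construct one directly and must go through the factory method.
public class Shapes {
    public sealed interface Shape {
        double area();
    }

    private record Circle(double radius) implements Shape {
        public double area() { return Math.PI * radius * radius; }
    }

    // The only way in: a factory that can validate its arguments.
    public static Shape circle(double radius) {
        if (radius <= 0) throw new IllegalArgumentException("radius must be positive");
        return new Circle(radius);
    }
}
```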

> Can I customize the getters (to e.g., make defensive copies)?

Yes. Although in most cases, defensive copying in the constructor is sufficient.
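For instance, a compact canonical constructor can make the defensive copy once, at construction (`Tags` is a hypothetical example; requires Java 16+):

```java
import java.util.List;

// Defensive copying in the compact canonical constructor: the record
// stores an immutable snapshot, so the generated values() accessor can
// return the field directly without copying again.
public record Tags(List<String> values) {
    public Tags {
        values = List.copyOf(values); // reassigns the parameter before field assignment
    }
}
```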


As I see it, the main difference is that existing code will magically work with Loom but will require a rewrite with coroutines. I can take an Oracle 9i JDBC driver from 1999 and use it under Loom, and it will probably work just fine. Oracle will probably not rewrite its JDBC drivers with Kotlin coroutines any time soon.


Well, Spring's r2dbc (reactive db abstraction) works great with co-routines on top of Spring's Reactor framework. The integration is well supported by Spring. Drivers that support asynchronous IO at this point include most obvious mainstream databases. I'd expect that stuff to work with Loom as well, though we may have to wait a while for Spring to release support for it.

Oracle has not gotten around to supporting Oracle DB with r2dbc just yet, apparently: https://stackoverflow.com/questions/58813658/r2dbc-oracle-da... but it appears to be in the works. Given that they are developing Loom, they probably aim to have that stuff working together perfectly with their own DB. If/when it does, it will work with Kotlin co-routines as well. Basically, if it runs with Java, you can use it from Kotlin and trivially wrap it with a co-routine. I've done this recently for several other databases that support async IO with some callback mechanism.

That old Java driver will work fine with Kotlin and co-routines as well. However, you probably want to use the IO dispatcher to ensure you have enough threads to deal with it blocking, so your server doesn't hang. That is kind of what structured concurrency is about. Of course, most blocking-IO database drivers would typically use some connection pool backed by a (real) thread pool. I'm not sure how Loom 'magically' deals with interrupting IO-blocked virtual threads, but I have a hunch it just means everything on the underlying OS thread ends up being blocked. Using virtual threads in a connection pool is probably going to end in tears. I'm not aware of any magic that Loom provides that addresses that, other than just allowing you to use either OS threads (real?) or virtual threads (aka fibers, co-routines, green threads, lightweight threads, etc.).


There is an important difference with Kotlin coroutines. In Kotlin you still have the problem of colored functions [1]: those marked with `suspend` versus regular functions. You can't call suspend functions from regular functions, and the world is divided into blocking and non-blocking APIs (e.g. Thread.sleep() vs. delay()). And then you have to use things like `runBlocking` to bridge these two worlds.

If I understand correctly, Loom breaks that wall completely. You don't need to mark functions as `suspend`; the runtime is just smart enough to do the right thing. For example, if you call Thread.sleep() on a regular OS thread, that will block, but if you run it on a lightweight thread, it will suspend instead, allowing the runtime to use the OS thread for another task.
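A sketch of that behavior using the Loom API as it eventually shipped (`Thread.startVirtualThread` and `Thread.isVirtual`, Java 21+; method names differed in early preview builds):

```java
// The same blocking call, Thread.sleep, behaves differently depending on
// the kind of thread it runs on. On a virtual thread it parks only the
// virtual thread, freeing the carrier OS thread for other work.
public class SleepDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread vt = Thread.startVirtualThread(() -> {
            try {
                Thread.sleep(100); // suspends the virtual thread, not an OS thread
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        vt.join();
        System.out.println(vt.isVirtual()); // true
    }
}
```

Note there is no `suspend` marker anywhere: the exact same `Thread.sleep` call works unchanged on both platform and virtual threads.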

And there is one more thing: because Loom is implemented at the VM level, when using lightweight threads you get all the good things you typically get with regular threads: proper stack traces and native debugging and profiling support.

[1] https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...


> Since the article goes out of its way to not mention Kotlin,

"Since the article goes out of its way to not mention [Rust|Kotlin]" and "I'm surprised this article has no mention of [Rust|Kotlin]" must be one of the most frequently used templates on HN.


The difference is that Kotlin has been gobbling up Java users via Java frameworks like Android and Spring. Coroutines are used in both and implement structured concurrency exactly like Loom does. It's not just a little bit similar and vaguely related: it maps 1-to-1 conceptually, at least for the stuff that Loom actually implements.

Rust indeed would deserve a mention as well but it follows a somewhat different approach to the same problem. But Kotlin is somewhat special here as Oracle is bleeding users to specifically Kotlin.


I think that Kotlin's existence has really given Java a kick in the pants. I agree that a lot of people have been picking up Kotlin because of how verbose and "last century" programming in Java feels. Modern (statically typed, imperative, Algol-like) languages are so much more ergonomic (Rust, Go, Kotlin, Swift, TypeScript, etc.) that it's kind of painful to go back to Java, with its silly "everything has to be an object, except we still have unboxed primitives for some reason", etc.

On the other hand, to their credit, the Java devs (Oracle, I guess) have really stepped up their game in response. Java just (fucking finally) got "records" (better than data classes, IMO), and sealed classes. Soon they'll have virtual threads. And some day they may actually have what I think they call "value types" (bad name), which will be excellent news.

With these things, they've implemented basically all of Kotlin's "must have" features except for null safety and clean, sexy syntax. If they ever tack some kind of "strict null mode" onto Java, that might just be the end of Kotlin. Everyone who wants a functional JVM language will go to either Scala or Clojure, and the rest can stay on Java. That is, unless, of course, the Kotlin guys pull more tricks out of their sleeves. But in all reality, the entire purpose of Kotlin seems to have been to provide a middle ground between Java and Scala while learning from the mistakes of both. If those languages also learn from their own mistakes (they are), then Kotlin is in trouble.


Kotlin was created circa 2010 as a result of Java's stagnation in Sun's dying days. Oracle has since increased investment in the platform. So Java's resurgence and Kotlin's appearance are both a response to the same event rather than one causing the other.


> ergonomic (Rust, Go, Kotlin, Swift, TypeScript, etc), that it's kind of painful to go back to Java.

I really don't know how you can include Go here. It's extremely unergonomic to manually write for loops, and to deal with pointers and references.


That's true. And honestly, I don't personally like Go. I included it out of respect. :)

But the whole static duck typing thing is really nice and much less awkward than Java-style interfaces, IMO.


Kotlin is just yet another guest language on the JVM, which is written in a mix of Java and C++, with no Kotlin code around, neither today nor tomorrow.

Google is pushing Kotlin as hard as they can on Android, while leaving the Android Java dialect to stagnate at a pseudo-Java 8 compatibility level; they have a political agenda to play here.

Spring just goes after any shiny thingie that might bring users into their domain: Groovy, Clojure, Scala, and now Kotlin.


What makes you think that Kotlin should be mentioned? Especially since concurrency in Kotlin is really not great, compared to concurrency in Erlang or Scala.


I'm not familiar with concurrency in Erlang or Scala, could you explain how it differs from the approach in Kotlin (which I know)?


I've posted above how Scala's IO/Task is different from C#/Kotlin async-await.

https://news.ycombinator.com/item?id=25305574

I hope you will find it helpful.


Thanks! Really useful.


For starters, because the concurrency story in Kotlin is also built on the structured concurrency idea.


Well, Kotlin is not the only language. :)

The same is true for Scala and other languages. Languages that even existed before Kotlin.



