I first encountered Algebraic Effects in Unison, where they're called "abilities" [0], via the Strange Loop talk from two years ago [1]. Just from the little I've seen of it, I feel like AE is a fundamental abstraction tool that's been missing in programming language design. "Fundamental" as in on the same level as function arguments. So many problems that were solved with a myriad of complex programming language constructs are just absorbed as a trivial user implementation with an effect system: exceptions, sync vs async, dependency injection, cancellation tokens, dynamic contexts... all of them problems where the essential complexity is the need for an effect that cuts through the function call stack.
I'm not saying that all our problems are solved and the programming world will now be rainbows and butterflies, I'm just saying that this feature is the correct framing and abstraction for issues we've run into many times in the past, and it has the potential to greatly simplify and unify the hacky, bespoke, situational solutions we've found.
Java has always had checked exceptions, a weak form of type-checked effect. They were controversial because developers didn't like being forced to handle them, but I always thought they were a great idea. Algebraic effect handlers just generalise the idea of an exception, by providing a continuation that can be called to resume execution.
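To make the "resume" part concrete, here's a minimal sketch using the Effect.Deep API from OCaml 5's standard library (the Fail effect, the divide function, and the handler are invented for illustration, not taken from the article). Where an exception handler could only unwind the stack, this handler hands a replacement value back to the code that performed the effect and lets it carry on:

    open Effect
    open Effect.Deep

    (* Performing Fail asks whichever handler is installed for an int. *)
    type _ Effect.t += Fail : string -> int Effect.t

    let divide a b =
      if b = 0 then perform (Fail "division by zero") else a / b

    let () =
      let result =
        match_with
          (fun () -> divide 10 0 + divide 10 2)
          ()
          { retc = (fun v -> v);
            exnc = raise;
            effc = (fun (type a) (eff : a Effect.t) ->
              match eff with
              | Fail msg ->
                  Some (fun (k : (a, _) continuation) ->
                    (* An exception handler could only unwind; here we
                       hand back a value and resume the computation. *)
                    Printf.eprintf "warning: %s\n" msg;
                    continue k 0)
              | _ -> None) }
      in
      Printf.printf "%d\n" result  (* prints 5 *)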
The problem with Java exceptions is that they are used to paper over the lack of multiple return values in totally mundane situations where there are genuinely multiple possible outcomes. "Tried to open a file that doesn't exist", "tried to open a socket to a domain that couldn't be resolved", "user doesn't have permission to perform that action", etc. are normal failures, not exceptional ones. But all of these totally normal outcomes are mediated by the same language feature that also deals with indexing past the end of an array or dereferencing null, both of which are clearly program bugs. That's why checked exceptions were controversial: they were a noisy workaround for a proper language tool for managing multiple outcomes. Go takes a small step towards fixing this by making packing and unpacking tuples easy and normalizing returning an error as the last tuple value; Rust and other languages with discriminated unions and an integrated match actually solve it.
I guess if it helps you to understand typed effects to describe them as "Java checked exceptions with an option to resume", then I'm glad that works for you, but for me Java exceptions have so much other baggage surrounding their design that I would prefer describing it from the other direction: "typed effects would enable you to implement a host of cross-stack features, including a checked exception system like Java's".
I am not advocating Java the language, and its shortcomings are really the topic of another thread. I am also not seeking to understand algebraic effects starting from Java; I've read the original Eff paper and would encourage others to do so. I raised checked exceptions only as an example of a form of type-checked effect that is already in widespread use.
Adding some more related articles. This was mostly the result of me trying to find more useful articles to understand the topic better; they had been lost in my browsing history.
And this is how it should be: not a language feature, but a library. With a language feature you're dealing with the compiler and may affect more people than needed; with a library you can use (and extend) it as you wish.
> Note that OCaml 5.0 focuses on minimal (solid) support for the multicore runtime system, and will not provide stable user-facing concurrency and parallelism libraries.
Progress reports on OCaml multicore have been exceptionally well done for many months [0]. It's quite a big piece of work, and impressive to see it materialising as general public availability. I wouldn't be worried about higher-level abstractions at this stage; a well-done runtime part is the most important thing now.
Thanks for the article. OCaml has long been on my short list of languages to learn, and continuations are a hobby of mine. I'll have to dig into this deeper.
If someone has experience with algebraic effects, I have a question to ask. Why are they needed at all as a type system extension, and why can't they just be represented with function types? (Excuse my Haskell pseudocode; I'm just a filthy C++ programmer abusing notation from a language I don't really know. Also, don't assume lazy semantics.)
newtype Effect a = Effect (a -> Effect a)
newtype EffectHandler a = EffectHandler (() -> (a, EffectHandler a))
A function with an effect would have an explicit Effect parameter, while an effect handler would take an EffectHandler as a parameter (and return it as well). You could also add phantom types if you really need to distinguish different effects beyond 'a.
The only magic would be in the function that creates the continuation:
typed_callcc1 :: (Effect a -> Effect a) -> EffectHandler a
Of course you could generalize Effect and EffectHandler into a bidirectional typed continuation:
newtype Cont a b = Cont (a -> (b, Cont a b))
I don't pretend to fully understand algebraic effects but from what I see they are pretty much equivalent, except that there is no explicit effect parameter, just the type (so the continuation is not exactly first class and it is logically passed implicitly). For similar reasons, I think you can't hide them in a closure. What is the gain? What am I missing?
You wouldn't be able to use normal code, i.e. loops, if statements, pattern matching, etc. What you're trying to describe is monad-ish, like promises or simply callbacks. Algebraic effects are much more general: the code is normal, sync-looking code, you can have async/await semantics without function coloring, and you can customize code with dependency-injection-like behaviour, e.g. you can define a logging effect in your library without specifying which logging library your user has to use, since the caller can customize it. It's not so much a type system extension as a new language construct – it's simply a try/catch-ish yield burrito :)
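To make the dependency-injection point concrete, here's a rough sketch with OCaml 5's Effect module (the Log effect and with_stderr_logging are invented names, just for illustration): the library code performs Log without naming any logging backend, and the caller decides what logging means by installing a handler.

    open Effect
    open Effect.Deep

    (* Log is an invented effect, purely for illustration. *)
    type _ Effect.t += Log : string -> unit Effect.t

    (* "Library" code: performs Log without naming any logging backend. *)
    let do_work () =
      perform (Log "starting work");
      let x = 1 + 2 in
      perform (Log "finished work");
      x

    (* "Application" code: decides what logging means by installing a handler. *)
    let with_stderr_logging f =
      try_with f ()
        { effc = (fun (type a) (eff : a Effect.t) ->
            match eff with
            | Log msg ->
                Some (fun (k : (a, _) continuation) ->
                  prerr_endline msg;
                  continue k ())
            | _ -> None) }

    let () = Printf.printf "%d\n" (with_stderr_logging do_work)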
See my reply to the sibling comment with the working C++ example. The call to the effect function is not a tail call: the effect function will eventually resume its caller, providing the next continuation to call. There is definitely no colored-function problem, so you will be able to yield from a for loop just fine.
As far as I understand, effects are similar to delimited continuations in the way the effect handler is found via dynamic scoping, but in addition there is an extension to the type system to guarantee that at least one effect handler of the correct type is in place.
So I'm wondering whether it wouldn't be better, or at least equivalent, to simply pass the continuation around (i.e. with lexical scope) as a first-class value and attach the effect type to it, obviating the need for an ad hoc type system extension.
I must be missing something, and there must be some use cases that can't be easily expressed this way.
I think I don’t understand your types. The Effect type you define appears to be, essentially, a function that takes infinitely many arguments of type a.
Let's imagine two simple effects. One prints a string (I'll call this 'printer') and one reads an int entered by the user (let's call it 'reader'). In this case, how would those effects be modelled with the types you wrote?
But it still isn’t obvious to me. If you want that IO to be asynchronous then how will you return the Effect Int (by calling the argument with the input) from input_int? I suppose the answer is that you implement a scheduler but I can’t work out how you want the details for yielding to work.
It is not like a continuation, it is exactly a (typed) continuation, or better, an infinite list of continuations (invoking the continuation yields, in addition to a possible value, the next continuation in the stream).
In your example of reading an int the EffectHandler and Effect are simply switched (better names are sink and source). And yes, for IO you will need a scheduler, but streams are much more straightforward.
I have reached my limit of pure functional language knowledge, but I can offer you a working implementation [1] in an imperative language.
I've actually implemented these typed continuations in C++ years ago, and I'm trying to understand how they differ from effects (aside from the whole imperative thing).
In the C++ implementation, for convenience the continuation object is replaced with the next continuation when invoked, but internally, actually invoking a continuation function returns the yielded value and the next continuation, as in my EffectHandler example.
Something I rarely see addressed: why was multicore OCaml blocked on having full-fledged effects? Couldn't multicore have landed years ago, with effects then gradually added to the language?
Multicore upstreaming wasn't blocked on having fully-fledged effects. If you look at the diff between multicore 5.00 and trunk OCaml, the changes required for fibers are pretty small relative to the multicore GC and making the rest of the runtime thread-safe.
The original plan was to upstream only the multicore GC. This was sped up on the suggestion of the core developers and now 5.0 will bring parallelism and effect handlers (though without syntactic support for the latter).
Wow, algebraic effects are a totally new thing to me; I'd never seen them mentioned before.
I always thought that async/await was the best known way to handle resumable computation.
For more mind-blowing stuff, try learning F#, where async/await as a language feature is something you could implement entirely in user-space (though of course you need to access the .NET APIs if you want to implement parallelism). F# has "computation expressions", which allow you to define syntax for fully-general "monad-like things", and the built-in `async` computation expression is just a part of the standard library and is defined using that mechanism.
I imagine exposure to algebraic-effects systems must make one feel the same way: like it's such an awful hack when a language has to have async support baked into its syntax!
Effectful concurrency feels so different from the async/await model I'm used to, but I'm really looking forward to playing around with it when OCaml 5 drops. Does anybody here have a more "ELI5" tier guide to it?
Thanks for the links, that was a very succinct talk and made me excited to try it out. Just one follow-up question: the 4.12+domains branch doesn't include support for the try syntax for handling effects, correct?
Correct. The 4.12+domains branch has effect handlers without syntactic support (which is what will be in 5.0); the 4.12+domains+effects branch has the syntax.
I've actually been playing with a similar idea in JavaScript, having pure functions generate "Plans" for async actions which are then executed later by other code. They can be thought of as Promises that haven't happened yet.
A neat side-effect (no pun intended) of doing things this way is that, unlike Promises, Plans can be stored as constants (or cached) and re-used multiple times.
I'm sure it's nowhere near as advanced or flexible as the OP, but it seems to be in the same general spirit
Yes, you "can" somehow emulate it with async generators everywhere, but your JS code will look more like Brainfuck than JS. It really requires a language construct: yield, try/catch, or pattern matching can likewise be simulated without language support, but it's going to be a total disaster.
Doing things in an IO monad, you don't distinguish much between types of effect (everything's just in IO), and you just execute the action when you run into it, which means that you don't have, e.g., lookahead to see if you can do one batch request instead of 10 individual requests. There have been a few attempts to address this:
Monad transformers allow you to separate types of effects (so you can specify, e.g., "this code only needs environment variables, not database access") and, at least at compile time, select a different implementation for each effect. In Haskell, at least, though, they have the drawback of needing typeclass instances (interpreters) to be defined for every concrete monad stack, basically explicitly describing how the effects interact with each other (the n-squared instances problem; in practice, there's a bunch of template code to help mitigate the boilerplate).
Somewhat relatedly, Haxl, in an attempt to optimize effects, introduced a compiler change to identify less dynamic code (code that only needed Applicative), and Selective Functors, to allow for more optimization based on what's coming next.
Algebraic Effects (assuming I'm not incorrect to conflate them a bit with free effects/extensible effects) make things more dynamic, so you're instead effectively building the AST you want, and separately interpreting it at runtime. This should let you look at more of the call tree to decide on an execution plan. Since you'd also not be relying solely on the typeclass mechanism to pick an interpretation strategy, you should also be able to more easily describe how the interpreters compose, saving you from all the boilerplate of the transformers approach.
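If it helps to see the "write the program once, pick the interpreter later" shape outside Haskell, here's a loose sketch using OCaml 5 effect handlers (the Fetch effect and the interpret/run_stubbed/run_logged helpers are all invented for illustration). It doesn't capture Haxl-style batching or lookahead, just the part where the same effectful program runs under different interpretations chosen at the call site.

    open Effect
    open Effect.Deep

    (* Fetch is an invented effect; think "some request we may want to
       interpret differently in tests and in production". *)
    type _ Effect.t += Fetch : string -> string Effect.t

    (* The "what": a program written against the abstract Fetch effect. *)
    let program () =
      let a = perform (Fetch "/users/1") in
      let b = perform (Fetch "/users/2") in
      a ^ ", " ^ b

    (* The "how": run [f], answering each Fetch with [answer]. *)
    let interpret (answer : string -> string) f =
      try_with f ()
        { effc = (fun (type a) (eff : a Effect.t) ->
            match eff with
            | Fetch url ->
                Some (fun (k : (a, _) continuation) -> continue k (answer url))
            | _ -> None) }

    (* Two interpreters for the same program, chosen at the call site. *)
    let run_stubbed f = interpret (fun url -> "stub:" ^ url) f
    let run_logged f =
      interpret (fun url -> prerr_endline ("GET " ^ url); "live:" ^ url) f

    let () =
      print_endline (run_stubbed program);
      print_endline (run_logged program)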
Monads are too specific, a lot of things that they are used for could be represented by weaker constructs such as Applicative.
See e.g.: https://www.microsoft.com/en-us/research/publication/desugar...
"Furthermore, 10,899 (28.0%) were fully desugared into Applicative and Functor combinators and thus would not require a Monad constraint." б) "The Haxl codebase at Facebook. [...] and 7,600 (26.9%) were fully desugared into Applicative and/or Functor."
Isn't Java capable of the same thing now that it has algebraic data types with records + sealed classes + pattern matching? Given that Java already has a fine concurrency story, isn't OCaml a hard sell to someone who's not into compiler development and who could just depend on Java's rich ecosystem?
Though they share the word "algebraic", algebraic data types != algebraic effects. And while Java has good support for concurrency primitives and concurrent data structures, it does suffer from the problem highlighted in the article:
> Over time, the runtime system itself tends to become a complex, monolithic piece of software, with extensive use of locks, condition variables, timers, thread pools, and other arcana.
I'm not an expert on this, but my understanding is that the problem that algebraic effects tries to solve is to improve language semantics to make it easier to separate different levels of abstraction (e.g. separating the what from the how), while also encoding the performed effects into the type system.
In fairness to the JVM, the work on Project Loom would bring the JVM in line with what that document describes as the Erlang/Haskell "hybrid threading" model.
This has nothing to do with the JVM. Scala, for example, is already capable of exactly what Erlang/Haskell do. This is merely about the language Java, which lacks the support to make such a programming style ergonomic. You either need a language with a very powerful type system or a dynamically typed language (or specific support for it, like in Go, but even in Go you are limited to what the language designers foresaw).
Project Loom will not change that, but it will improve performance for certain scenarios.
Project Loom is both about the JVM and the language Java. Most of the work for Loom AFAICT is at the JVM level, and the benefits are that all Java code will Just Work(TM) with the new primitives underneath them at the end.
Scala’s varied async/concurrency libraries are implemented in user land, and still use threads underneath. Mechanically, you must opt in to these and have to work to interop with code that might use other primitives. Scala can handle a lot of this complexity at compile time w/ types, but it’s not perfect, and certain runtime behaviors will always be out of scope.
Loom improves this by allowing any language that runs on the JVM (Java, Scala, Clojure) to transparently use virtual threads to run their existing synchronous, scheduler-unaware code on the new Loom concurrency primitives implemented in the JVM. That's powerful!
Everything you say is correct, but it can create a misunderstanding, so I want to elaborate for other readers:
Concurrency is not mainly about threads or performance, it is about program behavior semantics. Loom does not do anything about that, it "merely" improves performance. Well, you could say it actually makes semantics worse (you used (TM) for good reasons).
In that sense, I believe that Scala or any language with good concurrency semantics will benefit from Loom more than Java, unfortunately. But Java can of course still catch up on a language level. Even after such a long time, there are still new and interesting libraries (e.g. looking at JOOQ).
In OpenJDK, Java threads are just thin wrappers around OS threads and OS threads are a very precious resource; a modern OS can't support more than a few thousand active threads at a time.
I'm not sure how one would get there with the JVM's memory model. you'd need something like actors and a preemptive scheduler per core at the VM level with a share nothing state between actors/virtual threads. Erlang utilizes message passing and immutability to do this.
Which is precisely why the Java language specification doesn't state whether they are green or red threads; they just happened to evolve into red threads across multiple implementations.
Project Loom is bringing green threads back, now officially as virtual threads.
Additionally there is java.util.concurrent and for those that care to really go deep enough, custom schedulers.
> I'm not sure how one would get there with the JVM's memory model. you'd need something like actors and a preemptive scheduler per core at the VM level with a share nothing state between actors/virtual threads.
Part of what Project Loom is doing is bringing lightweight usermode threads, called "virtual threads", and its own scheduler.
Importantly you don't need share nothing state or immutability to add preemption, the JVM already has points during code execution where it knows the full state of the program. They call these "safepoints" and they're important for the GC to work properly. With the current implementation of Loom virtual threads are preempted when they do any blocking I/O or synchronization, but there's no reason why in the future they couldn't preempt them at any safepoint.
>In OpenJDK, Java threads are just thin wrappers around OS threads and OS threads are a very precious resource; a modern OS can't support more than a few thousand active threads at a time.
I don't know what you define as a few thousand active threads, but running the following C++ code let me run 70,000 threads before I got a resource error:
Erlang processes are not OS processes. They are implemented by the Erlang VM using a lightweight cooperative threading model (preemptive at the Erlang level, but under the control of a cooperatively scheduled runtime). This means that it is much cheaper to switch context, because they only switch at known, controlled points and therefore don't have to save the entire CPU state (normal, SSE and FPU registers, address space mapping, etc.).
Erlang processes use dynamically allocated stacks, which start very small and grow as necessary. This permits the spawning of many hundreds of thousands — even millions — of Erlang processes without sucking up all available RAM.
Erlang used to be single-threaded, meaning that there was no requirement to ensure thread-safety between processes. It now supports SMP, but the interaction between Erlang processes on the same scheduler/core is still very lightweight (there are separate run queues per core).
Java's checked exceptions can be considered an effect system, and coupled with ADTs I suppose we could call it an algebraic effects system. I mean the domino effect in which the exception has to be handled in each calling method: that is what seems to me the effect/handler counterpart present in Java. In the end, isn't an algebraic effect just an extension of a type system that supports ADTs, or is my memory from university failing me?
The defining feature of these effect systems is "resumable continuations". Essentially, at the point where you catch an exception, you have the option of resuming the code which threw the exception, and you can tell it how to proceed.
So, whereas exceptions only jump backwards in the stack, resuming a continuation sorta lets you jump forwards again, back to where you were. It's really powerful stuff.
You cannot resume a computation with exceptions. At most, exceptions are a subset of effects. Similarly, a lot of the issues with checked exceptions in Java come from the lack of exception polymorphism in the checked exception type system. Adding the two points, you cannot really call checked exceptions an effect system.
[0]: https://www.unisonweb.org/docs/abilities
[1]: https://youtu.be/gCWtkvDQ2ZI