Koka: Strongly typed functional-style language with effect types and handlers (koka-lang.github.io)
219 points by nateb2022 10 months ago | 92 comments



Related. Others?

Koka: A fast functional programming language with algebraic effects - https://news.ycombinator.com/item?id=38421003 - Nov 2023 (2 comments)

The Koka Programming Language - https://news.ycombinator.com/item?id=28335043 - Aug 2021 (2 comments)

Koka: A Functional Language with Effects - https://news.ycombinator.com/item?id=27710267 - July 2021 (12 comments)

A Tour of Koka (an elegant programming language with Algebraic Effects) - https://news.ycombinator.com/item?id=26292411 - Feb 2021 (1 comment)

An Introduction to the Koka Programming Language - https://news.ycombinator.com/item?id=14647415 - June 2017 (1 comment)

Koka – A function-oriented programming language - https://news.ycombinator.com/item?id=10131071 - Aug 2015 (10 comments)

Koka a function oriented language with effect inference - https://news.ycombinator.com/item?id=4407415 - Aug 2012 (1 comment)


Some of the research behind Koka was discussed at:

FP2: Fully In-Place Functional Programming [pdf] - https://news.ycombinator.com/item?id=36471591 - July 2023 (24 comments)

Perceus: Garbage Free Reference Counting with Reuse [pdf] - https://news.ycombinator.com/item?id=25464354 - Dec 2020 (44 comments)

Implementing Algebraic Effects in C - https://news.ycombinator.com/item?id=14887341 - July 2017 (21 comments)


Nice. Thanks!


Nice to finally see this get some attention on the front page.


> Effect handlers let you define advanced control abstractions, like exceptions, async/await, iterators, parsers, ambient state, or probabilistic programs, as a user library in a typed and composable way.

> Perceus is an advanced compilation method for reference counting. This lets Koka compile directly to C code without needing a garbage collector or runtime system! This also gives Koka excellent performance in practice.

Effectful functional language that compiles to C? Sounds great.


I've only played around with it a little bit, but it really is a cool language.

I particularly enjoyed this presentation, which is what sold it to me as a good idea worth spending some time on: https://youtu.be/6OFhD_mHtKA

OCaml 5.0 also interestingly includes an effect handler system: https://v2.ocaml.org/manual/effects.html


I’ve been following the road to OCaml 5 and the effect system is nice, but getting effects without bringing in a garbage collector seems quite special. I’m working on a C-WASM wrapper library, and one issue I struggle with in C code is dealing with async IO outside the WASM module. I haven’t looked deeply into the FFI story of this new language, but it would directly solve a problem that currently needs the ASYNCIFY Binaryen transform, which adds a 2x size increase and a 50% performance penalty. Can Koka call C in an effect? Can it suspend a stack if C calls into it? I can presumably drop Koka into my existing C-WASM project with a few new Makefile rules. These are definitely places OCaml can’t go; by compiling to C, Koka makes these advanced FP features much more attainable (at least, for me).


Koka is garbage collected. There are several terminology ambiguities in play: the documentation takes "garbage collection" to refer to tracing garbage collection specifically, as opposed to runtime cost of any sort, which, while common in some circles, seems less common as a whole (this is extremely confusing, all the time). The runtime overhead here is reference counting, which is going to be a whole lot nicer to deal with (especially re: C FFI) than tracing, but it does exist.

The reason Koka's GC is interesting despite being based on reference counting is that its ownership analysis eliminates most reference-count operations at compile time - and additionally can tell whether to use atomic RC (slow but threadsafe for shared data) or non-atomic RC (fast, but only threadsafe when data is moved across threads, not shared). This ownership analysis is very similar to what some other languages like Nim do (though Nim differs in not allowing atomic RC at all).

The other strange bit of terminology occasionally tossed around in Koka documentation is "garbage free": Koka takes this to mean that at any given point in the program, there is no memory waiting to be freed. This is because the ownership analysis lets the compiler know exactly where the last use (or possible last use) is and insert destructors accordingly. All of that has made Koka's GC algorithm fast and low-overhead enough that it's competitive with state-of-the-art tracing GCs (specifically, OCaml's). I haven't seen benchmarks comparing it to manual memory management or strict ownership systems, but that's not really the point - manual memory management is unsafe, and strict ownership is complicated and occasionally inexpressive. Koka's system might just be the best you can get, with those tradeoffs in mind.
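
As a concrete illustration (my own untested sketch, in the spirit of the Perceus paper's examples): if the list passed to a function like the one below happens to be uniquely referenced at runtime, the reuse machinery means each matched Cons cell can be overwritten in place for the result, with no reference-count traffic at all.

    // Koka-ish sketch: when `xs` is unique, each matched Cons cell
    // is reused for the result instead of being freed and reallocated
    fun inc-all( xs : list<int> ) : list<int>
      match xs
        Cons(x, xx) -> Cons(x + 1, inc-all(xx))
        Nil         -> Nil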

Anyway, this doesn't answer your question at all. Sorry. I hope it's interesting, though.


I’m on their side of this semantics issue - I don’t consider reference counting to be garbage collection; to me it’s a standalone group of memory management techniques, even if there is some overlap in implementation techniques with (tracing) GC sometimes. It’s in the same category as Swift, right? No runtime, different thing.


Well, it has a runtime. Checking and updating the reference count (more so the latter) is not zero-cost. This runtime is just deterministic. Of course, then the question is whether we have the same definition of "runtime" (I would take it to mean any extra memory or processor overhead at runtime that is not strictly necessary)... naming, and consistent naming, is an extremely hard problem in computer science.

It's in the same category as Swift, yes, but much improved: Swift does not do ownership analysis to get rid of counts (though I've heard they're looking at alternative region-based approaches), and their counts across threads are always atomic (and thus slow).

Reference counting has traditionally led to worse performance than tracing. So even though I get the desire to think of it as separate because it's just transparently replacing your allocator / deallocator with one that does a little bit more instead of having a whole separate tracing collector program, I'd still probably refer to both tracing and reference counting as "garbage collection", and then refer to them + ownership systems (+ regions + everything else new) as "memory management techniques".

The overlap in implementation techniques between tracing and reference counting is interesting. You might enjoy this paper: https://dl.acm.org/doi/10.1145/1028976.1028982


They have a C library for writing effect handlers in C, so you could potentially just use that directly:

https://koka-lang.github.io/nodec/api/group__effect.html

On GitHub:

https://github.com/koka-lang/libhandler

I believe it grew out of this prior work:

https://news.ycombinator.com/item?id=14887341


Interesting project. Koka does have great interop with C code. An async library has existed in the past, and will likely be reintegrated soon. Unfortunately that will most likely be backed by the LibUV event loop which is not WASM compatible. However, Koka's delimited continuation machinery does not require stack suspension or stack swapping at all. All it requires is a bit of thread local state. Koka builds up continuations through a series of function pointers and closure state, and only when needed. So theoretically all you need is some sort of event loop.


Also, as a Lisp programmer, I absolutely loved reading the section at https://koka-lang.github.io/koka/doc/book.html#sec-handlers - it's essentially like using the Common Lisp condition system, making me feel at home, except it's more generalized (e.g. masking individual condition handlers is not generally possible in CL) and also working in a strongly statically typed environment.
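
To illustrate the resemblance, here's an untested sketch (effect name made up): the handler plays the role of a CL handler invoking a restart, resuming right at the point where the "condition" was signaled.

    effect ctl ask() : string

    fun greet() : <ask,console> ()
      println("Hello, " ++ ask())

    fun demo() : console ()
      with ctl ask()
        resume("World")  // control returns to the ask() call inside greet
      greet()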


As far as I understand the PL theory involved, it's statically typed delimited continuations, which I think are even more general than the CL condition system.


In CL, you can control the visibility of restarts. In the restart-bind construct, there is a :test-function which serves as a predicate that determines whether the restart is visible. (Unfortunately, this function is referred to as a condition).

While restarts can be visible or not, I believe there is no such mechanism for handlers. However, handlers can be effectively invisible/transparent by declining to handle a condition, which they can do simply by returning instead of perpetrating a non-local transfer.

Also, there doesn't seem to be an API in Common Lisp for calculating the handlers visible at a given point for a condition of a given type. So that means that the ability of a handler to decline a condition is pretty much as good as a visibility mechanism.


I was confused because the reference counting in the "Why Koka" part (section 2) of the book [1] seemed mismatched, so I looked it up in their reference counting TR [2]. It turns out it uses a seemingly novel approach to reference counting where any function you pass a reference to is responsible for decrementing and possibly freeing that reference. If you need to pass a reference to two functions, you have to dup it once.

This makes it possible for a fold to free all the Cons cells as it traverses the list. The reuse analysis is cool too, with in-place updates of structures that won't be referenced again.
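
Conceptually it looks like this (sketch; the dup/drop operations are inserted by the compiler, not written by the user):

    fun use-twice( xs : list<int> ) : int
      // compiler inserts dup(xs) here: xs is consumed by two calls below
      val a = xs.length  // first call consumes (drops) one reference
      val b = xs.length  // second call consumes the last reference, freeing xs
      a + b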

[1] https://koka-lang.github.io/koka/doc/book.html [2] https://www.microsoft.com/en-us/research/publication/perceus... (see section 2.2)


Interestingly, this seems to be sponsored by Microsoft Research, which has also heavily supported the development of and research on Haskell.

(Source: https://www.microsoft.com/en-us/research/project/koka/)


And Lean (which also uses Perceus RC) too: https://www.microsoft.com/en-us/research/project/lean/


Did they not effectively acqui-hire or plain hire the pony lang team too?


> Koka tracks the (side) effects of every function in its type, where pure and effectful computations are distinguished. The precise effect typing gives Koka rock-solid semantics backed by well-studied category theory, which makes Koka particularly easy to reason about for both humans and compilers.

Pretty comical to hear the words "easy to reason about" and "category theory" in the same sentence.

With apologies to the Haskellers, any time "category theory" is mentioned I feel myself shying away, prior experience teaching me that those words mean "you will spend the majority of your time working around the type system" and "we have more data types than individual bits of data those types describe".

A little type system goes a long way, and there's such a thing as too much in my opinion.

I was initially interested in the language because of its algebraic effects, since I'm told they're basically the same as Common Lisp conditions. I really liked learning about conditions and I wish they were in more languages. I must confess I am less interested now.


Category theory is not the headline news here, it's safety without sacrificing performance by using a new technique to compile to binaries without a runtime or garbage collector, with potentially better deallocation performance than Rust, and functional-looking code like `array.map(func)` which performs as fast as a for-loop thanks to advanced analysis.


So this means we have benefits of Rust but in a nicer, functional syntax?


Yes, essentially. Theoretically it could outperform Rust since it has aggressive local reuse of to-be-freed memory. Several papers have shown that it gets close to or outperforms C in some benchmarks. Whether this holds in practice is up for debate, but if you don't have to annotate lifetimes and don't have to manually manage memory, are you really going to care if you are 5% slower than C? There is a lot of performance optimizations still available for Koka to target as well. So far they've just taken the lowest hanging fruit (memory management).


How does it achieve better deallocation performance?


They have created a new precise reference-counting algorithm, called Perceus: https://www.microsoft.com/en-us/research/uploads/prod/2020/1...

Its innovation (they claim) is that it deallocates objects immediately as soon as it can prove that the object is not referenced any more. Not at the end of a lexical scope, like C++ smart pointers and Rust, but immediately after the last use of the object.
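
In Koka-ish pseudocode (my own sketch, not from the paper), the difference looks like:

    fun demo() : int
      val xs = [1,2,3].map(fn(i) i * 100)
      val n  = xs.length  // last use of xs: the drop (free) is inserted here...
      // ... other long-running work ...
      n * 2               // ...not at the end of demo's scope, as in C++/Rust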


Interestingly, in Rust, we thought about doing this. "early drop" was the term folks talked about https://github.com/rust-lang/rfcs/pull/239

One big concern was that there's a lot of pre-existing unsafe code that could rely on the drop being at the end of the scope, and not earlier. And you could imagine where, in a world with early drop, maybe you write some unsafe that's sound, but then the safe code changes, something gets dropped earlier, and now it's not sound anymore.

This doesn't mean it affects Koka - I haven't spent any time with it yet, so I don't mean to imply this is the wrong decision - just sharing some related work.


thanks for this - it wasn't immediately obvious to me why the headline example was traversing a list, but doing that fast without GC is cool


> A little type system goes a long way, and there's such a thing as too much in my opinion.

Agreed.

The purpose of "types" is to make the program easier to understand, because the invariants expressed as type declarations hold at all times during program execution. That makes it EASIER to reason about the program; in other words, it is easier to understand what your program is doing by understanding what it cannot be doing, namely violating its type constraints.

But now IF the type-declaration language becomes highly advanced, and thus complex and difficult to understand, that potentially makes your program more DIFFICULT to understand.

So it's good to keep the purpose of type-systems in mind while thinking about their benefits. We declare types only so that we and others can better understand what our programs are doing.

Of course if the program is small, it is typically easy to understand and may be even easier to understand without them.


> So it's good to keep the purpose of type-systems in mind while thinking about their benefits. We declare types only so that we and others can better understand what our programs are doing.

I don't agree with this. Part of the benefit of static types is to make it easier for programmers to reason about programs. But part is also to make it easier for compilers to reason about programs. By encoding more invariants in the type system it is possible to turn a larger class of bugs into compiler errors.


I would even go as far as arguing that in languages with manual memory management like C, types are primarily a means to tell the compiler how to treat & manipulate a given piece of memory. Type safety and making it easier for us to reason about code can be a side effect of that but it's not a given.


You have to distinguish between "understanding" and "reasoning". Understanding means "understanding what ... does", whereas reasoning (in "functional" jargon) means (more like) "proving what ... does".

While it's quite easy to understand what e.g. `c = a + b;` in C does, reasoning about it is comparably hard, as there can be UB (overflow of a signed integer type) involved.

In the extreme, when you have an actual proof (in a dependently typed language), you only have to understand it, but no need to reason about it as the proof is already there ;)

[insert some dependently typed vector example here :D]


Algebraic effects are not basically the same as Common Lisp conditions.


I hadn't heard of Common Lisp Conditions before reading this thread, but I looked them up in response to it, and I have to ask -- what is the fundamental difference?

Granted Common Lisp is not statically typed and the type system support for effects is a major part of the utility of algebraic effects in Koka, but otherwise they do look remarkably similar to me, both offering a form of delimited continuation that allows you to write code that can call out to a non-local handler that is set up the call stack, and which can itself return control to the caller.


> what is the fundamental difference

CL's condition system is one of the (possible) applications of an algebraic effect system (implemented using delimited continuations). Schemes have dynamically typed (delimited) continuations, for example.


But going in the opposite direction, could you not also implement arbitrary effects using conditions? (Or delimited continuations in schemes)


I am not sure about conditions (I don't know enough about CL), but the effect system of Koka (and OCaml) uses delimited continuations. So it is doable in Scheme, with shift/reset or prompt/control or whatever primitives are defined.


You don't need to know category theory to use a statically typed functional language. It just happens that people interested in research languages also happen to be mathematically inclined. You also don't need to be a type astronaut to use Haskell; you can write perfectly simple and pragmatic code in it.

I mean people cope with the horrible complexity of TypeScript just fine and that system doesn't even give you the benefit of being sound.


> that system doesn't even give you the benefit of being sound

I have hated parts of the TS type system with passion but this is the first time I'm hearing about unsoundness. Would you mind elaborating?


https://www.typescriptlang.org/docs/handbook/type-compatibil...

For example: https://www.typescriptlang.org/docs/handbook/type-compatibil...

https://www.typescriptlang.org/docs/handbook/type-compatibil...

Or the type of an array and its elements:

    const arr: number[] = [1]
but

    const this_is_actually_undefined: number = arr[456]
so it should be

    const correct_type: number | undefined = arr[idx]
And there are way more, I'm too lazy to search for them or think about them.


Ah, yes, I've stumbled over plenty of such cases – TIL that "unsoundness" is an established term for them.

Thanks so much!


For those who've used effectful languages, how much of a spaghetti code mess do they create, potentially? That's basically my only worry with effects that can be suspended and resumed at basically any point and any location in the codebase.


Effects can pile up for sure, but with Koka's effect polymorphism it is actually quite a bit less than I first expected (I've been contributing to Koka recently, and writing a lot of Koka code). Type aliases for effects that are commonly used together also help with typing. With the strong typing discipline and good type inference it is actually quite easy to make your code a lot cleaner than in other languages, in my opinion. You don't have as much boilerplate, such as that introduced by monad transformers in Haskell, and since everything is purely functional it is a lot easier to reason about than other languages.
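
For example (untested sketch, alias name made up), a row of effects you use everywhere can get one short name:

    alias app = <console,exn,div>

    fun step( i : int ) : app int
      println("step " ++ i.show)
      if i < 0 then throw("negative") else i + 1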


Preemptive multitasking is an effect handler implemented by the operating system. There are very few instances where you have to worry about process switching by the OS, basically only when you have a handle to a resource outside of your process.

A similar constraint will apply to algebraic effects, which is why they're algebraic: only if your code directly depends on some effect will you have to care about effects.


I don't quite understand, let's say I have a bunch of effects in my code, all depending on one another. Will that make a mess? If so, how does one avoid that, or is that an inherent issue in algebraic effects systems?


> I don't quite understand, let's say I have a bunch of effects in my code, all depending on one another. Will that make a mess?

It depends entirely on what those effects and dependencies actually are. If you really want to, you can use an effect system as a dynamically scoped imperative language, just like you can write all your Haskell code in `IO` or use exceptions for control-flow in C++. The value proposition is that you have to do it on purpose.


I've used Rx style libraries in static languages but I've found that it created a mess over time, even with their intended idiomatic usage, so I was just wondering if it were the same for effects, even if we use them in static languages how effects would be intended to be used there.


> even if we use them in static languages how effects would be intended to be used there.

There's not much that can be said about how effects, in general, are intended to be used. Algebraic effects subsume basically every control flow construct in common use: exceptions, transactional semantics, cooperative multitasking, search, probabilistic programming, IO... non-delimited continuations are the only exception I can think of. Unless you only ever write provably-total pure functions, you're always using effects of some sort. And even then, you might want to think of your program as effectful even when you don't have to: reading `a -> Maybe b` as "`a -> b` with the possibility of failure" is a classic example.


Thanks, I think I'll have to actually do some coding with effects (as I haven't really used them) to understand exactly how they work and their pros and cons. Any thoughts on which language to start off with, is Koka good as a beginner effects language, or is another one like OCaml 5 recommended?


I wouldn't worry about language-level support at the moment: there aren't, as far as I know, any mature implementations yet, and even if there were, libraries are usually better for learning purposes.

Personally I would recommend Haskell. Native row types are coming "eventually" and always will be, so there's some ceremony involved, but it's the ecosystem with by far the best alternative solutions to the problems algebraic effects are supposed to deal with: idiomatic javascript can make any programming discipline look good, whereas it'll be much more illuminating to compare algebraic effects to a more traditional `mtl`-style approach.


Your question just isn't answerable. It's like asking if lambdas create a mess over time because you can now write in continuation passing style, or if having mutation in a language creates a mess over time because you can create globally mutable variables. How would you answer those questions?


Well, some are messier than others, of course. If I use gotos all over my code, it'll be messier than writing everything with functions, much less pure functions.


You mean a situation where the effect handler for effect A invokes effect B? I wonder if the compiler is able to detect and prevent "circular" effect invocations.


Dot Selection looks neat, no pipeline operator needed. Enables extension functions a la Kotlin too.
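
For example (sketch), `x.f(y)` is just sugar for `f(x, y)`:

    fun double( x : int ) : int
      x * 2

    fun demo() : console ()
      [1,2,3].map(double).show.println  // i.e. println(show(map([1,2,3], double)))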


Also known as Uniform Function Call Syntax (UFCS).


Such a thought-provoking language. Docs are a good read, and reach a standard that few production languages come close to.


Agreed, Koka has great documentation, although I will acknowledge that it is incomplete. The papers are surprisingly approachable, with good user-applicable examples and motivation before they get too deep into type theory etc. I recommend reading a few of the more recent ones, which have mind-blowing implications.

https://www.microsoft.com/en-us/research/publication/fiptree... https://www.microsoft.com/en-us/research/publication/tail-re... https://www.microsoft.com/en-us/research/publication/fp2-ful...


Amazing stuff -- thank you for sharing these papers.

The implications for things like data science frameworks and spreadsheets are particularly interesting. Perhaps we are within reach of a much more equational approach to programming with high performance.


What I don't like is that the effect handler is decoupled from the effect generator.

In the code snippet, they define a "yield", and somehow this magically works with the call after that of `traverse`. What happens when you are deeper in a call stack and you accidentally or on purpose have to redefine yield? What if you want to use a generator in a generator?

Also, the `fun yield( x : a ) : ()` syntax is highly unintuitive. It _takes_ an int and returns empty? So it's just a continuation? And magically it transfers control flow to the correct place? How does it know where that is?

You also have to define the name "yield" thrice, once as the "effect declaration", and then twice as a function, once before the call to traverse, and once inside the effect declaration.

These are questions you don't even need to worry about when the handler is a continuation (function) that you pass in as an argument.

There's also a problem that koka seems to ignore, that for resumable coroutines of execution, you really want 2 kinds of resumptions. One that overwrites the `ret` address of the coroutine, and one that does not. The former is useful for generators like `traverse`, where you want to return to just after the latest call site, while the latter is useful for things like `throw`, where you always want to return to the first call site.

I don't know guys. These "algebraic" effects just seem like really bad syntax sugar over delimited continuations.


> In the code snippet, they define a "yield", and somehow this magically works

With `with` you explicitly set the effect handler, no magic involved.

> Also, the `fun yield( x : a ) : ()` syntax is highly unintuitive. It _takes_ an int and returns empty? So it's just a continuation?

A function that does not return anything is (the poster child of) an impure function: it does nothing but perform a (side) effect. That's why you need to call it as an effect and not a "normal" (pure) function.

> And magically it transfers control flow to the correct place? How does it know where that is?

It doesn't have to. The hidden `run` function (or whatever you want to call it) of the effect system, which actually calls the right effect handler, has to know about that.

The run function does something like (simplified)

    - see that an effect handler with ID 'yield' is called
    - call the (currently registered) handler for 'yield' with the given arguments
    - call the continuation with the result of the handler


The `run` function is actually quite optimized in Koka. At the point of the call to an effect function there is an O(1) lookup, due to this paper (https://www.microsoft.com/en-us/research/publication/general...).

After that, it depends on whether a continuation actually needs to be captured or not. Many useful effects can be implemented as `val` or `fun` clauses, which can be called / accessed in place without any unwinding of the stack. Only in cases where a continuation is required is the continuation built up.


Thanks for the link! Koka (and Lean too) has produced quite some interesting papers.


The first yield is just the effect name. There is a shorthand syntax for effects with only one function, such that you only define the name once:

    effect ctl yield( i : int ) : bool

See bottom of this section https://koka-lang.github.io/koka/doc/book.html#sec-handling.

When handling the effect you don't 'define' the name, you implement it. Of course you have to mention the name then, just like with e.g. interface implementations.
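
For reference, the book's full example with that shorthand looks roughly like this (lightly adapted, untested):

    effect ctl yield( i : int ) : bool

    fun traverse( xs : list<int> ) : yield ()
      match xs
        Cons(x,xx) -> if yield(x) then traverse(xx)
        Nil        -> ()

    fun print-elems() : console ()
      with ctl yield(i)
        println("yielded " ++ i.show)
        resume(i <= 2)  // the handler decides whether traversal continues
      traverse([1,2,3,4])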


I'm not 100% sure what you mean with your second point. There are several kinds of handler types: `fun`, `ctl`, `final ctl`, `raw ctl`. The implementor of the handlers is restricted to what these handler types allow. You can implement both generator traversal and exception throwing in this system.
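
As I understand it, the two resumption disciplines the parent asks about map onto clause kinds roughly like this (untested sketch, names made up):

    effect ctl ask() : int
    effect ctl oops() : a

    fun demo( action : () -> <ask,oops|e> int ) : e int
      with ctl ask()
        resume(42)  // resumable: control returns to just after the ask() call
      with final ctl oops()
        0           // final ctl never resumes: unwind to here, like an exception
      action()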


What are the practical advantages of Koka over say “IO a” or “Async<‘t>” in other languages?


Monads don’t compose, effects do. ‘IO a’ works great until you need to add another effect, for example ‘Maybe’. Then you need to bring in monad transformers and create your own monad combining the two, then rewrite all your code to lift the effects from one monad to the other. And you have to do this every time you want to add a new effect.
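
A sketch of what that composability looks like in Koka (untested, effect names made up): one function freely mixes two effects, and the handlers are simply nested at the call site, with no transformer stack and no lifting.

    effect ctl raise( msg : string ) : a
    effect fun emit( msg : string ) : ()

    fun check( i : int ) : <raise,emit> int
      emit("checking " ++ i.show)
      if i < 0 then raise("negative") else i

    fun demo() : console ()
      with fun emit(msg) println(msg)
      with ctl raise(msg) println("error: " ++ msg)
      println(check(-1).show)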


Not to mention you need monadic and nonmonadic versions of every higher order function (or so it feels like) - map / mapM, fold / foldM, etc.

This is even worse in Rust, which requires you to have a separate implementation for every effect as well (since it doesn't have higher kinded types)


> Not to mention you need monadic and nonmonadic versions of every higher order function (or so it feels like) - map / mapM

This is more a weakness of Haskell's standard library (which is, despite its reputation, not very friendly to category theory) than an inherent problem with monads. A more general `map` would look something like this:

    class Category c => FunctorOf c f where
      map :: c a b -> c (f a) (f b)

    fmap :: FunctorOf (->) f => (a -> b) -> f a -> f b
    fmap = map

    type Kleisli m a b = a -> m b
    mapM :: (Monad m, FunctorOf (Kleisli m) f) => (a -> m b) -> f a -> m (f b)
    mapM = map

    type Op c a b = c b a
    contramap :: FunctorOf (Op (->)) f => (a -> b) -> f b -> f a
    contramap = map

    type Product c d (a,x) (b,y) = (c a b, d x y)
    bimap :: FunctorOf (Product (->) (->)) f => (a -> b, x -> y) -> f (a, x) -> f (b, y)
    bimap = map

    -- and so on 
but this would require somewhat better type-level programming support for the ergonomics to work out.


Monads need to wrap each other, effects are more composable


> Monads need to wrap each other, effects are more composable

It's really trickier than algebraic effects make it seem, though. Haskell-ish "monad transformers" as a stack of wrappers pick a concrete ordering of effects in advance (e.g. there's a difference between `State<S, Result<E, T>>` and `Result<E, State<S, T>>`, using Rust syntax), but effect systems like the one in Koka either have to make the same decision via the specific order of interpreters, or stick to a single possible ordering, e.g. using one, more powerful monad. And then there are questions around higher-order effects - that is, effects with operations that take effectful arguments - because they have to be able to "weave" other effects through themselves while preserving their behaviour, and this weaving seems to depend on the concrete choice of effects, thus not being easily composable. In a sense, languages like Koka or Unison have to be restricted in some way, giving up on some types of effects. I'm not saying that's a bad thing though; it's still an improvement over having a single effect (IO) or no effects at all.


Being able to change the ordering of effects on the fly is a benefit of algebraic-effect systems. As you mentioned, `State<S, Result<E, T>>` and `Result<E, State<S, T>>` have very different effects. Algebraic effects let you switch between the two behaviors when you run the effects, whereas with monad transformers you have to refactor all your code to use `State<S, Result<E, T>>` instead of `Result<E, State<S, T>>` or vice versa.


You can recover the ability to reorder effects by using MTL-style type classes, so you could write that as M<T> where M: MonadState<S> + MonadError<E>, in Rust-ish syntax. But that makes the number of trait/typeclass implementations explode (given a trait and a type for each transformer, it's O(N^2)), whereas algebraic effect systems don't really have that issue. I also have a hunch that algebraic effects (or, well, delimited continuations in general) are probably easier to optimize than monad transformers, too.


It's much easier to adapt existing code. There's no need to rewrite code to use monadic binds.


Does anyone here have any experience with Koka? I've been following the language from afar for a while but I have yet to read an account of what it's like to use it in practice.


I think that it is poised for rapid growth this year. I've been using it quite a bit, and once it gains a more comprehensive standard library, I think it will be much more useful in practice. Given the activity on GitHub, I would suggest checking back in a few months for a more out-of-the-box experience (it will still be a new language - but more ready to entertain serious projects). However, I would give it a try now if it looks interesting to you, install the VScode extension, look through some samples and leave some feedback on github.

As a side note, it has pretty great C interop, so you should be able to use C libraries or your own C code, for missing functionality. Documentation on this feature is unfortunately lacking right now though.


I've tried using it a bit. It works and it is fun to use, but there are quite a number of rough edges. I wouldn't use it in production yet; I don't think they're being overly conservative there.


Last time I tried to use it (which was quite long ago), it barely had a print function, making it pretty much unusable.


It has had `print(ln)` for some time now (it was there when I first tried it).


It is self-proclaimed as a "research language" on the homepage, which has kept me away so far.


I share this sentiment, although I have wondered if they're being overly conservative with this statement.


I would say that research language is an appropriate description currently. For example it has very little standard library. Though I would say that it is well positioned to take off in the next year with several important improvements coming soon.


Microsoft really love those angle brackets, don't they?


What is the equivalent of the Haskell StateT monad in Koka?


I see now, they have mutable variables, within a lexical scope anyway.
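
E.g. (my own untested sketch) a local `var` behaves like scoped state and stays invisible in the function's type:

    fun sum-list( xs : list<int> ) : int
      var total := 0
      xs.foreach fn(x)
        total := total + x
      total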


does this support WASM?


yes, use `--target=wasm`


My understanding is that wasm doesn't allow for stack switching yet. How does Koka implement effects in wasm? Heap-allocated stack frames?


Koka currently uses the same approach that it does for compiling to C. It essentially does a very efficient low-level monadic translation of a delimited control monad. So basically it all just ends up as a tiny bit of thread-local state, and building up continuations on returns from functions if they are yielding.

See this paper for details, it gives a good high level overview before diving into the nitty gritty: https://www.microsoft.com/en-us/research/publication/general...


Thanks, I'm quite interested in this, so I'll certainly give the paper a read!



Thanks! Will take a look. The question is of course how emscripten implements longjmp.



The talk in office restrooms about using Koka gets a new twist.



