The title is a little inflammatory. The critique is specifically about OCaml's handling of let-bindings. AFAICT OP thinks the syntax sucks because:
1. there’s no marker to indicate the end of let scopes
2. functions are bound with the same syntax as constants
He asserts that this is confusing. In practice - for the many issues I have with OCaml! - neither of these is an actual issue, in my experience, once code formatting is applied.
An actual serious problem with OCaml's syntax is that matches don't have a terminator, leading people to mess up nested matches frequently. Pair that with the parser's poor error reporting/recovery and things can become unpleasant quickly.
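A rough sketch of the pitfall, with a made-up `describe` function (this is rejected by the compiler, but the error points at the wrong conceptual place):

let describe opt =
  match opt with
  | Some x ->
      match x with          (* no parentheses or begin/end around the inner match *)
      | 0 -> "zero"
      | _ -> "nonzero"
  | None -> "nothing"       (* parsed as a case of the INNER match over x : int,
                               so it typically surfaces as a confusing type error *)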
ReScript is better described as a descendant (and fork) of ReasonML aimed at fitting into the JS ecosystem. ReScript prioritizes interoperability with existing JS code and APIs over interop with existing OCaml code, whereas js_of_ocaml takes the opposite approach. So people looking for an improved version of JavaScript or TypeScript should probably choose ReScript, while people porting an existing OCaml program might prefer js_of_ocaml.
Worth noting that the former lead dev of ReScript left to create a WASM-first language called Moonbit (https://www.moonbitlang.com/). The language design is awesome and corrects a lot of the corners that they just couldn't get around in OCaml. A touch early, but really well done.
I love the idea of Reason/ReScript. I hope they can figure out a way to work together and join forces somehow. The contributions to both repos seem to have faded over the last few years, but maybe that's because the projects have stabilized, I don't know.
I had lots of fun playing with Reason a few years ago. I created an interactive real-time visualization tool, to watch a regexp transform into an NFA, then a DFA, then a minimal DFA graph: http://compiler.org/reason-re-nfa/src/index.html
It only works for basic regexes though.
I can’t speak for Reason, but the ReScript project and community are very alive and vibrant. There have been some major improvements over the past year, and overall it’s much more appealing and mature now compared with only a few years ago. We’ve been using it in a fairly large React app for a while and the experience has been very good.
It's not more reasonable. It's explicitly designed to look like JS, which is, well, bad. Some of the choices are extremely annoying and overall useless, like reintroducing parentheses for function arguments.
There is a reason - pun intended - it didn't take off at all with the community.
I was coming to say exactly this. I use OCaml a lot and never had a problem with lets. On the other hand, nested matches, where you must either use parentheses or begin…end, can be confusing for beginners and stay annoying forever.
I think you’re being generous. The example the author gave is awful because any language can be made illegible if you cram complicated expressions with multiple levels of nesting into a single line. I’d say it’s outright flamebait.
n-arity functions are also constants. 0 is not a special case there. Their behaviour won't change depending on the context from which you call them. A large part of FP is that functions _are_ values.
Xah Lee is not exactly known for moderation. If you’re not familiar, browsing the site may be worth your while. It’s full of inflammatory and sometimes bizarre takes, but I’d be hard pressed to argue that it’s ill informed.
> If you’re not familiar, browsing the site may be worth your while
Not inclined to do this though, his site doesn't exactly seem easy to browse either. Maybe a reflection of his bizarre takes?
Do you have some examples of well-informed articles by him? I found one about "syntax vs semantics" that seems as inflammatory and ill-informed as TFA under discussion here. I'm not inclined to give this person much leniency...
> An actual serious problem with OCaml's syntax is that matches don't have a terminator, leading people to mess up nested matches frequently. Pair that with the parser's poor error reporting/recovery and things can become unpleasant quickly.
Yes, it's clearly one of the syntax's weak points. You can surround them with parentheses or begin ... end to make things explicit, but the default is definitely error prone.
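For example (made-up `classify` function, standard workaround): wrapping the inner match in begin ... end keeps the outer match's remaining cases where they belong:

let classify opt =
  match opt with
  | Some x ->
      begin match x with
      | 0 -> "zero"
      | _ -> "nonzero"
      end
  | None -> "nothing"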
I agree an autoformatter alleviates the let decl/expr issue in practice, especially for an experienced user; an autoformatter also fixes nested matches, in my experience.
However, my university has a mandatory class taught in OCaml, which I've TA'd for a few times; this is the _number one_ "the undergrad TA couldn't figure out my syntax error" issue students have.
Totally agree. In particular when I read #2 I was really scratching my head. It's a functional language - the thing on the left of the equals sign is a pattern. Apart from the argument you pass, patterns for functions look similar to patterns for constants precisely because in a functional language everything looks like a function. And everything looks like a function because just about everything is a function.
The lack of a match terminator made me sad when I first saw this in OCaml. So many languages (C, Bourne shell, etc.) have this exact same problem, and it completely sucks in all of them. It's more debilitating in a functional language specifically because matches are more useful than, say, C case statements, so you want to use them much more extensively.
I frequently want to do a pattern match to unpack something and then a further pattern match to unpack further - a nested match is a very intuitive thing to want. Yes you can normally unnest such a match into more complicated matches at the parent level but this is usually much harder for humans to understand.
...and if you had a marker for ending match scopes, you could always reuse it to end let scopes as well if you wanted to, although I've literally never once run into that as a practical problem (and while I haven't written that much OCaml, you'd think that if it were a real issue I would have banged into it at least once, because I found a fair few sharp edges in my time with the language).
It does (so you have esac to end a case statement, in this case), but depending on what combination of line noise (;& vs ;;) you use, you get case fallthrough like C. You have to sort of want to do it (i.e. ;& isn't very common), but it does happen.
> leading people to mess up nested matches frequently.
Very often this leads to typing errors. Otherwise it's caught by the autoformatter (which everyone should be using). But even without that, this is one pitfall every OCaml developer is aware of.
I'm not arguing the syntax is perfect, but I wouldn't say it has a serious problem because of that. Never seen that being a problem.
Keyboards are also extremely error-prone. I can't count the times where I mistyped because of that. After each keystroke, people should type a "confirm" character in order to make sure of what they're doing.
The more I used OCaml, the more I found beauty in the syntax. It’s very ergonomic in many ways:
1. It’s whitespace insensitive, which means I can code something up really messily and the code formatter will automatically fix it up for me.
2. In general there aren’t a ton of punctuation characters that are very common, which is great for typing ergonomics. Don’t get me wrong, there are still a lot of symbols, but I feel compared to some languages such as Rust, they’re used a lot less.
Beyond the syntax, there are a couple of things I really like about the language itself:
1. Due to the way the language is scoped, whenever you encounter a variable you don’t recognize, you simply have to search in the up direction to find its definition, unless it’s explicitly marked as “rec”. This is helpful if you’re browsing code without any IDE tooling, there’s less guessing involved in finding where things are defined. Downside: if the “open” keyword is used to put all of a module’s values in scope, you’re usually gonna have a bad time.
2. The core language is very simple; in general there are three kinds of things that matter: values, types, and modules. All values have a type, and all values and types are defined in modules.
3. It’s very easy to nest let bindings in order to help localize the scope of intermediate values (see the small sketch after this list).
4. It has a very fast compiler with separate compilation. The dev cycle is usually very tight (oftentimes practically instantaneous).
5. Most of the language encourages good practice through sane defaults, but accessing escape hatches to do “dirty” things is very easy to do.
6. The compiler has some restrictions which may seem arcane, such as the value restriction and weak type variables, but they are valuable in preventing you from shooting yourself in the foot, and they enable some other useful features of the language such as local mutation.
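A small sketch of points 1 and 3, with hypothetical names: intermediate values are scoped tightly around the code that needs them, and searching upward from any use finds the definition.

let average xs =
  let sum = List.fold_left ( + ) 0 xs in
  let len = List.length xs in
  sum / len
(* sum and len are not visible outside average; neither leaks into the module *)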
> 2. In general there aren’t a ton of punctuation characters that are very common, which is great for typing ergonomics. Don’t get me wrong, there are still a lot of symbols, but I feel compared to some languages such as Rust, they’re used a lot less.
I've never really seen someone put that into words. I always feel a certain kind of weird when I look at a language with tons of punctuation (TypeScript is a good example).
I think typing ergonomics are overlooked as well. CSS has the most verbose syntax for variables I've had to use regularly, e.g. `var(--primary-color)`, which I find unpleasant to type when experimenting. And I actually like the lack of brackets and commas in OCaml for function calls, e.g. you write `add_numbers 1 2` instead of `add_numbers(1, 2)`. Brackets and commas in particular require you to navigate left/right a lot to add them in the right place while iterating, and they give confusing errors when you get them wrong.
Would be curious if there's work into a programming language that was optimized for minimal typing while iterating.
Tangentially, there is room for optimizing for non-American keyboards.
The accent grave (backtick) you're using in a Markdown-inspired way is utterly annoying to type on keyboards where accents are implemented as dead keys, to be combined with vowels, common on european keyboard layouts. For your example I had to type `, but then look ahead to the a of add_numbers and remember to put in an extra space, so that the ` and the "a" don't combine to an "à".
Also I find it somewhat illogical: the usage of an accent as a character in itself in programming languages is one of my pet peeves. Just being an ASCII character is not a good enough reason to keep using it.
Curly or square brackets, backslashes and other stuff also put you into uncomfortable AltGr or Cmd-Shift territory on some keyboards. American language designers are often blind to these simple ergonomics.
> The accent grave (backtick) you're using in a Markdown-inspired way is utterly annoying to type on keyboards where accents are implemented as dead keys, to be combined with vowels, common on european keyboard layouts. … Also I find it somewhat illogical: The usage of an accent as a character in itself in programming languages is one of my pet peeves. Just being an ASCII character is not reason good enough to keep using it.
I understand that keyboards behave as they behave, and saying that "well they shouldn't" isn't a real answer. But I also think that it's not true that the backtick character is an accent grave, even if it is treated like one, any more than any other pair of Unicode homoglyphs are the same.
> Brackets and commas in particular require you to navigate left/right a lot to add them in the right place while iterating and give confusing errors when you get them wrong.
In other languages, modern IDEs take care of that. For an invocation like `addNumbers(1, 2)`, you would perform code completion after typing “aN”, and the IDE would already fill in matching function arguments from the current scope. The selection (cursor) would be highlighting the first argument, so you can readily replace it. Pressing Tab would move to the next argument. Pressing Enter (or similar) would accept the call expression in the current state and move the cursor behind it. So you only type the actual arguments (if necessary) and otherwise just hit combinations of Tab and Enter, or Ctrl+Space for code completion. You generally don’t type the parentheses or commas yourself.
That being said, I’m in favor of languages with not too much punctuation. But there’s a balance, and too little punctuation can also hurt readability.
I'd be interested in a link about this (can't find one from a quick look), but I'm guessing it's to do with avoiding clashes with names in existing tools/code and backwards compatibility in browsers. In Sass you would just write $primary-color, but if they had copied that syntax exactly, it would likely get confusing for Sass tools whether something is a Sass variable or a CSS variable.
I feel the opposite—the typing ergonomics are better with a lower-punctuation language, but the reading ergonomics are substantially worse.
Punctuation is used in written human languages to provide assistance to our brain-parsers—like road signs that help navigate a sentence. Too much punctuation and it becomes a huge problem because it ceases to be meaningful, but have you ever tried reading a Roman inscription written with no spaces or sentence boundaries?
I think programming language punctuation serves the same role—it visually marks out how the parser is going to navigate the code, allowing the reader to use less mental effort re-parsing code that they initially misparsed. ML-style languages have a simpler syntax than the C family, which means there is some justification for having less assistive punctuation, but I've definitely struggled to navigate OCaml as a result of too little punctuation.
I believe OCaml's syntax does suck, but I don’t think this article gives a compelling argument as to why. Missing an `in` turns a let expression into a top level binding and kicks a syntax error often a long way down the file, making it very hard to identify the cause. The relative precedence of let, if, match, fun, and the semicolon is unintuitive and hard to remember, making me want to add loads of unnecessary and ugly parentheses.
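A rough sketch of that failure mode, with hypothetical definitions (roughly what happens, assuming nothing else is wrong in the file):

let scaled w h =
  let area = w * h        (* the `in` is missing here *)
  area * 2

(* The bound expression greedily swallows the next line (h applied to area),
   the parser then still wants an `in`, and the resulting "Syntax error" is
   reported at the next top-level definition or at the empty last line of the
   file, far from the real mistake. *)
let unrelated = 42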
On the other hand, I like that there's little overloaded syntax, and the meaning of different characters is fairly consistent.
If anything, OCaml's error messages are something that sucks, especially for newcomers. The `Error: Syntax Error` message that points to an empty last line in the file leaves you doing the parser work, in a language that you don't understand.
Depends on what you compare it with, I guess. (I worked in OCaml and Haskell and other 'weird' languages professionally in different jobs for many years.)
Although I'm not a Microsoft fan, their docs for F# are worlds ahead of anything I've seen for OCaml (I know a few OCaml folks are working to fix this).
I've considered trying it for Advent of Code this year.
Curious to know more - do you think you could write a long-form "why" with a detailed experience report? If you have some time, it would be massive to compare it with what the same experience is like today.
I also worked on an OCaml codebase for a roughly similar time frame and found it extremely pleasant. Probably the best time I've spent as a dev, along with working on an Ada project with a great architecture.
What were the main issues you came across? What language would you prefer if you had to start from scratch and had the choice to go with anything else?
Other commenters have already expressed their opinion about the shallowness and inadequacy of the article, so I will touch on a different, more technical point.
The author doesn't understand why OCaml (and Haskell, and many HM-derived languages) have let ... in ... at all, as opposed to having, say, just lambdas. That's because of a feature of Hindley–Milner type systems called let-polymorphism[1].
This is hard to see in OCaml if you are not intimately acquainted with HM type systems, because definitions always use let, but in Haskell it's immediately clear that something more interesting is happening, because regular definitions do not use let.
F#, which has been heavily inspired[2] by OCaml, got rid of the in; rather, it uses indentation to do automatic in insertion[2]. OCaml does not want to be indentation-aware.
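A standard illustration (not from the article) of what let-polymorphism buys you:

(* The let-bound id is generalized to 'a -> 'a, so it can be used
   at two different types in the same body: *)
let ok =
  let id = fun x -> x in
  (id 1, id "one")

(* A lambda-bound parameter stays monomorphic, so the "same" program
   written with a plain lambda is rejected by the type checker:
   let rejected = (fun id -> (id 1, id "one")) (fun x -> x)
*)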
Weak arguments in the article with badly chosen examples.
If one wanted to criticize OCaml syntax, the need for .mli files (with different syntax for function signatures) and the rather clunky module/signature syntax would be better candidates.
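To make the duplication concrete, a minimal sketch with a hypothetical `counter` module:

(* counter.mli: the interface uses `val name : type` *)
val create : unit -> int ref
val bump : int ref -> unit

(* counter.ml: the implementation uses ordinary `let` definitions *)
let create () = ref 0
let bump c = incr c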
I actually rather like the mli files. They're nice to read, with only the documentation and the externally visible symbols. However, the fact that the syntax is so different is a bit annoying.
Sometimes I wrote (haven't written OCaml for some time now..) functions like:
The problem with mli files and counter-argument to “they’re good documentation” is that, with a few visibility annotations (`pub let`, etc.) they could be auto-generated. Then you don’t have to write your entire public interface twice (sans the inferred `let` types, but I already prefer to make those explicit in the `ml` because otherwise you get much more verbose type errors when they don’t align with the `mli`).
I actually like the mli files. It's a separate place to describe the PUBLIC API, and a good place for documentation. Now you don't clutter your code with lots of long comments and docstrings.
This is such a weird complaint that it feels like engagement bait.
"Let x be equal to 4 in x * x" is a natural way to express a concept in English.
The equivalent OCaml code is:
let x = 4 in
x * x
I'm not sure how that qualifies as "sucks"?
In fact, the `in` makes OCaml's scoping rules some of the easiest to follow. This is valid OCaml code:
let x = 4 in
let y = 3 in
let z = 2 in
x * y * z;;  (* the ;; separates the two independent expressions *)
let x = 5 in
let y = 6 in
let z = 7 in
x * y * z
The phrasing of "let x = 4 in <this expression>" makes it explicit where x is defined as 4, vs where it's instead (in a different expression) defined as 5.
I have an easier time understanding scope in OCaml, which explicitly spells out its scoping rules in plain-English syntax, than I do in Rust or JavaScript, which use the deceptively-simple-looking braces but also wild concepts like lifetimes or hoisting.
I've written a lot of OCaml and I always thought `let .. and .. in` indicated that the definitions were mutually recursive and that you could refer to y in x, like in `let rec .. and` or `type .. and`! That's surprising.
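A quick made-up sketch of the difference (standard behavior: `and` bindings are simultaneous, not sequential):

(* sequential: the second binding sees the first *)
let sequential =
  let a = 1 in
  let b = a + 1 in
  a + b

(* simultaneous: rejected with "Unbound value a" (or it silently picks up
   an outer a), because and-bound definitions can't see each other:
   let simultaneous =
     let a = 1 and b = a + 1 in
     a + b
*)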
I never thought about it much before, but after seeing let+ and and+ debut as operators for applicative functors (i.e. parallel application) it sort of clicked for me.
I tried out OCaml last year for solving the AoC puzzles. I managed to solve all of them in earlier years using Haskell or Scala, but this was the first time I gave up. It was not so much the syntax as the schism between the OCaml standard libraries and those from Jane Street that turned me off. Whatever I was searching for on the Internet resulted in a mix of links to either of those libraries. It was confusing and very frustrating.
I get that. Having read Real World OCaml back when it was an O'Reilly book, I had a hard time not thinking Core was the de facto OCaml std lib. And its horrible documentation made me think the OCaml community was dead - or at least brain-dead. It appealed to one type of learner: someone who just needs to see type signatures and has no interest in examples of how it could be used.
After reading OCaml From The Very Beginning, my stance softened. Learning PURE OCaml is a must (if you want to do AoC with it). You may find yourself bringing in Core or Base after you've got a good idea of a solution.
It's definitely a fun language for doing AoC... but it can become extraordinarily frustrating when you hit a wall and all the StackOverflow answers assume Core.
Yup, the thing that people recommending Jane Street libraries to newcomers don't realize is that those libraries are made by and for a well-established team of OCaml developers who have been extensively trained in how to use them at Jane Street. Outside of the warm protective bubble of Jane Street's tribal knowledge, it's a more difficult task to pick up and understand their libraries with their often minimalistic documentation.
I've recently started a mini-project in OCaml, having some Haskell background.
The worst thing so far is that you have to declare and define functions before using them. This results in unimportant utility functions being at the top, and the most important functions being at the bottom of a file.
I haven't had much issue with let bindings, however that might be because my functions are fairly simple for now.
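(For what it's worth, when two functions genuinely need each other, the standard escape hatch is `let rec ... and ...` - a quick sketch:)

let rec is_even n = if n = 0 then true else is_odd (n - 1)
and is_odd n = if n = 0 then false else is_even (n - 1)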
> The worst thing so far is that you have to declare and define functions before using them.
That's an extremely good thing.
Unordered definitions and where are some of Haskell's worst decisions, along with lazy-by-default, overuse of operators, and a community which sadly thinks point-free style is desirable.
Those are the kinds of choices that make OCaml a lot more readable.
I guess it's a matter of taste; for me, reading the generalities and the big picture first is more important than the details. Maybe it's my programming skills, but my files start with some five or ten string manipulation utilities and helpers. I honestly do not even care about their implementation; a name would suffice. Which is why I also like the `where` approach much more than `let ... in`: it allows you to read the general idea, postponing the details until later (if you want to go into them at all).
However, I agree on laziness, operators and point-free.
This is generally a pro for me. I can skip to the bottom of a file and see what the main purpose of it is, knowing that it's the only place that could reference everything else in the file.
It's a strange take that the possibility of writing unformatted code means the syntax sucks.
I've read other articles and there is other weird stuff, like the claim that Java has perfect syntax because you can do so little on one line. Meanwhile, modern "functional" (or "monadic") style Java is a chained mess with ridiculously long lines.
> In the mean time modern "functional" (or "monadic") style java is a chained mess with ridiculously long lines.
I have seen many devs that prefer the Java imperative style over the more functional chained style. Could be an outcome of leetcode-style interviews, or that most CS programs start with C/Python/Java-style languages, but it's not uncommon to see that preference, especially among junior devs.
The funny thing is that real functional languages don't use dotted chains. It's quite a stretch to try to make something functional without adequate syntax support. Java streams really make things more difficult than they should be.
idk mate, his guide to picking prostitutes has a more compelling argument than this article. I mean, almost all the time OCaml users don't write their stuff "nested".
I want to like OCaml, but the tooling isn't great and async operations require a library to work for some reason. I tried F#, but if you want to do async operations there, you have to do them in these even weirder "computation blocks" with this annoying ! syntax. I've found that the best way to write ML-family programs is to let an imperative language handle IO and then write any more mathematically or logically complicated work in ML, but only after you've loaded all of your data.
I do agree that it's good language design, if you can deliver what would be core functionality via a library.
Whether you want to integrate that library into the standard library or not is an independent question of culture and convenience.
(E.g. Python does quite well with its batteries-included approach, but if it had better dependency management, using third-party libraries wouldn't be so bad. That works well in e.g. Rust and Haskell.)
As the other commenter pointed out, this isn't restricted to strongly-typed functional languages.
Clojure has core.async, which implements "goroutines" without any special support from the language. In fact, the `go` macro[1] is a compiler in disguise: transforming code into SSA form then constructing a state machine to deal with the "yield" points of async code. [2]
core.async runs on both Clojure and ClojureScript (i.e. both JVM and JavaScript). So in some sense, ClojureScript had something like Golang's concurrency well before ES6 was published.
That's wildly overselling it. Clojure core.async was completely incapable of supporting the one extremely important innovation that made goroutines powerful: blocking.
No, blocking refers to calling a function that blocks. core.async can't handle that because macros are simply not capable of handling it; you need support from the runtime.
Call a function that blocks in Go and the goroutine will park. Do that in Clojure and the whole thing stalls.
Assuming "function that blocks" means "the carrier thread must wait for the function to return" and "the whole thing" means the carrier thread, then core.async doesn't really have this issue as long as e.g. a virtual thread executor is used.
There is a caveat where Java code using `synchronized` will pin a carrier thread, but this has been addressed in recent versions of Java.[1]
The post I was replying to included explicit mention of ClojureScript, where this does not exist. As it did not for Java for most of core.async's existence. And of course, for virtual threads, that's very much "special support from the language"!
Because JavaScript runs an event loop on a single thread. It's akin to using GOMAXPROCS=1 or -Dclojure.core.async.pool-size=1. There may be semantic differences depending on the JavaScript engine, but in the case of web browsers, the only function that could possibly stall the event loop is window.alert. As for Node.js, one would have to intentionally use synchronous methods (e.g. readFileSync) instead of the default methods, which use non-blocking I/O.
When using core.async in ClojureScript, one could use the <p! macro to "await" a promise. So there is no use for Node.js synchronous methods when using core.async (or using standard JavaScript with async/await).
I would call this "making use of the platform" rather than "special support from the language". The core.async library does not patch or extend the base language, and the base language does not patch or extend the platform it runs on.
Some F# articles are outdated - they predate F# 6, which added the task { } CE that simplifies asynchronous code interacting with .NET's standard library.
I found F# to be more pleasant to work with async than C# (which is already a breeze). It is true that you still have to define 'task' (or 'async' if you want to use Async CEs) but it is generally there for a reason. I don't think it's too much noise:
open System
open System.Threading.Tasks

[<Measure>] type second   // unit of measure assumed by the signature below

let printAfter (s: float<second>) = task {
    let time = TimeSpan.FromSeconds (float s)
    do! Task.Delay time
    printfn $"Hello from F# after {s} seconds"
}
I dislike that there's a kind of sub-syntax specifically for async. I like how C# converts `await` into the necessary calls. In this code I think it would look better to simply have:
let async printAfter (s: float<second>) =
let time = TimeSpan.FromSeconds (float s)
await Task.Delay time
printfn $"Hello from F# after {s} seconds"
and then printAfter is called with `await` as well. I'm sure there's some FP kind of philosophy which prohibits this (code with potential side effects not being properly quarantined), but to me it just results in yet more purpose-specific syntax to have to learn for F#, which is already very heavy on the number of keywords and operators
'async'-annotated methods in C# enable 'await'ing on task-shaped types. It is bespoke and async-specific. There is nothing wrong with it but it's necessary to acknowledge this limitation.
let!, and!, return!, etc. keywords in F# are generic - you can build your own state machines/coroutines with resumable code, you can author completely custom logic with CEs. I'm not sure what led you to believe the opposite. `await Task.Delay` in C# is `do! Task.Delay` in F#. `let! response = http.SendAsync` is for asynchronous calls that return a value rather than unit.
In the same vein, seq is another CE that is more capable than iterator methods with yield return:
let values = seq {
    // yield individual values
    for i in 1..10 -> i
    // yield a range, merged into the sequence
    yield! [11..20] // note the exclamation mark
}
Adding support for this in C# would require explicit compiler changes. CEs are generic and very powerful at building execution blocks with fine control over the behavior, DSLs and more.
> It is bespoke and async-specific. There is nothing wrong with it but it's necessary to acknowledge this limitation.
I would disagree. If you need to have a bespoke set of syntax, then something is not integrated where it should be. The language design should not be such that you are writing things differently, depending on the paradigm that you're handling. That's not something that occurs in every language, so it isn't essential that it exists.
We can acknowledge the differences in a way that alerts the programmer, without forcing the programmer to switch syntaxes back and forth when moving between the paradigms. async/await is one method, Promises another, etc. A different syntax is a much, much higher cognitive load.
F#'s computation expressions are closely related to Haskell's monads + do-notation combo. CEs are both more limited than Haskell's approach to monads (from a type-expressibility perspective) and more expressive than pure monads (from a modeling perspective: they can model a general class of computational structures beyond monads, and CEs share F#'s syntax, with extensible semantics). This notation can be advantageous and clarifying when used in the right places. It has advantages over C#'s async from a flexibility/extensibility perspective and also provides more options for orchestrating more complex control flow across async computations. C#'s approach is more streamlined if you only care about using async according to how Tasks are designed (which still enables a quite broad scope) and don't need the flexibility for other computational patterns.
Simple things like the maybe and either monad are often clearer in this notation. Complex things like alternatives to async (such as CSP derived message passing concurrency), continuations, parser combinators, non-determinism, algebraic effects and dependency tracked incremental computations are naturally modeled with this same machinery, with CE notation being a kind of super helpful DSL builder that makes certain complex computations easier to express in a sequenced manner.
If the custom syntax were only for async you'd have a point, but the general power of the framework makes it the more preferable approach by far, in my opinion.
However, most of the industry has moved away from DSLs. Whilst having a unique language can make certain things more expressive, having something standard makes mistakes less likely and increases the effectiveness of a programmer. Lisp doesn't rule our day to day.
We shoehorn things that feel like DSLs, but are structurally different, into config files and the like, using JSON/YAML/etc. in rough ways, because DSLs introduce a cognitive overhead that doesn't need to be there.
That the shoehorning happens does mean that DSLs are something natural to reach for. You're right there. But the fact that we have moved away, as an industry, indicates that using any kind of DSL is a smell. That there probably is a better way to do it.
Having a core language feature using a DSL, is a smell. It could be done better.
I cannot make sense of this reply. Different languages have different syntax.
Support of asynchronous code and of its composition is central to C#, which is why it does it via async/await and Task<T> (and other Task-shaped types). Many other languages considered this important enough to adopt a similar structure to their own rendition of concurrency primitives, inspired by C# either directly or indirectly. Feel free to take issue with the designers of these languages if you have to.
F#, where async originates from, just happens to be more "powerful" as befits an FP language, where resumable code and CEs enable expressing async in a more generalized fashion. I'm not sold on idea that C# needs CEs. It already has sufficient complexity and good balance of expressiveness.
Different languages have different syntax, but most do not have a separate syntax inside themselves. A function is generally a function. They do adopt various structures - but those are structures, not syntax. I'm not sure you've understood that that was my point.
Do-notation-like 'await' is not for calling functions, it is for acting on their return values - to suspend the execution flow until the task completes.
I've written patches for F#. I do know what the hell I'm talking about.
However, the compiler does not, and never has, required that it do things via a different syntax. In fact, in the early branches before that was adopted, it didn't! The same behaviour was seen in those branches. This behaviour you expect was never something that had to be. It was something chosen to simplify the needs of the optimiser, and in fact it cut the size of the code required in half. It was to reduce the amount of code needed to be maintained by the core team. And so 1087 [1] was accepted.
So perhaps you might need to read more into the process of why and how async was introduced into C# and F#. It was a maintenance-team problem. It was a pragmatic approach for them - not the only way this could have become a possibility.
As said, in the original branch for using tasks...
> Having two different but similar ways of creating asynchronous computations would add some cognitive overhead (even now there are times when I am indecisive between using async or mailbox for certain parallelism/concurrency scenarios). [0]
However, this is where our opinions differ. I like task CE (and taskSeq for that matter too). It serves as a good and performant default. It's great to be able to choose the exact behavior of asynchronous code when task CE does not fit.
I'm already somewhat aware of these considerations; it's just that when you're working in web development, a huge amount of your code is async, and this means that a large part of the code is wrapped up in these computation expressions, which I think are just plain ugly.
You might like Scala. It has much of the good parts of OCaml or F#, but also lets you write imperative code freely when you want. The `for`/`yield` syntax for async is very nice IMO, or you can write Javascript-like promise chaining directly if you want.
Scala is great; nearly all of its constructs were right, and right early. Kotlin/Java catching up before it got popular kinda nixed its growth, and at this point I don't see it being chosen for new projects.
Gleam's a nice one too if you want pure functional - small community though.
Some interesting things happening in the structured concurrency / "Direct style" space. It looks like it could become a powerful and readable way to compose (asyncy type) things. Simpler code, usable stack traces, better traceability, less function colouring concerns.
It's early days in that regard, with some folks doing some really interesting things: Odersky himself / the Ox project.
I 100% agree. The lack of clear scoping and function call grouping syntax just turns it into a word soup. It becomes difficult to parse for humans and I spend a stupid amount of time just getting the semicolons, begin/ends, etc. right.
It's like... when you mismatch brackets or braces in a C-style language, except to resolve the problem you can't just find the bracket that's highlighted in red and count; you have to read an essay.
I don't know why there are so many people here defending it. It's pretty clearly very elegant, but extremely inconvenient.
> Use the LSP integration of your editor which will show you where the error is as you type, so you catch the problem early
Yeah I do have this set up and it's a very good LSP, but unfortunately frequently if you get the nesting wrong the error is like "the entire rest of the file is wrong". It's often not very helpful.
That's not unique to OCaml. C style syntaxes can give "everything after this point is an error" if you get brackets wrong, but it's much easier to figure out what you did wrong.
Another thing I personally hate about OCaml's syntax is the order of generic parameters being backwards. I know why it's like that, but it still doesn't make any sense to me both as a programmer and as a non-native english speaker.
type 'a Tree =
| Node of 'a Tree * 'a Tree
| Leaf of 'a
This is saying "The possible cases for the type describing an 'a Tree are: a Node of two other 'a subtrees, or else a Leaf of an 'a."
Then when you concrete-ize it, you have an "int Tree" or a "string Tree".
This is a little clearer with thinking of the List datatype: this is a number list, that's a string list, that's a grocery list, this is a guest list...etc.
One nice thing in F# is that there's an ability to "standardize" how you write generics to look more like the (admittedly arbitrary) way that C# or Java or C++ write them:
type Tree<'a> =
| Node of Tree<'a> * Tree<'a>
| Leaf of 'a
> This is a little clearer with thinking of the List datatype: this is a number list, that's a string list, that's a grocery list, this is a guest list...etc.
That's what I was referring to when I said "as a non-native english speaker". In my native language I would say those in a way that's more similar to "a list of numbers", etc etc.
> One nice thing in F# is that there's an ability to "standardize" how you write generics to look more like the (admittedly arbitrary) way that C# or Java or C++ write them:
Arguably this order (type then generic arguments) is not that arbitrary because this is effectively a type level function, so it makes sense that it uses the same order. However the angle brackets are indeed arbitrary, and IMO to be more consistent the same syntax as function application should be used. For practical purposes though this may make a type's syntax too similar to an expression, so it may not be the best choice (unless you're in a language where types are actually values)
Would you mind expanding on why do you prefer it? I'm not too focused on e.g. not using angle brackets, which is pretty common in the C family (though funnily enough they are not used in C), but rather on the order with type arguments being before the generic type, for example the type of lists of integers being `int list` rather than `list int`.
My mother tongue has the reverse order to English grammar too, but I've spoken English long enough for that not to matter. If we're talking about, say, consistency with function syntax, then to me a function is a verb and a type is a noun, hence still consistent.
I mentioned the typing experience because I hate to have to 1. open and close braces 2. press shift while typing
[2] is also the reason I prefer generics in Python and Scala over those of Java and C#.
It doesn't. They are useful for the REPL only, to tell it that a statement is over. I prefer this over what Python does, which is to consider that an empty line is the end of a statement. Copy-pasting functions from a file to the REPL can be a bit tedious because of this. In OCaml, all I have to do is to add ;;<enter>, and I'm good to go.
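A toplevel session looks roughly like this - the ;; just tells the REPL that the phrase is finished:

# let square x = x * x;;
val square : int -> int = <fun>
# square 7;;
- : int = 49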
There is a kind of "do notation" in OCaml with binding operators [1] (let*) for monads and (let+) for applicatives that is actually quite pleasant in practice.
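A minimal sketch with the option monad (the `let*` definition below is the usual one, not taken from the linked post):

(* define the binding operator once... *)
let ( let* ) opt f = Option.bind opt f

(* ...and chained optional lookups read almost like straight-line code *)
let add_heads xs ys =
  let* x = List.nth_opt xs 0 in
  let* y = List.nth_opt ys 0 in
  Some (x + y)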
As a matter of policy, can HN please not accept submissions with non-https URLs?
It's a checkbox at most web hosts, built into many reverse proxies, etc. There's no excuse for not offering https, particularly since it places users at risk if, at any point along the path between them and you, there's someone untrustworthy.
One would argue that if you're that much at risk and still find time to click non-https blog posts, the issue is in front of the screen, not in the connection.