Elixir has a nice take on this with the `with` keyword/macro
with {:ok, file_handle} <- File.open(filename),
     {:ok, contents} <- IO.read(file_handle),
     {:ok, parsed} <- MyModule.parse(contents) do
  {:ok, parsed}
end
What this does is run the expressions in order, top to bottom; if the return value of an expression doesn't match the pattern on its left, the `with` returns early with the value that didn't match, otherwise it continues to the next clause.
This means you don't need to write each function with one clause that takes `{:ok, value}` and another that takes `{:error, reason}`; you can write your functions to take just the value they care about and let the pattern matching in the `with` block take care of error propagation.
So if `File.open` returns `{:error, reason}`, then `IO.read` never executes and the result of the whole `with` is `{:error, reason}`.
It essentially means you can program the happy path and let the caller match on the sad paths (if they want to)
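For comparison, Rust's `?` operator gives the same early-return-on-mismatch behavior; a minimal sketch (the pipeline and function names are hypothetical):

```rust
use std::num::ParseIntError;

// Hypothetical pipeline: each step returns a Result, and `?` returns
// early with the first Err, like a failed pattern match in `with`.
fn parse_and_double(input: &str) -> Result<i64, ParseIntError> {
    let n: i64 = input.trim().parse()?; // an Err short-circuits here
    Ok(n * 2)
}

fn main() {
    assert_eq!(parse_and_double(" 21 "), Ok(42));
    assert!(parse_and_double("nope").is_err());
}
```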
What if the mismatching values are all of different types? This is dynamically typed, so do you have to inspect the type of the returned value to find out what went wrong, and guess which failing expression it came from?
It's like halfway to reinventing exception handling.
with pat1 <- expr1,
     pat2 <- expr2,
     ...
do
  happy case  # all matched
catch  # various unhappy patterns matched against the mismatching expr
  pat1 -> ...
  pat2 -> ...
  ...
end
Just call it "else" or something instead of "catch" and then it doesn't look like exception handling.
Well, this isn't the only way to do things. There are exceptions in Elixir, and you can catch them if you want. But most functions that can fail come in two versions:
File.read() returns {:ok, contents} or {:error, reason}
So you can pattern match on the result.
File.read!() returns contents or raises an exception.
The first one lets you use errors as values and handle the problem at the source. The latter assumes it's going to be successful and either makes you catch the exception or, more likely in Elixir, lets the process crash.
The with statement is a good fit for certain domains where otherwise you might have a bunch of nested cases.
If you are worried about it returning a different type, you can normalize it in the `else` block:
with {:ok, bar} <- func(foo) do
  bar
else
  value -> {:failed, value}
end
So you can make your failed cases more homogeneous to pattern match on. You can even pattern match on the different failed cases if you want to ensure more homogeneity.
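A rough Rust equivalent of that normalization (the steps and error type here are hypothetical): map each step's error into one shared enum, so the failure cases are homogeneous to match on.

```rust
// Hypothetical unified error type for a two-step pipeline.
#[derive(Debug, PartialEq)]
enum PipelineError {
    BadNumber,
    TooSmall,
}

fn parse_step(s: &str) -> Result<i32, PipelineError> {
    // map_err converts the parser's own error into our homogeneous type
    s.parse::<i32>().map_err(|_| PipelineError::BadNumber)
}

fn check_step(n: i32) -> Result<i32, PipelineError> {
    if n >= 10 { Ok(n) } else { Err(PipelineError::TooSmall) }
}

fn pipeline(s: &str) -> Result<i32, PipelineError> {
    // `?` propagates the already-normalized error
    check_step(parse_step(s)?)
}

fn main() {
    assert_eq!(pipeline("42"), Ok(42));
    assert_eq!(pipeline("x"), Err(PipelineError::BadNumber));
    assert_eq!(pipeline("3"), Err(PipelineError::TooSmall));
}
```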
If using with doesn't make sense for the domain there are other constructs in the language that will
The file handle is the PID of the process that opened the file; it monitors the process that asked for the file to be opened, and if that process goes down, the file will be closed.
You can also add an `else` to the `with`. One catch: variables bound in the `with` clauses (like `fh`) aren't visible inside `else`, so open the file first if you want to close it on failure. It looks like

{:ok, fh} = File.open(filename)

with {:ok, contents} <- IO.read(fh) do
  File.close(fh)
  contents
else
  {:error, reason} ->
    File.close(fh)
    {:error, reason} # this is returned after the file has been closed
end
Or maybe you would prefer to open the file elsewhere, pass the handle into the `with`, and close the file either way. I just used IO as an example because those functions return nice `{:ok, x}` or `{:error, reason}` tuples, but this works with any pattern.
The benefit over an exception is that, as the caller, I can pattern match on the result. So I could use this function inside a `case`:
result =
  case with_example(filename) do
    {:ok, result} ->
      result

    {:error, :some_reason} ->
      # this is recoverable: log the issue, then do something else
      recover!(filename) # a bang (!) indicates the function can fail and raise

    {:error, :another_reason} ->
      # this is unrecoverable: log the issue
      raise "unrecoverable error"

    _ ->
      # any other case we don't know about
      raise "unexpected issue"
  end
Or I might not care and want it to crash if it doesn't match:

{:ok, parsed} = with_example(filename) # raises a MatchError if {:error, reason} is returned
Exception-based error handling is so bad and unsafe that functional error handling with Either, Try, etc. (as implemented by functional add-on libraries for many languages), while not yet common, will in time become the new default even in OO languages, just like it has been the default in functional languages for decades.
Functional error handling types are much simpler, safer and more powerful.
Simpler because they don't rely on dedicated syntax: they're just regular objects, no different from any other object.
Safer because, unlike exceptions, they force callers to handle all potential outcomes, but no more (no risk of ignoring errors, and no risk of catching a higher level of error than desired, both ubiquitous bugs in exception-based error handling).
Powerful because they support map, flatMap, applicative, etc., making it easy to e.g. chain multiple computations together in desired ways, which is unwieldy and bug-prone when using exceptions.
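A small Rust sketch of that chaining (the `half` step is a made-up example): `map` transforms only the success value, `and_then` (flatMap) sequences fallible steps, and the first error short-circuits the rest.

```rust
// Hypothetical fallible step: halving only works on even numbers.
fn half(n: i32) -> Result<i32, String> {
    if n % 2 == 0 { Ok(n / 2) } else { Err(format!("{n} is odd")) }
}

fn main() {
    // Each combinator runs only if the previous step succeeded.
    let chained = half(8)
        .and_then(half)   // flatMap: feeds the Ok value into the next step
        .map(|n| n + 1);  // map: transforms the Ok value only
    assert_eq!(chained, Ok(3));

    // The first Err short-circuits; map never runs.
    let failed = half(6).and_then(half).map(|n| n + 1);
    assert_eq!(failed, Err("3 is odd".to_string()));
}
```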
> What is wrong about dedicated syntax
It adds complexity to the language! It could be that, when learning Java, Kotlin and any other language, we learn that methods return what they say they do... and that's that. No weird dedicated syntax and magic, special treatment for returning anything other than the happy path, and the HUGE complexity that comes with it, eg the dedicated syntax itself and how it behaves, differences between checked and unchecked exceptions, hierarchies of exceptions etc etc.
> Exceptions are easier
But that's the point, they're not.
Exception-based error handling is unnecessary, hugely complex, doesn't compose at all, and obfuscates or straight up hides what can go wrong with any given call, so it leads to countless trivially preventable bugs... I could go on. And after decades of use, there's still no consensus about what exceptions should be or how they should be used. Exceptions are a failed experiment, and I have no doubt that in ten years, Java, Kotlin and many other languages will acknowledge as much and move away from them, the same way Joda-Time outcompeted and replaced the horrible Java date and time library.
Option and Result types, as implemented today in mainstream languages (ie. mostly anemically), are not the answer to exceptions being a mess.
Exceptions have a lot of additional functionality in larger ecosystems such as:
- Backtraces, ie. showing the exact path of the error from its source to wherever it was handled, in a zero-cost way. This is by far the most important aspect of exceptions, as it enables automatically analysing and aggregating them in large systems, to eg. attribute blame from changes in error metrics to individual commits.
- Nested exceptions ie. converting from one error system to another without losing information. Extensible with arbitrary metadata.
- An open and extensible error type hierarchy. Again, necessary in large scale systems to differentiate between eg. the cause (caller fault, callee fault aka HTTP 400/500 divide), retryable or permanently fatal, loggable etc. exceptions while also maintaining API/ABI backward/forward compatibility.
(for some of these, eg. Rust has crates for a Result-y equivalent, but a community consensus does not exist, yet...)
General-purpose exceptions simply are complicated, and any system trying to "re-invent" them will eventually run into the same problems. Over-simplifying error handling just results in less maintainable, debuggable and reliable systems.
This isn't a binary choice. In Scala, you can use Throwable or Exception as your error type with Either:
Either[Throwable, Option[Foobar]]
The type Try[T] is essentially Either[Throwable, T]
Either[Throwable, T], Try, as well as IO from Cats Effect give you the stack traces that you expect from conventional Java style, with the superior option of programming in the monadic / "railway" style. Try also interfaces nicely with Java libraries: val result: Try[Foobar] = Try(javaFunction).
Don't agree with a single thing, especially not with the characterization that functional error handling is some kind of attempt at reinventing exceptions. But yeah, it's clear my and your camp will never agree lol. Fortunately for you, so far, your camp has mostly won, at least in the "object oriented" languages. But I think that's rapidly changing.
I am not in any sort of "camp", in fact I prefer using a mostly functional style. The above comment was based on experience working in large (~100M LoC) code bases.
As the comment clearly indicates, it is about anemic/"naive" functional error handling not being the counterpoint to general-purpose exceptions, not functional error handling vs. exceptions in general.
I do mostly prefer error handling being explicitly marked at every call site (ie. the functional style), but note that this is not always meaningfully possible in very large systems (at least beyond the notion of "I do not know exactly what errors are possible here, just propagate whatever happens" which is equivalent to regular exception handling)
And, as I already mentioned in the original, Rust does have functional solutions to some of these problems, and as other comments indicate, eg. Scala has them as well (probably even theoretically better since it can be a strict superset of the existing zero-cost exception model in the JVM).
The backtrace argument is good, but I wonder how valuable traces would be in a world that never experienced reads-of-nothing (npe, reading from undefined, reading out of bounds array, etc). Presumably this would be because of 100% use of ADTs, or maybe some other mechanism; but, even Haskell throws exceptions out of `IO a` so such a world might never be realized.
Exceptions give you a nice stack trace, and you can configure your debugger to trap when the exceptional case occurs, rather than having to reverse-engineer what happened when you see the functional error value. This can all be fixed, but right now all implementations of functional error handling (via Maybe and such) are very painful to debug.
I'm happy with exceptions in high-level code, where like every line is possibly erroring out cause of a bad RPC, DB query, or even just bad integer math (div by 0 etc). If I'm writing a web backend or something, I don't want to manually handle every single error case. I just want to send back HTTP 4xx (or whatever equivalent in other protocols) when I can catch the problem and 5xx when I don't expect it. Probably most people in this situation agree.
This is one of my beefs with Golang. The `err != nil` stuff gets exhausting when it's every other line. I get it if you're writing lower-level systems like you might in Rust or C, but Golang is often used for high-level stuff. Java, JS, etc did the right thing for their target use case. JS actually went more towards exceptions when they added async syntax, and if anything I think Golang is going to be the one to cave eventually (like they did with generics).
And my day job used to use Erlang/OTP. It was a neat language with lots of great design decisions, but overall tedious for the type of feature work we were doing. It makes a lot more sense for things like ejabberd.
> I don't want to manually handle every single error case
I get this all the time from people who are used to exceptions, and it's based on a lack of understanding. If you get an Either<Error, User> from the db, you don't need to handle the errors in every method call in the service layer. You can just call map, and that function will be applied in the happy case; no need to handle the Error case. Then, at the resource layer, you can simply fold the Either into the appropriate response.
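A Rust sketch of that layering (all types and functions here are hypothetical stubs): the service layer maps over the happy case and never mentions the error; only the outermost layer folds it into a response.

```rust
#[derive(Debug, PartialEq)]
struct User { name: String }

#[derive(Debug, PartialEq)]
enum DbError { NotFound, ConnectionLost }

// Pretend db layer (hypothetical stub).
fn get_user(id: u32) -> Result<User, DbError> {
    if id == 1 { Ok(User { name: "alice".into() }) } else { Err(DbError::NotFound) }
}

// Service layer: maps over the happy case, never touches the error.
fn display_name(id: u32) -> Result<String, DbError> {
    get_user(id).map(|u| u.name.to_uppercase())
}

// Resource layer: the one place the error is folded into a status code.
fn http_status(id: u32) -> u16 {
    match display_name(id) {
        Ok(_) => 200,
        Err(DbError::NotFound) => 404,
        Err(DbError::ConnectionLost) => 503,
    }
}

fn main() {
    assert_eq!(http_status(1), 200);
    assert_eq!(http_status(2), 404);
}
```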
As for Go for HTTP REST APIs, agreed that it simply isn't a good tool for that. In fact, I don't think Go is a good tool for anything - for pretty much every use case, there are objectively superior languages, lol. But that's a whole nother can of worms.
In a language without exceptions, anywhere you call something that can return an error, you have to explicitly do something with it, even if you're just returning it early (which you usually are). That's what I meant by handling. Otherwise you're using exceptions, where early-returns are implied unless you catch. In your example, probably getUser is having to handle errors too, albeit with Erlang's rather slick syntax.
I've been on both sides of this, preferring explicit handling when I was writing systems code and preferring exceptions when I was writing web backends, so I understand how to use each. Exceptions are popular in high-level languages because that's actually what people want in those use cases.
I also dislike Golang for plenty of other reasons. "Master of none" language like you implied, and late to the party with nothing special to bring other than pointless syntax differences.
> In a language without exceptions, anywhere you call something that can return an error, you have to explicitly do something with it, even if you're just returning it early (which you usually are)
No? You just have to indicate that it may exist, which is desirable. When you get a user by user id from the db, there may be no such user, or there could be a db connection issue. The function should indicate that, and the compiler should check that you're handling it, not pretend nothing can ever go wrong (which is effectively the case when you're hiding the unhappy path with exceptions). And there is no such thing as "returning early": in functional programming there is usually a single return expression, a chain of IOs or Eithers or whatever that decide themselves where to short-circuit, not one return statement for the error and another for the happy path.
In my example, Either<Error, User> might be what's returned natively by a functional db library that doesn't throw exceptions
but in languages without exceptions, like Haskell, it'd look like this
No exceptions anywhere, and you don't have to do anything with any exceptions. You could just send this IO back up to the http layer of your app and turn it into a 400 or whatever on any error if you don't want to disambiguate between them or handle them.
> You just have to indicate it may exist, which is desirable. When you get user by user id from the db, there may be no such user, or there could be a db connection issue. The function should indicate that and compiler should check that you're handling it, not pretend nothing can ever go wrong. (which is effectively the case when you're hiding the unhappy path with exceptions)
You don't pretend nothing can go wrong. Somewhere up the call stack, you're handling the exception. If nowhere else, whatever webserver lib will send 500 if you didn't catch something.
To give a concrete example from my daily work, we use C++ without exceptions for high-level code for some reason. Because practically every func we write returns an error, the result is a status-propagation macro (along the lines of RETURN_IF_ERROR(DoThis());) on practically every line, in every func down the entire call stack. And it was worse before we finally overrode the people opposed to macros:
absl::Status status = DoThis();
if (!status.ok()) {
return status;
}
absl::StatusOr<Val> val = DoThat();
if (!val.ok()) {
return val.status();
}
...
What's the value in putting this macro on every single line, to remind me that everything can fail? I know it can fail. This is like Bart Simpson with the chalkboard. Functional programming languages could have exceptions too if they wanted. It's just syntactical sugar around errors.
I mean if your opinion is "somewhere up the call stack, you're handling that exception" and "if you don't, the framework will 500" means you're not pretending nothing can go wrong, then we will never agree, because I absolutely would characterize both of those as pretending nothing can go wrong, at the very least in context at the function call site.
As for your code example, again, that is not analogous to the functional style. The functional style is you have an Either object no different to any other object. You do not if on the object, you do not early return the object in the sad case etc etc. Just throw all notion of all that out the window. You just pass it around without doing anything, or, if you only wanna do something in the happy case, you send in what to do using map, and then keep passing it around. There is zero requirement to handle the error like you seem to imply. Only at the outermost layer do you handle the error, and only if you wish to do so. (if you don't, just fold it into a 400 or 500 or whatever)
> I absolutely would characterize both of those as pretending nothing can go wrong, at the very least in context at the function call site
Agreed that within the function call site I'm pretending nothing can go wrong, but not within the entire program. I don't see what's unsafe about that. We're just now implying that anything can return an error instead of restating that fact every time, and instead you have to be explicit about catching it.
About the functional programming... you're saying there's no if-else or early return in functional programming, but Erlang, a functional programming language, has if-else and exceptions ("throwables") that look like Java: https://learnyousomeerlang.com/errors-and-exceptions. Under the hood it's one logical chain like you said, but we're using a lot of syntactic sugar on top.
> if you only wanna do something in the happy case, you send in what to do using map, and then keep passing it around
That's the kind of if-else early-return I mean, and it's probably what you want to do if you aren't using exceptions. My C++ code also (explicitly) passes the error around, albeit not in the functional way. In like 99% of the cases you are just passing the error up.
I'd add that the functional types are easier because I don't know of any compiler that makes you handle every type of exception, but I know a grip of compilers (to be fair, compilers with the right options and language extensions set) where you cannot *not* handle the "that didn't work, what now?" cases: the Left of Either, the Nothing of Maybe, the Error of Result, the None of Option, etc. etc. etc.
Languages that force you to handle everything that can happen when you do a thing are an accessibility tool for me and my fellow smooth brains. If I can't see it in my buffer I'm not going to remember to check it.
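A Rust sketch of that forcing function: deleting either arm of the match below is a compile error, not a runtime surprise, so the "that didn't work, what now?" case can't silently go unhandled.

```rust
// Hypothetical lookup that may find nothing.
fn lookup(key: &str) -> Option<i32> {
    if key == "answer" { Some(42) } else { None }
}

fn main() {
    // The compiler requires both Some and None to be covered here;
    // removing the None arm fails with a non-exhaustive-patterns error.
    let described = match lookup("answer") {
        Some(v) => format!("found {v}"),
        None => "missing".to_string(),
    };
    assert_eq!(described, "found 42");
}
```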
Exceptions in Elixir are reserved for situations that are truly exceptional.
If IO might fail as a business concern, then we use File.read and pattern match on the return type where we can explicitly handle the error case.
Otherwise, if we know a file will always be there and IO is failing for reasons out of our control, that is truly exceptional so we can use File.read! which will throw an exception on failure. In Elixir we generally don't handle this, we just crash and a supervisor brings the process back up.
In the examples, yes, because it's just a simple binary ok-or-error case.
But if you return a list, you can pattern match on an empty list, a single-element list, or a longer list, which you wouldn't do with exceptions in another language. That's the nice part about `with`: you can stop once a pattern stops matching and return whatever you currently have, which in the list example may be a perfectly valid thing to return.
with [value] <- Module.some_list_function(arg),           # can return an empty list too
     [head | tail] = list <- Module.another_func(value),  # can return a single-element list
     longer_list <- Module.takes_multi_element_list(list) do
  longer_list
end
There are loads of other uses for this, or you can just write an additional clause for your function that handles the other case and pipe it down the line.
It's about what makes sense for your domain.
I may not be explaining this well so apologies for any confusion
Railway-oriented programming is an interesting concept and it does have its use cases, but it does need to come with a massive health warning. I've often seen it used in practice to reinvent exception handling badly, and this is something I consider particularly ill advised because exceptions, when understood and used correctly, provide a much cleaner and more effective way of handling error conditions in most cases.
The thing about exceptions is that in most cases, they make the safe option the default. An error condition is an indication that your code can not do what its specification says that it does, and in that case you need to stop what you are doing, because to continue regardless means that your code will be operating under assumptions that are incorrect, potentially corrupting data. Error conditions can happen for a wide variety of reasons, many of which you do not anticipate and can not plan for, and in those cases the only safe option is to clean up if necessary and then propagate the error up to the caller. Exceptions do this automatically for you by default (you need to explicitly override it with a try/catch block) but alternative approaches, such as railway oriented programming, require you to add in a whole lot of extra boilerplate code that is easy to forget and easy to get wrong. If you can't handle the error condition on the way up the call stack, you would then log it at the top level and report a generic error to the user.
Having said that I see two particular use cases for this kind of technique. The first is situations where you need to handle specific, well defined and anticipated errors right at the point at which they occur. Validation is one example that comes to mind; another example is where you are trying to fetch a file or database record that does not exist. The second is situations where exception handling is not available for whatever reason. Asynchronous code using promises (for example with jQuery) are pretty much an exact implementation of railway oriented programming, but since modern JavaScript now has async/await, we can now use exception handling in these scenarios.
> Exceptions do this automatically for you by default (you need to explicitly override it with a try/catch block) but alternative approaches, such as railway oriented programming, require you to add in a whole lot of extra boilerplate code that is easy to forget and easy to get wrong.
The unfortunately missing part of exceptions (in mainstream languages) is that they handle this invisibly. Figuring out, at compile time, what sort of exceptions can appear inside a given function is not obvious.
That's the big payoff of ROP: you can look at any function signature and immediately know what sort of errors can come out of it.
Mitigating the downside of ROP (boilerplate) can be done to various extents, depending on the language. Haskell has do-notation. In F#, using the result computation expression [0] can make your code extremely clean:
type LoginError = InvalidUser | InvalidPwd | Unauthorized of AuthError

let login (username : string) (password : string) : Result<AuthToken, LoginError> =
    result {
        // requireSome unwraps a Some value or gives the specified error if None
        let! user = username |> tryGetUser |> Result.requireSome InvalidUser
        // requireTrue gives the specified error if false
        do! user |> isPwdValid password |> Result.requireTrue InvalidPwd
        // Error value is wrapped/transformed (Unauthorized has signature AuthError -> LoginError)
        do! user |> authorize |> Result.mapError Unauthorized
        return user |> createAuthToken
    }
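For comparison, a pared-down Rust sketch of the same login flow (all names are hypothetical stand-ins, and the `Unauthorized of AuthError` case is omitted for brevity), with `?` playing the role of `let!`:

```rust
#[derive(Debug, PartialEq)]
enum LoginError { InvalidUser, InvalidPwd }

struct User { pwd: &'static str }

// Hypothetical stand-in for tryGetUser.
fn try_get_user(username: &str) -> Option<User> {
    if username == "alice" { Some(User { pwd: "s3cret" }) } else { None }
}

fn login(username: &str, password: &str) -> Result<String, LoginError> {
    // ok_or turns None into the specified error, like Result.requireSome
    let user = try_get_user(username).ok_or(LoginError::InvalidUser)?;
    if user.pwd != password {
        return Err(LoginError::InvalidPwd); // like Result.requireTrue
    }
    Ok(format!("token-for-{username}")) // stand-in for createAuthToken
}

fn main() {
    assert_eq!(login("alice", "s3cret"), Ok("token-for-alice".to_string()));
    assert_eq!(login("bob", "s3cret"), Err(LoginError::InvalidUser));
    assert_eq!(login("alice", "wrong"), Err(LoginError::InvalidPwd));
}
```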
Could we do the reverse, i.e. mitigate the downside of exceptions? Is there a linter, code analyzer, or some other compile-time tool that can integrate with a Java IDE and automatically display the uncaught exceptions that might be thrown by a given line of code?
Java has/had a compiler check that forced you to write catch blocks or `throws` annotations in/on functions that call other functions which might throw.
The feature is called "checked exceptions" and I believe it has been discarded for its inconvenience by now.
Sometimes it feels like developers are going in circles while trying to find the most optimal way to handle errors.
Checked exceptions are one of the main reasons I’m sticking with Java, even though Java lacks the ability to abstract over sets of checked exceptions, which does cause some inconvenience. It’s unfortunate that no other mainstream languages have been taking that approach.
With result types, you typically don’t get automatic exception propagation. I agree that overall it’s a spectrum of syntactic convenience, checked exceptions effectively form a sum type together with the regular return type.
That sounds awesome! Do you have that flag set on a big codebase? Was it a big hassle to turn it on (like you had to remediate a bunch of code that didn't handle exceptions before you could check it in). Have you seen any big changes since enabling it?
People are now realizing that having the errors a function can cause right in the type system may actually have been a good idea, but when you point out that Result is not the only way and that Java checked Exceptions do the exact same thing (and so does the Zig error handling mechanism which is a third variant of the idea), they come up with all sorts of easily dismissable nonsense to explain why the two are very different.
Well, that's kind of true! The fact that checked Exceptions are inconvenient doesn't change the fact that they are equivalent to returning a Result type (the implementation is obviously different, but I think we don't need to mention that).
A future version of Java could totally make it more convenient, and perhaps even make the implementation cheaper, such that it would be not just nearly the same, but literally the same as in Rust or other similar languages.
Does it even need a new language version? If the compiler already spits out the error "hey, your Fart() function should be annotated with 'throws ButtsException'", couldn't an IDE relatively easily be configured to automatically add the " throws " annotations?
I’m not the parent, but exception declarations are IMO necessary for a stable API contract. It’s exactly the same reason why return types are explicit. The actual issue in Java is that you can’t abstract over an arbitrary-length list (sum) of checked-exception types (variadic type parameters) (with the exception of rethrowing from multi-catch clauses).
> The unfortunately missing part of exceptions (in mainstream languages) is that they handle this invisibly. Figuring out, at compile time, what sort of exceptions can appear inside a given function is not obvious.
Figuring out, at compile time, what sort of exceptions appear inside a given function is a futile exercise in many contexts, and railway oriented programming does not fix it. Java tried this with checked exceptions and it fell out of favour because it became too unwieldy to manage properly.
In any significantly complex codebase, the number of possible failure modes can be significant, many of them are ones that you do not anticipate, and of those that you can anticipate, many of them are ones that you cannot meaningfully handle there and then on the spot. In these cases, the only thing that you can reasonably do is propagate the error condition up the call stack, performing any cleanup necessary on the way out.
"Handling this invisibly" is also known as "convention over configuration." In languages that use exceptions, everyone understands that this is what is going on and adjusts their assumptions accordingly.
> Java tried this with checked exceptions and it fell out of favour because it became too unwieldy to manage properly.
Because they did a half-assed job of it, and required the user to explicitly propagate error signatures. Inference and exception polymorphism are essential.
Checked exceptions always seemed to me to be an exercise of self-flagellation and enumerating badness; when most of the time there are a handful of specific errors that require special handling, with everything else logged/return error/possibly crash.
The problem is that the callee can’t decide for the caller which exceptions will require special handling. And for the caller to be able to make an informed decision about that, the possible exceptions need to be documented. Since this includes exceptions thrown from further down the call stack, checked exceptions are about the only practical way to ensure that all possible failure modes get documented, so that callers are able to properly take them into account in their program logic.
If you want to (and are able to) document all possible failure modes, then checked exceptions will give you that. As far as I can tell, railway oriented approaches can't.
Unfortunately, you can only do that when the number of possible failure modes is fairly limited. In a complex codebase with lots of different layers, lots of different third party components, and lots of different abstractions and adapters, it can quickly become pretty unwieldy. And then you end up with someone or other deciding to take the easy way out and declaring their method as "throws Exception" which kind of defeats the purpose.
> No; you simply abstract the underlying subsystem’s exceptions in your own types, the same way you do with any other type.
That's all very well as long as people actually do that. It doesn't always happen in practice. And even when they do, the abstractions are likely to be leaky ones.
> And yes, “railway oriented approaches” can absolutely do this.
How? Please provide a code sample to demonstrate how you would do so.
> That's all very well as long as people actually do that. It doesn't always happen in practice. And even when they do, the abstractions are likely to be leaky ones.
They don’t have a choice under “railway oriented” API in a typesafe language — they must translate the subsystem’s error types to their own error type.
If the abstraction is leaky, at least it’s well-specified.
How is that worse than having no abstraction at all, and leaving callers with no idea what error cases an API might raise?
> How? Please provide a code sample to demonstrate how you would do so.
In what language? What data structure?
If we assume Haskell and Either, then it can be as trivial as mapping the error side (the Left) into your own type.
You adjust the reported failure modes to the abstraction level of the respective function, wrapping underlying exceptions if necessary. You don’t leak implementation details via the exception types. Callers can still unwrap and inspect the underlying original exceptions if they want, but their types won’t typically be part of the function’s interface contract, similar to how specific subtypes of the declared exception types are usually not part of the contract.
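A Rust sketch of that wrapping (the `ConfigError` type and function are hypothetical): the abstraction-level error nests the underlying one, so callers see your contract but can still unwrap and inspect the original cause.

```rust
use std::fmt;
use std::num::ParseIntError;

// Our abstraction-level error wraps the underlying one without leaking
// it into the interface contract.
#[derive(Debug)]
enum ConfigError {
    BadPort(ParseIntError), // original cause preserved for inspection
}

impl fmt::Display for ConfigError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ConfigError::BadPort(e) => write!(f, "bad port: {e}"),
        }
    }
}

// From lets `?` translate the subsystem's error into ours automatically.
impl From<ParseIntError> for ConfigError {
    fn from(e: ParseIntError) -> Self {
        ConfigError::BadPort(e)
    }
}

fn parse_port(s: &str) -> Result<u16, ConfigError> {
    Ok(s.parse::<u16>()?) // ParseIntError is wrapped here via From
}

fn main() {
    assert_eq!(parse_port("8080").unwrap(), 8080);
    assert!(matches!(parse_port("not-a-port"), Err(ConfigError::BadPort(_))));
}
```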
I think the conventional way exceptions are implemented is pretty bad.
First, a lot of languages make you use an awkward, unnecessary scope to catch an exception. E.g., you want to declare and initialize a variable to the value of a function that can throw (and assign some other value if it does). Well, you've got to split the declaration and initialization, putting the declaration outside the scopes that try and catch create. That one's an unforced error: languages don't have to do that to use exceptions, but for some reason many do. It's pretty weird to have to add homespun utilities for fundamental control flow scenarios.
But the bigger issue is that you really want to handle the error conditions at the lowest level where you have enough context to do so correctly. That's usually pretty low, but exceptions default to "send it all the way to the top". The default is either invisible or invisible in practice, depending on the language, and wrong, so programs end up riddled with these issues. You tend to end up with these higher-level functions that can throw all kinds of exceptions, many of which are meaningless to the caller. E.g. someone adds a file cache one day, and all of a sudden some higher-level HandleRequest function can throw an IO exception... because the cache code didn't handle it... because they never even realized it was a possibility. You couldn't design a better mechanism for creating leaky abstractions.
I think anything a function might return needs to be an explicit part of its signature, and a caller needs to handle it explicitly, even if just to pass it up the line. The language doesn't need to require a lot of boilerplate to do this.
That's just my experience from having lived through it.
I think Rust has shown very well how ROP with first-class syntax support pretty much eliminates all boilerplate code. IMHO Rust nailed error handling with the `Result` type/trait.
It came to my mind too, but then I got confused: what if the type of the Result changes along the function call chain and you want to propagate errors with minimal effort?
Then I saw this stackoverflow question, and it seems that the ? operator does quite a smart thing and is as easy to use as possible.
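For anyone curious, the smart thing is that `?` runs the error through `From::from` before returning it, so the error type can change along the call chain with no ceremony. A minimal sketch (the `AppError` type here is made up for illustration):

```rust
use std::num::ParseIntError;

// Hypothetical application-level error type, for illustration only.
#[derive(Debug)]
enum AppError {
    Parse(ParseIntError),
}

// This impl is what lets `?` convert a ParseIntError automatically.
impl From<ParseIntError> for AppError {
    fn from(e: ParseIntError) -> Self {
        AppError::Parse(e)
    }
}

fn double(input: &str) -> Result<i64, AppError> {
    // `str::parse` yields Result<i64, ParseIntError>; `?` either unwraps
    // the Ok value or early-returns Err(AppError::from(parse_error)).
    let n: i64 = input.parse()?;
    Ok(n * 2)
}

fn main() {
    assert_eq!(double("21").unwrap(), 42);
    assert!(matches!(double("oops"), Err(AppError::Parse(_))));
    println!("ok");
}
```

The callee returns `ParseIntError`, the caller returns `AppError`, and the only visible cost is the one-character `?`.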
anyhow is the most commonly used crate for type-erased errors(1), nothing more than that but also nothing less
this means that a Result<_, anyhow::Error> returned from a library _is often an anti-pattern_ (often, not always!)
but if you write an application, it's pretty common to have many, many places in the code where you can be sure that no upstream code needs more fine-grained error handling (because your workspace is the most upstream code), so using anyhow is a pretty common and convenient choice
Though it's not unlikely for anyhow to fade into being mostly unused in the future once some currently missing rustc/std features land -- but not anytime soon.
But luckily this doesn't matter: due to how `?` works you can trivially convert errors on the fly no matter what (well, kinda -- there is an unlucky overlap between the orphan rules and the From wildcard implementations in the anyhow crate, but we can ignore that for this discussion).
(1): It's basically a form of Box<dyn Error + Send + Sync + 'static> which also has thin-pointer optimizations and can by default include a backtrace, plus some convenience methods.
Sure, but the question was specifically looking for the "minimum effort" solution. I almost brought up thiserror but that just makes things more complicated. If you're writing a Rust application and just want to propagate errors, anyhow is currently the most popular way to do that.
anyhow is for type-erased errors, which are mainly the kind of errors you propagate upward without handling them in any fine-grained way. It's mostly used in applications (instead of libraries). For example, in a web server anyhow errors will likely yield Internal Server Errors.
thiserror provides a derive (codegen) to easily create your own error type. It's much more often used by libraries, but if an application doesn't want to handle these errors they will likely be converted into anyhow errors. A very common use case is to apply it to an enum which represents "one of many errors", e.g. as a dumb example `enum Error { BadArgument(...), ConstraintViolation(...), ... }`. It's not absurd in some cases to have a mixture, e.g. an enum variant `Unexpected(anyhow::Error)` which represents various very unexpected errors that are likely bugs, where you might have considered panicking but decided to propagate instead to avoid panic-related problems
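For a feel of what that derive saves you, here is a hand-written approximation of roughly what thiserror generates for such an enum (the variants and messages are invented for illustration; with real thiserror you'd just write `#[derive(Error)]` plus `#[error("...")]` attributes):

```rust
use std::fmt;

// Hypothetical "one of many errors" enum, written out by hand.
// thiserror's derive would generate roughly these Display/Error impls.
#[derive(Debug)]
enum Error {
    BadArgument(String),
    ConstraintViolation(String),
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Error::BadArgument(s) => write!(f, "bad argument: {s}"),
            Error::ConstraintViolation(s) => write!(f, "constraint violation: {s}"),
        }
    }
}

impl std::error::Error for Error {}

fn main() {
    let e = Error::BadArgument("id must be positive".into());
    assert_eq!(e.to_string(), "bad argument: id must be positive");
    println!("ok");
}
```

Because the type implements std::error::Error, an application that doesn't care can still shove it into a type-erased error at any point.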
I don't understand why this answer is buried deep in a thread & isn't included in the Rust Book, even though it's been conventional wisdom among experienced Rustaceans for a few years now.
Download counts don't mean very much here as I'm fairly sure both crates are common transitive dependencies. Or in other words, millions of programmers aren't individually choosing Anyhow or Thiserror on a monthly basis -- they're just dependencies of other rust crates or apps.
And agreeing with the other reply, nobody jumps up and down with joy when choosing an error handling crate. You pick the right poison for the job and try not to shed a tear for code beauty as you add in error handling.
In my mind, the difference between errors-as-values and exceptions is most useful when describing domain-specific errors and other issues that you have to handle in support of the domain/problem space. To me, domain errors make sense as errors-as-values, but your database being unreachable is unrelated to the domain and makes sense as an exception.
> another example is where you are trying to fetch a file or database record that does not exist
I think this depends on whether or not you expect the file/record to exist. Handling a request from a user where the user provided the id used for lookup? The lookup itself is validation of the user input. But if you retrieved a DB record that has a blob name associated with it and your blob storage says that a blob doesn't exist by that name? I find that to be a great situation for an exception.
The errors-or-exception line is fuzzy and going to be dependent on your team and the problems you're solving, but I've found that it's a decent rule of thumb.
"The first is situations where you need to handle specific, well defined and anticipated errors right at the point at which they occur"
Barring system-level errors, can you give an example of an error state that's not like that, that would then rather merit an exception? I would like to understand your point of view: is it the nature of the problem, or the constraints of the runtime, that makes exceptions preferable?
In the C++ code I need to write, we can 1. check data for error conditions at the beginning, 2. if we fail the error check, let the application crash, 3. use the found error state to debug and fix the error in the initial checking code.
The data my code needs to process is fairly straightforward - data abiding by some known CAD data format or given geometric topology, so the error conditions are "quite easy" to tackle in the sense that there is an understanding what correct data looks like in the first place.
Missing dependencies. External services having gone offline. Timeouts. Foreign key violations. Data corruption. Invalid user input. Incorrect assumptions about how a third party library works. Incorrectly configured firewalls. Bugs in your code. Subtle incompatibilities between libraries, frameworks or protocols. Botched deployments. Hacking attacks. The list is endless.
Probably not so much of an issue if you're dealing with well validated CAD data and most of your processing is in-memory using your own code. But if you're working with enterprise applications talking to each other via microservices written by different teams with different levels of competence, legacy code (sometimes spanning back decades), complex and poorly documented third party libraries and frameworks, design decisions that are more political than technical, and so on and so forth, it can quickly mount up.
> External services having gone offline, timeouts, and invalid user input are expected conditions you should handle locally.
Not necessarily. You should only handle expected conditions locally if there is a specific action that you need to take in response to them -- for example, correcting the condition that caused the error, retrying, falling back to an alternative, or cleaning up before reporting failure. Even if you do know what all the different failure modes are, you will only need to do this in a minority of cases, and those will be determined by your user stories, your acceptance criteria, your business priorities and your budgetary constraints. That is what I mean by "expected conditions." Ones that are (or that in theory could be) called out on your Jira tickets or your specification documents.
For anything else, the correct course of action is to assume that your own method is not able to fulfil its contract and to report that particular fact to its caller. Which is what "yeeting exceptions up the call stack" actually does.
> Almost everything else you listed represents a bug in your software that should terminate execution.
Well of course it represents a bug in your software, but you most certainly do not terminate execution altogether. You perform any cleanup that may be necessary, you record an event in your error log, and you show a generic error message to whoever needs to know about it, whether that be the end user or your support team.
Again, what action you need to do in these cases will depend on your user stories, your acceptance criteria, your business priorities and your budgetary constraints. But it is usually done right at the top level of your code in a single location. That is why "yeeting exceptions up the call stack" is appropriate for these cases.
You only terminate execution altogether if your process is so deeply diseased that for it to continue would cause even more damage. For example, memory corruption or failures of safety-critical systems.
> I’m more than a little shocked that you think yeeting exceptions up the call stack is appropriate for these cases.
I hope I've clarified what "yeeting exceptions up the call stack" actually does.
The alternative to "yeeting exceptions up the call stack" when you don't have any specific cleanup or corrective action that you can do is to continue execution regardless. This is almost never the correct thing to do as it means your code is running under assumptions that are incorrect. And that is a recipe for data corruption and all sorts of other nasties.
How do you know what to cleanup when you have no idea which APIs might throw, what stack frames might have been skipped when they do throw, and what state was left broken by yeeting a stack-unwinding exception up your call stack?
You clean up processing that your own method is responsible for. For example, rolling back transactions that it has started, deleting temporary files that it has created, closing handles that it has opened, and so on and so forth. You rarely if ever need to know what kind of exception was thrown or why in order to do that.
You can only assume that the methods you have called have left their own work in a consistent state despite having thrown an exception. If they haven't, then they themselves have bugs and the appropriate cleanup code needs to be added there. Or, if it's a third party library, you should file a bug report or pull request with their maintainers.
You don't try to clean up other people's work for them. That would just cause confusion and result in messy, tightly coupled code that is hard to understand and reason about.
Usually, no you don't. You only write a try ... catch or try ... finally block round the entire method body, from the point where you create the resources you may need to clean up to the point where you no longer need them. For example:
var myFile = File.OpenText(filename);
try {
    string s;
    while ((s = myFile.ReadLine()) != null) {
        var entity = ProcessLine(s);
        // do whatever you need to do to entity
    }
}
finally {
    myFile.Dispose();
}
C# gives you the using keyword as syntactic sugar for this:
using (var myFile = File.OpenText(filename)) {
    string s;
    while ((s = myFile.ReadLine()) != null) {
        var entity = ProcessLine(s);
        // do whatever you need to do to entity
    }
}
It isn't in practice. Only a minority of methods actually need it.
It's certainly far, far better than having to add exactly the same check after every method call. Which is only what you need to do if you're working in a situation where exceptions are not an option.
I'll add that C# also has using declarations that dispose the object when the current scope exits (including if it exits due to an exception); this significantly cuts down on ugliness.
C++/Rust are different because exceptions in those languages are expensive and culturally contraindicated.
For the runtime-hosted languages the author is talking about (JVM, CLR, Python etc.), optionally throwing an exception is much cheaper than constantly creating and unwrapping Result objects. Your example is a perfect case where one would prefer to throw: say you have a parser that parses your file and the parser is expensive because the files are large. You are better off throwing out of your parsing iteration than doing a Result.map in your hot loop. (However, you might want to wrap the top level of the parser in a Result and return that.)
I disagree that exceptions are better in most cases. Exceptions aren't captured effectively in most type systems so it's hard to ensure you've covered all your bases. When used effectively, discriminated unions for return types force you to handle all the cases and the result is much more robust in my experience.
Honestly, this whole site is a gold mine. Even if you are not interested in using a functional language, it gives you a different perspective, which was very helpful for me. I would recommend the other posts as well.
The series on building a parser combinator from scratch has been one of the most valuable things I've read and worked through. A lot of concepts and mechanisms from that have been just incredibly useful working with typed languages in a functional style. I still will never know what a monad is tho.
Does this argument hold? This pushes the error handling away from the call site, and that is a bad thing: the caller knows best how to handle the errors, so it should, instead of passing them down.
(Full disclosure, I have seen this talk and read Wlaschins book, but not in a long time, so maybe he did cover this and I forgot.)
E.g. given:
validate
and-then update-db
and-then send-email
What about when validate fails? The error is returned to the caller, which is maybe suitable. What about when update-db fails? Should there be a retry? Try another service? Requeue? Tell the user? What about if the send-email fails? Don't you end up with the same thing (except with more unwrapping and sub-branches)?
Monadic error-handling is a win-win way of doing things.
People who dislike Go complain about being forced to check for error conditions too often (likewise with Java's checked exceptions).
Using 'Either', you're not forced into checking; you can check, or you can let the caller handle it.
People who dislike Java's unchecked Exceptions complain that there's no way of knowing what will be thrown, or when. Using 'Either', this is made explicit.
> What about when validate fails? The error is returned to the caller, which is maybe suitable. What about when update-db fails? Should there be a retry? Try another service? Requeue? Tell the user? What about if the send-email fails?
The point is - you don't know! So your approach needs to be well-suited to not knowing, which Either is.
In my current Java codebase at work, there are different ways of 'handling' errors which have built up over the years. A call to a missing 'Limit getUserLimit(User)' might:
* throw a NotFound exception
* return null
* return a default
And you can't tell without diving in and reading the code. If it had been written with Either instead:
* the caller could trust it instead, rather than reading all the code that it calls
* easily convert it to another failure condition - the Either implementation in your language will have built-ins for 'orElse(null)', or 'orElseThrow(...)'
> Should there be a retry?
Either is an expression, which lends itself well to abstraction. This means you can likely write code to retry Eithers in general, as opposed to writing retry code for specifically inside your DbUpdater.
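For example, a generic retry helper is a few lines and works for any fallible operation (a sketch, not a library API):

```rust
// A sketch of a generic retry combinator: it works for any closure
// returning Result, so the retry policy lives outside the DB code.
fn retry<T, E>(mut attempts: u32, mut op: impl FnMut() -> Result<T, E>) -> Result<T, E> {
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) if attempts <= 1 => return Err(e),
            Err(_) => attempts -= 1, // could also sleep/backoff here
        }
    }
}

fn main() {
    // Simulated flaky operation: fails twice, then succeeds.
    let mut calls = 0;
    let result = retry(3, || {
        calls += 1;
        if calls < 3 { Err("transient") } else { Ok(42) }
    });
    assert_eq!(result, Ok(42));
    assert_eq!(calls, 3);
    println!("ok");
}
```

The same helper wraps a DB update, an HTTP call, or anything else that returns a Result, which is the abstraction win being described.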
Yes, low level and high level error handling can coexist, but this style of programming works much better (in my opinion) as a blackboard exercise or a do-it-once-then-done situation.
Love me some DDD, but you gotta have some kind of reasonable maintenance cycle that a moron programmer like myself can manage. That was the beauty of TDD in mutable coding: it scoped down the cognitive load for maintenance.
I can see railway programming carrying a lot of context through a lot of transforms and hell if I'd want to be at the end of the railway getting a big trainload of business and system context I'm unprepared to handle.
DDD and onion architecture for the win. Best of both worlds, and you end up with a railway anyway; it's just outside the micro-apps, not stuck inside them. In my mind, you want railway issues both explicit and a bit cumbersome to code. Most of the time, if done well most railway coding decisions involve business decisions that you should never be using clever coding constructs to avoid in the first place.
ADD: To clarify, monadic programming is great but there's a temptation to use it to avoid necessary business decisions. Sticking ambiguity into a type system can lead you to some difficult or impossible situations later on. I'm not saying never do it. I'm saying most ways I've seen it done are not so good.
Low-level and high-level error handling can coexist. It's no different than having a specialized try/catch inside a larger try/catch.
You may, for example, choose to handle very fast transient errors inside your `send-email` function. If you manage to connect in under $acceptable-time, return to the happy path*, otherwise return to the caller.
The only hard and fast rule is that any handling requiring human intervention should definitely be wrapped up and passed up.
* but do log a warning, regardless of whether you're just doing classic impure logging, collecting the logs as part of the success value, or using a writer monad.
Visually, the classical try-catch makes error handling right there more verbose and less readable. I far prefer Rust-like result passing, maybe adding context to the error along the way, over a try-catch block at every step of a long process whose errors should be handled in place. Passing the error up is still easy, but adding any context is much less verbose.
> The only hard and fast rule is that any handling requiring human intervention should definitely be wrapped up and passed up.
The "wrapped up" being the important and mostly ignored part. From my experience try-catch error handling usually generates incomprehensible errors where you can only guesstimate what actually happened from stack trace's function names.
To get good error message every layer should have try-catch block that adds context to the error (what operation did it, or what logged user did it etc.), but that's almost never done properly and we get stack trace vomits instead.
If you want it to be fixed by actual humans you need to
It's possible that you could end up with your example as given, but I think you could organize it differently so that it has your handling while also maintaining the same basic top-level pipeline.
Given what you've specified, I might organize it like:
You can wrap the error types into a discriminated union and then check for retry-able errors, retrying if it's one of those and otherwise propagating the error. Or if you don't need to do special handling for any errors, you can just propagate the whole error and let the caller handle it.
ROP definitely isn't useful in all cases, but it has served me quite well in many, by simplifying error handling when I don't care at the callee what error occurred.
I think it should be pretty obvious that, for instance, retry logic should be near the place in the code where the first attempt is made. The best case for gathering all errors in a "railway" is when they can be handled uniformly, e.g., if we want to not retry things but instead report a descriptive error message to the user. Note also that most error-handling logic that should be near the calling place might itself fail in more global ways that are then best handled using the railway/exception pattern.
This site has been the best programming education site I've encountered in terms of real, pragmatic concepts taught. My favorite is the concept of making invalid states unrepresentable[1] which I try to apply now regardless of language, though some make it easier than others.
What works even better is using dataflow instead of call/return, as the problem largely goes away by itself.
With call/return, you have to return something, so if you have nothing to return because of an error, you have to return that, or both the error and the normal return value (Go). This pollutes the happy path. With this polymorphic container through which you thread the remainder of the processing, you make the problem a little nicer, but it is still there.
With dataflow, you simply don't send anything to the next filter stage, so your happy path is completely unaffected. You then send the error to some kind of standard error output, which can often be centralised for your application.
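In iterator terms the idea can be sketched like this (a loose analogy, not the actual Wunderlist code): a stage that fails simply emits nothing downstream, and errors are routed to one side channel:

```rust
// A loose sketch of the dataflow idea using an iterator pipeline: a
// stage that fails emits nothing to the next stage, and errors go to a
// single side channel instead of polluting the happy path.
fn process(inputs: &[&str]) -> (Vec<i64>, Vec<String>) {
    let mut errors = Vec::new(); // centralised "error output"
    let doubled = inputs
        .iter()
        .filter_map(|s| match s.parse::<i64>() {
            Ok(n) => Some(n), // happy path: the value flows on
            Err(e) => {
                errors.push(format!("{s}: {e}")); // error path: value just stops here
                None
            }
        })
        .map(|n| n * 2) // this stage never sees the failed items
        .collect();
    (doubled, errors)
}

fn main() {
    let (doubled, errors) = process(&["1", "2", "oops", "4"]);
    assert_eq!(doubled, vec![2, 4, 8]);
    assert_eq!(errors.len(), 1);
    println!("ok");
}
```

The `map` stage is pure happy path; it needs no knowledge of errors at all, which is the point being made.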
Sounds too good to be true, but we used it in Wunderlist, for example, and it worked like a charm.
Oh, and the same technique that works for when you don't have a value (errors) works the same when you don't have a value yet (async).
Define "fundamentally". In the end it all comes down to NAND gates... ;-)
But yes, it is very, very different. With an exception, which does achieve a similar effect of stopping further execution right there and then, you need to throw the exception and you need to check it. With dataflow, all you have to do is nothing.
And exceptions are quite a tricky and heavyweight mechanism. What about the intermediate code? Does it need to run despite the exception? Will it just swallow the exception instead of passing it on? You can just pass the final error handler in and set it on the filter and there is no trickiness.
And of course without the exception, there is a lot of code that needs to do error handling despite not really being involved. This becomes really, really noticeable when your async code is handled using callbacks. You just get two sets of callback everywhere. Just having the code be synchronous would be better, but with dataflow the whole issue just evaporated.
In the end it boils down to continuations: your function either takes a single return continuation that takes a result|error variant, or it takes two (or more) continuations: one for the happy path and one for the error path. The former maps well to result objects, the latter maps better to non-local (checked) exceptions, but it's easy to see how to transform one into the other.
Only if you deal exclusively with procedures/functions.
With dataflow, that's not the case. You have two filters, and if the first filter detects an error, it just does nothing (i.e. it does not pass the value to the second filter). Done.
No need for exceptions or continuations, or weird return types.
Yes, anything might return an error. Do we need to really write `or error` or whatever everywhere that all we're going to do is pass the error immediately back up the stack?
One way to improve this would be to have some kind of way of declaring the atypical case where you have a function that can never throw an exception... which is a feature of some exception-based programming languages.
> Do we need to really write `or error` or whatever everywhere that all we're going to do is pass the error immediately back up the stack?
Right, exactly. Like in Go:
if err != nil {
    return err
}
You find this littered all over every Go codebase. And somehow people seem to not realize that they’re doing the exact same thing as throwing an exception (unwinding the stack until something handles the error), just manually and painfully.
It can be important and reasonable. You might have things which never give errors. You might have things which sometimes do, and you attempt to use them in places which you don't believe to give errors. Here, the great thing about error functors is that the type system comes out and warns you very, very explicitly. Hell, we add that same boilerplate back in a lot of languages with things like "noexcept" and "throws", with the compiler's type checker doing the exact same thing and forcing error handling in a "noexcept" when it calls something which throws. You get an isomorphic system, it works the exact same way, it's effectively an error functor as far as the type checking is concerned.
That being said, this article sells the whole concept of functors incredibly short. You use them to represent certain concepts, such as in this example where the concept being represented is "value, but not everywhere in the domain" or "value or a different type of value". On top of that, these are applicative functors, meaning that they also represent a way of combining two values in a specific manner, and they are monads, meaning that they represent a kind of composition. That means that these same concepts of railway tracks suddenly allow for a lot larger set of tools: you can represent as a functor the concept of having multiple values, you can consider lists to be applicative functors with the combination of two lists being an operation such as concatenation, zipping of sequences, or even cartesian products. You can even go as far as to consider lists monads, in which case their compositional behaviour could be one of non-determinism: each function takes in a single value and evaluates to multiple values – the composition would be running the function over every value in the list (over all possible states), then concatenating the lists that come out (creating a new list of possible states). Treating errors and non-determinism as special cases of a similar behaviour allows for extending many of the methods we use in error handling to this kind of application as well. Not that these two are the only uses either.
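To make the non-determinism point concrete outside Haskell: monadic bind for lists is essentially flat_map, so a toy sketch looks like this (the step function is invented for illustration):

```rust
// Toy sketch of the "list monad" view of non-determinism: each step
// maps one value to several possible successor values, and flat_map
// (monadic bind for lists) runs the step over every current state and
// concatenates the results.
fn step(x: i32) -> Vec<i32> {
    vec![x * 2, x * 2 + 1] // two possible successors per state
}

fn run(start: i32) -> Vec<i32> {
    vec![start]
        .into_iter()
        .flat_map(step) // one step: [2, 3]
        .flat_map(step) // two steps: [4, 5, 6, 7]
        .collect()
}

fn main() {
    assert_eq!(run(1), vec![4, 5, 6, 7]);
    println!("ok");
}
```

The same bind-then-concatenate shape is what makes error handling (zero-or-one result) and non-determinism (many results) special cases of one mechanism.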
It's a much larger toolbox, though going overboard with it will result in unreadable hellish code.
That's got its downsides. Majorly, I feel like it's restrictive. Exception handling, especially asynchronous exception handling, can really disallow recovery from errors. Sure, when you have a straight data stream and you simply want to stop processing at an error, there's no data for the next stage and it never runs. But that greatly limits recovery from errors. If your reaction to sqrt(-1) is to "crash", ending the business part of the program and hopping straight to printing or logging or whatever for the error, it can be very difficult to, say, enlarge the domain. You might have the need to simply give some use-case specific value for negative numbers, and you can pretty simply do that in code, but it'll result in some coupling you don't want as now you have to test for being outside that domain. For sqrt the domain is simple to define, but for a lot of real-world logic, it isn't nearly that simple. You can, in some languages, trap the error. Dataflow machines often are not designed for that though, especially when dealing with copious asynchronicity.
That's where explicit error types, such as Maybe come in. You get to write your happy path as if errors didn't exist, but function composition uses a different set of operators than you usually would. In cases where you do want to recover from an error, you handle both paths then and there, as part of the happy path, and possibly don't even allow for an error beyond that. Most importantly, to get any value which is not an abstract concept of "maybe value" out, you need to handle the existence of both paths gracefully. That can be very nice, and very useful.
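A small sketch of that shape (Rust's Option standing in for Maybe; the names are made up): compose the happy path with map/and_then, and handle both paths exactly where you choose to:

```rust
// Hypothetical lookup that may find no value.
fn lookup(id: u32) -> Option<&'static str> {
    match id {
        1 => Some("alice"),
        _ => None,
    }
}

fn greet(id: u32) -> String {
    lookup(id)
        .map(|name| format!("hello, {name}"))        // happy path only
        .unwrap_or_else(|| "hello, stranger".into()) // both paths handled here
}

fn main() {
    assert_eq!(greet(1), "hello, alice");
    assert_eq!(greet(99), "hello, stranger");
    println!("ok");
}
```

To get a plain String out, the code is forced to say what happens on the error path; it cannot silently forget it.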
That being said, now that you're using explicit error types, you can escape the "happy path"/"error" dichotomy. No longer is there necessarily just a "no value". There can be a "no value, but". Or a "value, but". You can have several errors stack up in a chain of things which could be done in parallel and then collated as a result. You can even entirely give up on the concept of errors, since that's only a very special case. You can use it to encode non-determinism: each unit takes in a single value and processes it to several possible and different values. Combining them with the monadic bind operator, each unit outputting a list of values has those values concatenated at the end to the list of all outputs, then the next function runs for each of those values the set of outputs of them are joined for the next stage and so on. This can be very, very useful for things like traversing graphs. You can, as Haskell programmers often do, use it to haul around a bit of "state" in a technically pure manner (purity is in certain cases very desirable). Perhaps the most infamous use of applicatives and monads in programming is the Haskell IO Monad, which encodes no real paths at all. IO simply is a virus which latches on to everything you do with it, and getting out of IO requires touching the outside world in an impure manner, which in Haskell can "only" be done while evaluating the expression called "main", meaning that the expression called "main" becomes your only point of contact to the outside world and the only way to "unwrap" IO values. Once again, that is for (obsessive) purity reasons. Alternative applicatives even allow for things as simple as an alternative applicative functor based on zipping instead of nondeterminism.
It's a surprisingly varied technique, going far beyond simple railroads, and offers a neat way to write "only" the happy path while staying pure (allowing simple equational static analysis and unit testing without the need for mocks to re-establish purity). It also offers you many other functorial tools which are all linked by a specific composition behaviour. Further, other applicatives provide extra tools when you just need a functor which represents a specific way to combine two values.
Using alternative methods, you often run out of ways to extend existing code, create something completely unreadable yet somehow isomorphic, or you create something readable and extensible, but due to the lack of purity inherent in some exceptional business logic, it becomes very hard to test without extensive mocking. Toeing the line between purity, readability and extensibility can be done in many ways, but functors and especially monads are among the S-tier when it comes to it. That being said, Haskell can get a bit goofy with it, by no means is purity an absolute value to be always chased. I kind of wish more """mainstream""" languages contained better monadic toolboxes for those times when you see an issue and you know you could solve it better than what you have to otherwise do if you only had the tools for it.
Using this style is infectious like any other monad, so all your business logic will look like this.
Stop reading if you are ok with this.
That said, let's talk about validation errors vs. exceptions.
Exceptions are - as the name implies - unexpected errors that unwind the stack to whoever catches them. Along the way any necessary cleanup is handled automatically.
Exceptions are for cases where the error occurring is outside the scope of the business function.
I/O errors come to mind, OOM etc.
Now if you want to VALIDATE your data, you can do this without resorting to exception handlers or monads: you turn the outcome of your functional pipeline into data.
Have validation errors? Collect them in a set. Look at them after the pipeline completes.
Chances are you don't want to fail on the first one.
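A minimal sketch of that idea (the record and rules are invented for illustration): every rule runs, every failure is collected, and you look at the whole set afterwards:

```rust
// Sketch: validation outcomes as plain data. Every rule runs; all
// failures are collected rather than aborting at the first one.
struct SignUp {
    name: String,
    age: i32,
}

fn validate(s: &SignUp) -> Vec<String> {
    let mut errors = Vec::new();
    if s.name.trim().is_empty() {
        errors.push("name must not be empty".to_string());
    }
    if s.age < 0 {
        errors.push("age must be non-negative".to_string());
    }
    errors // an empty vec means the data is valid
}

fn main() {
    let bad = SignUp { name: "".into(), age: -1 };
    assert_eq!(validate(&bad).len(), 2); // both problems reported at once
    let good = SignUp { name: "ada".into(), age: 36 };
    assert!(validate(&good).is_empty());
    println!("ok");
}
```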
Need to do side-effects in a pipeline? Don't. Instead describe the effect with data, run them later.
Need to back out early out of the pipeline? Split the pipeline at that location, handle the result, stuff it into another pipeline.
Or use (shudder) a chain of interceptors that can decide if the pipeline should be continued along its happy path. If your language supports it, use pattern matching to make that decision. Note that the decision making to skip parts of the pipeline or the rest of it, is done outside your business function (which improves the chances of them being reusable because they will only be concerned with a pure data transformation and don't need to know about machinations like Maybe/Either or some such to signal things to the caller).
WOW. Great to simply see this being brought up and discussed at all. I spent a year reading through this site and working through examples. One of best F# resources.
Just the 'concepts' discussed on this site has helped me with 'functional' thinking in all languages.
If this is getting attention now (given this is an old site), does this mean F# is gaining traction?
And on the subject: do programmers in other languages, like Rust, use "railway"-style error handling?
I have a strong background in C# for desktop applications (with a database server backend). Over the years my coding style in C# has become more and more functional (such as using a lot of Linq, immutable classes, return types that potentially include detailed error info, etc.). What holds me back from F# is that I typically spend most of my time tailoring the desktop front end. Here F# does not seem to bring any benefit. As far as I know, all available production-ready desktop frontends are not native F#, but plain old object-oriented code, mostly in C#, bound to F# by just an intermediate layer of glue code. As soon as I need even remotely sophisticated user interface customisation, I am back in object-oriented land and am best off doing it directly in C# again.
So in my particular case, F# needs a native UI framework before it could gain traction for me.
"do programmers in other languages use 'railway' style error handling."
I use that in C++. Basically I define a templated return type that can be parametrized with the return type T and the specific enumeration defining the result condition. Although you can make do with std::pair&lt;MyValue, std::string&gt; in a pinch, where you return any error message in the string and exit early if the string is non-empty (when all functions return types like that you can bubble the error message up to the top level).
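The pair-with-error-string pattern described above could be sketched like this (in TypeScript rather than C++, to match the other examples in this thread; the functions are hypothetical): an empty error string means success, and each caller checks and exits early.

```typescript
// Hypothetical sketch of the pair-style return described above:
// [value, error] where a non-empty error string means failure.
type Pair<T> = [T, string];

function parsePositive(s: string): Pair<number> {
  const n = Number(s);
  if (Number.isNaN(n)) return [0, `not a number: ${s}`];
  if (n <= 0) return [0, `not positive: ${s}`];
  return [n, ""]; // empty string signals success
}

function halve(n: number): Pair<number> {
  return [n / 2, ""];
}

// Bubble the first non-empty error up to the caller, exiting early.
function run(s: string): Pair<number> {
  const [n, e1] = parsePositive(s);
  if (e1 !== "") return [0, e1];
  return halve(n);
}
```

The early-exit checks are the manual equivalent of what the railway/Result `bind` does for you automatically.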
I really like things like this. There is a Python talk called "so you want to be a python expert" and I think I watched it every 6 months for the first few years of my journey, and each time I realised I got more and more of it. I'm sure if I were to watch it again there is probably still something now that I could get from it.
Really helpful posts. However, I wanted to save them as PDFs but your website's layout is broken: when you select print, it only wants to show what's on the screen, not everything on the page.
Thanks, I haven't considered printing as a usecase indeed, I could make that work at some point with some css tweaks I reckon. But until then, the raw html can be found here: https://github.com/chtenb/chtenb.github.io/tree/master/docs/...
They can probably be converted to pdf just fine, or you could just use the html
My comment was mostly made in jest. That being said, it’s not about being born with that knowledge, but about the reality that knowledge of what an oven and flour are is reasonably assumed knowledge for anyone who has decided to read instructions for cooking something.
...that said, functional programming and the university level words associated with it intimidate me and quickly go over my head. Mainly because I haven't got a practical application for it, and they're more abstract concepts. A railway switch? Sure, I can understand that. A monad? What? Why are you making up words?
(tongue in cheek, I've had a stint of Scala so I can sort of apply some of these things in practice. I just don't have the vocabulary, educational background, or practical applications)
This is probably the biggest problem with FP, and I love FP, and use fp-ts, which is mired in category theory. It took me a while to understand concepts that are really quite simple but explained very academically.
So basically, monadic error handling; that is, using `Optional<T>` and `Error<T>` and just `map`ping everything (which is a no-op on the nil and error variants)
Which is how things actually work in non-FP languages too. In those you’re _always_ within Exception<T> and MaybeNull<T> monads. It’s just implicit and we tend to just cross our fingers and hope it’s all well.
I don't use Haskell because it has a representation for values which might exist (all languages have that), I use it because it has a representation for values which do exist.
When he overlaid the gray boxes over the tracks I thought “that’s a VI!”.
I’d like to point that LabVIEW is not doing any of the fancy type stuff explained here (Either, Maybe, etc). It’s just that VIs (or functions in common parlance) can have multiple inputs and outputs, and LabVIEW’s graphical approach lends itself well to this use case. But LabVIEW’s type system is fairly primitive.
I think a lot of people think this is just using &lt;Option&gt;s.
But really in F#, you can build your own 'monads', called Computation Expressions, and for error handling you can pass along different information about the errors.
It doesn't have to be "Only" an <Option>, it can be <MySpecialOption>.
I implemented a data processing API for customers to convert and validate their unstructured tabular data in a defined way. For each column a user could specify/override one of 5 functions.
cast: (str|null) -> T
compute: T -> T
validate: (T) -> Message[] // doesn't modify the value
serialize: (T) -> str // primarily used so dates can be formatted
Each function was guarded by a try/catch block. This allowed users to write simple functions and not have an edge case blow up all of processing.
It ended up working pretty well. I don't think it was quite railway oriented programming, more a carefully thought out framework.
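A hedged sketch of how such a per-column pipeline might look (the signatures follow the comment above; the column spec and guard details are my own assumptions): each user-supplied function is guarded so a throw becomes a message rather than blowing up the whole run.

```typescript
// Hypothetical sketch of the per-column pipeline described above.
type Message = string;

interface ColumnSpec<T> {
  cast: (s: string | null) => T;
  compute: (v: T) => T;
  validate: (v: T) => Message[]; // doesn't modify the value
  serialize: (v: T) => string;   // e.g. so dates can be formatted
}

// Guarded: a throw in any user function becomes a message for that cell only.
function processCell<T>(
  spec: ColumnSpec<T>,
  raw: string | null
): { out?: string; messages: Message[] } {
  const messages: Message[] = [];
  try {
    const v = spec.compute(spec.cast(raw));
    messages.push(...spec.validate(v));
    return { out: spec.serialize(v), messages };
  } catch (e) {
    messages.push(`cell failed: ${String(e)}`);
    return { messages };
  }
}

// Example column spec a user might supply (hypothetical):
const ageColumn: ColumnSpec<number> = {
  cast: s => { if (s === null) throw new Error("missing"); return Number(s); },
  compute: n => Math.trunc(n),
  validate: n => (n >= 0 ? [] : ["age must be non-negative"]),
  serialize: n => n.toString(),
};
```

Users write the simple per-value functions; the framework owns the try/catch and the accumulation of messages, which is the "carefully thought out framework" part.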
In the JS/TypeScript world, neverthrow [1] may be the closest library for this. If anyone has had good experience with something else, please reply. I'm not sure neverthrow was the right choice after replacing all Errors in a 16k LOC engine library with it. It did add more "noise" to the code. Would a custom bespoke library add less noise? I'm not sure.
isn't this pretty much the Either monad except that it's more "linear" and less "wrapped"? (since we are not following the monadic bind but a bind that does continuation by case)
edit: I just saw author's post note. nice. I think this actually makes a good monad intro.
Hi HN! I know this is an inappropriate comment but I hope you have a good day today. Just feeling grateful to the universe. Hoping that anyone feeling down will catch a break, and everyone feeling excited will enjoy your weekend. Much love to all of you. (And if you ever need someone to vent to about random stuff, DMs are always open — please remember to prioritize yourselves above other life considerations once in awhile. It’s not selfish.)
Anyway, back to your regularly scheduled programming. Radio announcer voice: you’re listening to smooth jazz^W^Whacker news…
This is the kind of uplifted positivity of light and goodness that I like to see on the web.
I think society, television, films and books focus on the opposite of utopia and negative things and dark things. But they ignore all the blessings and positive things. LOVE, gratitude and kindness and light, and faith are what matter and what we should be focused on.
Why would you embrace something that is darkness when you could embrace goodness and light?
Your attention should be on good things 100% of the time. Reacting to a bad situation or something negative, in a good, positive way. Not a negative way.
I have lost count of how many times people have asked me, on my language-ext [1] issues pages, if the library supports "Railway Oriented Programming". The documentation is clear, it is an FP library with many monads, functors, etc.
Inventing new terminology for existing concepts just creates even more confusion.
Creating completely new terminology for existing concepts just because you don't like the words is utterly baffling. Learn them. If you're going to teach monads then by all means use railways as an analogy, but don't call it "Railway Oriented Programming", you are just misleading the reader and not helping them communicate with others in the FP community.
Did anyone know what 'polymorphism' was when they first encountered OOP? No, they learned what it meant. Learn what 'monad' means, learn what 'functor' means, learn what 'applicative' means. You'll be enlightened.
OOP people: not everything needs to be 'oriented' ;-)
Sure it's a metaphor and as I state, there's nothing wrong with using the railway analogy, but renaming 'monads' as 'Railway Oriented Programming' is having the effect that newbies to FP-land think that's what _programming-with-monads_ is _called_.
Then they try to have discussions with others in FP-land and realise they don't have the lexicon to talk about FP because they've been taught some spurious terminology. This is a monad tutorial, it is over 150 slides worth of explanation (probably the largest 'Yet Another Monad Tutorial' I've seen yet), there's no reason to leave the reader thinking they've learned some new concept called 'Railway Oriented Programming'.
When learning about functional programming, the metaphors did help. But then later it was hard to recognize them in other languages that used different metaphors for the same thing. (Maybe all language is metaphor?)
So if the F# Computation Expression were a little more expressly outlined as a 'Monad', and how it is one, maybe that would help. It just can't be all at once when learning.
Same for <Option> and error handling. There is the baby-step phase, then growing into details.
So this railway presentation might have been helped by a little cross-connecting of the 'simple helpful metaphors' with the 'technically correct words you'd find in a math book or something'.
Like a few slides at the beginning or end, with further reading or examples showing 'simple metaphor' = 'over-complicated word'.
This does happen with objects.
How many 'typical introduction' books spend a lot of time talking about Ducks and Dogs, Quacking and Barking, Is-A, Has-A, before they get into technical terminology?
People still talk about 'Duck Typing'. Not the technical word. Maybe that is bad also.
One series that I thought did this really well was the Eric Meijer (father of LINQ in C#) lectures on FP fundamentals [1]. It was very much for OO programmers to get up to speed with FP and it was taught using Haskell.
He goes through the series, building up a parser in Haskell, and then at the end (or a reasonable way through) he says "and that's a monad". Never once mentioning it until the watcher had grokked all the concepts up until that point.
That 'big reveal' idea I think is probably the best way to do a monad tutorial, because it forces the tutor to stick to the motivations and the fundamentals.
Specifically this is the error/result monad. This was the first description that made sense to me as a newbie to all of this. Monad was a scary word. This helped me understand.
if people cannot learn the concepts when presented with the monad and functor names, but somehow are able to understand them when given different names, it seems the issue is the names are bad
what is creating confusion is this need to sound like a pompous academic by sticking to old terminology that serves no purpose but to make their users sound enlightened and the subject hard to grok
the functional programming community could gain a lot from making its subject more accessible, not less
These bear no relationship to the real world usage of these words and some are straight out of mathematics/academia also.
Would you expect FP people landing in OOP land to rename all of these concepts just so FP people can understand OOP languages? Or would you expect them to learn the shared lexicon of OOP-land?
It isn't pompous to have a name for something and then expect people to learn those names if they want to learn the subject. Pomposity would be attaching "oriented programming" to the end of all concepts that you don't understand and telling the world to use your new terminology whilst trampling over the existing community.
Clearly grokking monads is hard. But it's nothing to do with the terminology. This presentation has over 150 slides - so even when it's called 'Railway Oriented Programming' it takes 150 slides worth of exposition to get the point across. I'd argue that the reader/listener would be in the same place if they'd have used railways as an analogy whilst telling the reader that they're learning their first monad.
'Monad' may well be an awkward word but it's clearly the mental model of the monad that's the problem. I suspect (as someone who's taught how monads work many times) that part of the problem is that once we learn how monads work, we realise they're unbelievably trivially simple. And at that moment we instantly lose the ability to explain it to somebody else because of the 'mental perspective switch' that's just happened in our brains.
So we end up with a 1000 'Yet Another Monad' tutorials. This may well be the best one. But it shouldn't mean you get to change the shared lexicon.
When I was trying to learn monads no one ever gave me an example of mapping a list-returning function to a list. Everyone went on about 'effects', monad laws, etc. For some of us it's best to work backwards from something concrete and then show why the laws are important.
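The concrete example asked for above fits in a few lines: mapping a list-returning function over a list gives nested lists, and flattening them is exactly what `Array.prototype.flatMap` (the list monad's bind) does.

```typescript
// A list-returning function:
const words = (s: string): string[] => s.split(" ");

const lines = ["hello world", "foo bar"];

// Plain map gives a nested list...
const mapped = lines.map(words);    // [["hello", "world"], ["foo", "bar"]]

// ...bind (flatMap) maps AND flattens, staying in "list land".
const bound = lines.flatMap(words); // ["hello", "world", "foo", "bar"]
```

The monad laws then just pin down that this map-and-flatten behaves sensibly (flattening a singleton is a no-op, nesting order doesn't matter), which is easier to appreciate once you've seen the concrete case.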
I love how simple most things are in FP compared to OO. I also hate how poorly FP concepts are explained.
I'm not saying you're wrong about people learning terminology. However, you definitely get more strange looks from FP terms than OO terms. Class, interface and object are common terms. When I mention a word like 'monad', 'monoid', 'magma', or 'functor' people look at me like I'm nuts. It's not logical. A new word is a new word. It's just FP words sound almost alien and trigger extra confusion in people.
Hot take: some of those names are bad too (e.g. 'polymorphism' is very vague), and OOP would be easier to learn if there was a beginner's vocabulary.
Yes, "dynamic structural typing" is a coherent and logical name when you're familiar with type theory.
But a learner can understand "duck typing" much faster. The jargon can come later.
> 'Monad' may well be an awkward word but it's clearly the mental model of the monad that's the problem. I suspect (as someone who's taught how monads work many times) that part of the problem is that once we learn how monads work, we realise they're unbelievably trivially simple.
I'm on team "join is way easier to understand than bind".
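For lists the join-vs-bind relationship is one line each: join is just flattening one level, and bind is "map, then join", which is arguably the easier way round to learn it.

```typescript
// join: flatten one level of nesting.
const join = <T>(xss: T[][]): T[] => xss.flat();

// bind (flatMap) defined in terms of map + join.
const bind = <T, U>(xs: T[], f: (t: T) => U[]): U[] => join(xs.map(f));

const doubled = bind([1, 2, 3], x => [x, x]); // [1, 1, 2, 2, 3, 3]
```

The same decomposition works for any monad: once you understand what join means for that type (flattening an `Option<Option<T>>`, merging nested Results), bind comes for free.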
none of it matters if you hope more people use and understand functional programming
you overestimate how much people actually understand oop vs just winging it, in particular, inheritance and polymorphism
but back to fp, if monads are, and i'm growing to agree with you this, "unbelievably trivially simple," how can it be that "grokking monads is hard"?
i posit that it is only "hard" because when learning about these concepts, we read shit like "a monad is a monoid in the category of endofunctors" - that's an exaggeration but one that clearly shows how absurd some fp texts sound to outsiders
edit: i get worked up with folks that write about fp the way you argue it must be written. for the few concepts i have been able to grasp i have been thoroughly impressed, but i am being impeded from learning more, or faster, because of the hard-to-parse lingo
> none of it matters if you hope more people use and understand functional programming
On a personal level, I couldn't care less tbh. If people want to learn it, that's great, if not that's also fine. There's plenty of room for procedural, OOP, and FP to exist side-by-side. If one domain feels complex to you, don't do it (or put extra effort into learning it), but I'd argue that OOP is much more complicated - it's just that most devs tend to grow up with OOP and so the context switch to FP is more difficult.
> you overestimate how much people actually understand oop vs just winging it, in particular, inheritance and polymorphism
Forget 'polymorphism' for a second and think about 'object'. An object in the real world doesn't have behaviours attached to it. An object in the real world doesn't mutate, in-place, in a discrete portion of time. Objects in the real world have an immutable past, a present state, and multiple possible futures based on interactions with external events. Those interactions are literally how we define time. OOP (as it's commonly practiced) does away with time and has myriad complexities because of it.
The trivial OOP explanation "an object is like a thing in the real world. A rabbit is an object, a triangle is an object, etc." combined with the unbelievable complexity artefacts (in place mutation, hidden state, attached behaviours, etc.) - is much, much worse than the upfront cost of learning about: pure functions, monadic composition, etc. (IMHO of course).
My argument would be that the so-called simplicity of OOP terms and the alleged simplicity of learning them actually comes with a ton of baggage. The upfront effort with monads or any of the other FP concepts at least is rewarded with code that's more 'honest', robust, and can properly model time.
> i posit that it is only "hard" because when learning about these concepts, we read shit like "a monad is a monoid in the category of endofunctors" - that's an exaggeration but one that clearly shows how absurd some fp texts sound to outsiders
There's probably an element of that. There's definitely two camps in FP. Those that come at it from an academic standpoint and they think about the abstraction and what it means for composition and the like. Then there's the jobbing FP peeps - who actually use it in real world code - they might think in terms of concrete monadic implementations, like List, Option, Either, etc. but also larger more domain specific monads: like a FrontEnd monad, or a DataLayer monad.
To fully grok monads (or at least to get the most out of them) you kinda need to know both and why they can be useful to you. So, maybe there's an element of that. Because a monad can literally encapsulate any behaviour you want it's sometimes quite hard to talk about their possibilities without going into the theory.
But I think there's another aspect. In languages that have first-class support for monads ('do' notation in Haskell, LINQ in C#, Computation Expressions in F#, etc.), they don't actually work like any other paradigm in programming. The idea that something runs 'in between the lines' of your code, and that the 'thing' that runs is the flavour of monad you're in, is just different from, say, a visitor pattern or an adapter, which have explicit invocations.
So yeah, I think it's just one of those things that takes a bit of time to get in your head. But once you do the possibilities are enormous, you give whole sections of code a 'flavour' rather than it being an explicit invocation of a behaviour. It adds a completely new tool to your programming toolkit: one that's unlike any other.
We see this turning up in things like async/await. Most people understand that once you have some awaitable code, everything around it becomes async 'flavoured'. That's what happens with all monads. An Option monad will make the whole code block optional, a List monad will iterate a list for every line of code - and the result is a List, a domain-specific monad might carry the configuration of the application, or a database connection string, so you don't have to do it manually, etc. The monad is simply the encapsulation of the flavour.
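The Option "flavour" can be sketched with a toy `andThen` (the lookup functions here are hypothetical): every step in the chain becomes optional, and one missing value short-circuits the rest, just like `await` makes the surrounding code async-flavoured.

```typescript
// Toy Option: null means "missing" (assumes T itself is never null).
type Option<T> = T | null;

// andThen runs "in between the lines": it only calls f on the Some track.
function andThen<T, U>(o: Option<T>, f: (t: T) => Option<U>): Option<U> {
  return o === null ? null : f(o);
}

// Hypothetical lookups that may fail:
const findUser = (id: number): Option<string> => (id === 1 ? "alice" : null);
const findEmail = (name: string): Option<string> => (name === "alice" ? "a@example.com" : null);

const email = andThen(findUser(1), findEmail); // "a@example.com"
const none = andThen(findUser(2), findEmail);  // null -- findEmail never runs
```

The business steps (`findUser`, `findEmail`) know nothing about missing values; the optional flavour lives entirely in `andThen`, which is the point being made above.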
The phrase "a monad is a monoid in the category of endofunctors" is more of a joke than anything else, from 'A Brief, Incomplete, and Mostly Wrong History of Programming Languages' [1]. It may actually be true; if you want to see how that phrase comes about it's worth watching Bartosz Milewski's Category Theory series [2], he gets to it about 10 episodes in, but you need to watch them all to understand it.
However most FP programmers wouldn't know what that phrase means. It's absolutely not required to learn CT to know how monads work, or even what makes the abstraction so powerful. The same with other terms inherited from CT, like 'functor', 'monoid', 'polymorphism' (!!!), etc. The programming language version of these things are not the same as the maths versions - although they're clearly inspired by the maths.
Having said all that, I was surprised at how simple CT was when I started learning it. I am certainly no CT expert, but there's literally 3 or 4 rules to learn and you're done (in terms of what's useful for programmers). There's lots of higher maths stuff that's coming out of it that's mostly irrelevant for programming, but one thing I got from it was a new way to think about structure.
It's like a 10,000 foot view of the schema of an application where you stop thinking about the data and you start thinking about the relationships between types. This is a powerful tool when trying to get a handle on the complexity of a system.
You can't write code in CT, but, just like with monads, knowing it and knowing some of the theory behind it, gives you some programming superpowers (I know that sounds grandiose, I just couldn't think of a better description).
At the end of the day everything in FP is about function composition. Monads are functions, functors are functions, monoids are functions. The name attached just describes the shape of the functions needed for the composition to work.
Understanding why those shapes are useful is learning your craft.
I think it's quite common for pragmatic patterns to have multiple names and parallel theoretical frameworks within which to understand them.
"A monad" is just a name to the applied pattern. There is nothing called "monad" in C++ for example, but you can write expressions in C++ that are isomorphic with monads. That does not mean that "monad" is the only correct name for the pattern, given people are familiar with different names and theoretical frameworks for the given pattern.
The fact that the pattern is recognized as a 'monad' is really useful of course, since that implies all sorts of other things that may turn out to be useful. But it does not mean IMO it's wrong to call it by some other popular name.
The analogue in mathematics is rotation of the unit vector in a plane. We can represent it as a pair (cos(alpha), sin(alpha)), a 2x2 matrix (m00, m01, m10, m11), or even a complex number e^(i alpha).
"I'm rotating a vector" ... "no no, you are performing a matrix multiplication" ... "don't be silly, it's a complex number"...
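The three representations really do agree numerically; here is a quick check rotating the vector (1, 0) by an angle each way:

```typescript
// Rotate (1, 0) by angle a, three equivalent ways.
const a = Math.PI / 6;
const c = Math.cos(a);
const s = Math.sin(a);

// 1. As the (cos a, sin a) pair: the rotated unit vector directly.
const pair = [c, s];

// 2. As the 2x2 matrix [[c, -s], [s, c]] applied to (1, 0).
const matrix = [c * 1 + -s * 0, s * 1 + c * 0];

// 3. As complex multiplication (c + i*s) * (1 + 0i).
const complex = [c * 1 - s * 0, c * 0 + s * 1];
```

Same rotation, three vocabularies — which is exactly the situation with "railway oriented programming" and "monad".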