I thought this was one of the notably bad talks this year. The whole premise that a function of Maybe a should be a function of a without an API change is neither intuitive to me nor really justified by Hickey. Different things are different. It's sad to see someone build such a wall around himself when faced with something (type theory) that he doesn't understand.
>that a function of Maybe a should be a function of a without an API change is neither intuitive to me nor really justified by Hickey
He spent many minutes motivating it. If you provide some functionality to clients (as a library, or a service, or more abstractly), and then later want to relax the constraints that you require from those clients, this should be an acceptable change that they shouldn't have to worry about / be broken by. Like if all of a sudden some REST API no longer requires a specific HTTP header to be present in requests, then this change shouldn't break clients that have been including that formerly-required header all along.
Similarly, if you provide something to clients, and you add functionality to provide a response in all circumstances, not just some, then this should not break clients either.
This clearly is not true of `Maybe` / `Option[T]` and I think it's a pretty fair critique. Maybe we should be using Union types in these situations instead.
His argument for spurious API breakage is strictly logically correct, but seems practically dubious to me. When have you ever had to unnecessarily break API compatibility because something you thought was an Option[T] result turned out to be really a T, always? Maybe I'm wrong and someone will post some convincing examples, but I currently don't see this as a problem that needs solving.
Union Types don't compose straightforwardly because they flatten; maybe this is desirable in some cases but the rationale in Hickey's talk leaves me unconvinced. The only practical use case for union types I'm aware of is smoother interfacing to dynamically typed APIs; any nice examples for something beyond that?
> When have you ever had to unnecessarily break API compatibility because something you thought was an Option[T] result turned out to be really a T, always?
That would be a breaking change. And should be, if you're into that sort of thing.
The objection is to the opposite case: What was a T is now an Option[T]. I don't know Scala specifically, but that's a breaking change in every typechecked language I know. Rich is arguing that it shouldn't be. But it could be possible even in typed languages through union types. For example, you can do this in TypeScript by changing T to T | undefined, which is a superset of T.
Nope, it's not the opposite case, I was just too lazy to spell it out. Which way around it is depends on whether it's a return value or a parameter. Covariant vs contravariant. If it's a parameter, an API change from T to Option[T] shouldn't break (you require less), whereas with a return type it's from Option[T] to T (you promise more).
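For concreteness, the variance point above can be sketched in TypeScript (all function names here are hypothetical):

```typescript
// Parameter position (contravariant): relaxing string to
// string | undefined requires LESS from callers, so every existing
// call site keeps compiling.
function greetV1(name: string): string {
  return `Hello, ${name}`;
}
function greetV2(name: string | undefined): string {
  return name === undefined ? "Hello, stranger" : `Hello, ${name}`;
}

// Return position (covariant): narrowing string | undefined to string
// PROMISES MORE, so callers that were prepared for undefined still
// work; their undefined branch merely becomes dead code.
function findV1(id: number): string | undefined {
  return id === 1 ? "alice" : undefined;
}
function findV2(id: number): string {
  return id === 1 ? "alice" : "unknown";
}
```

Every call that typechecked against greetV1/findV1 still typechecks against greetV2/findV2; the reverse replacements would be breaking changes.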
To be fair, the "result" part in "something you thought was an Option[T] result turned out to be really a T" made it sound to me as well like you were speaking of the return type. I appreciate the elaboration though!
The sad thing is that Rich Hickey had some very good videos when Clojure was a new thing back in 2008–2009. Unfortunately, I've disagreed vehemently with most of his talks since then. In this case, it's completely illogical that a function `Maybe a -> b` should be callable as if it were a function `a -> b`. Do you want to know how I know? Because it would be just as illogical to allow a function `Vec a -> b` to be called as `a -> b`. And Rich must agree because Clojure itself does not support that!
I've learned that videos of his talks are just not worth my time.
Why is it illogical to say that a Maybe a -> b should be callable as if it were a -> b?
His point is that Maybe a should be composed of all the values of a, plus one more value, nil. A value of type a is a member of the set (nil + a). Why should having a more specific value reduce the things you can do with it? It breaks composition, fundamentally. It's like saying (+) works on integers, but not on 3. I'm saying this as someone who really enjoys type systems, including Haskell's.
> His point is that Maybe a should be composed of all the values of a, plus one more value, nil
No, that's a simple union type. There are very good reasons for Maybe to be different than unions (Maybe can nest meaningfully, simple unions can't.)
Maybe (Maybe a) is a valid, and often useful, type.
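The nesting point can be sketched in TypeScript with a hypothetical tagged Maybe type (a plain `T | undefined` union can't express this, because the two layers of absence collapse into one):

```typescript
// Hypothetical tagged Maybe, as opposed to the plain union T | undefined.
type Maybe<T> = { tag: "just"; value: T } | { tag: "nothing" };
const just = <T>(value: T): Maybe<T> => ({ tag: "just", value });
const nothing: { tag: "nothing" } = { tag: "nothing" };

// A store whose values may themselves be absent. A lookup yields
// Maybe<Maybe<number>>, distinguishing "key missing" (nothing) from
// "key present but value cleared" (just(nothing)).
const store = new Map<string, Maybe<number>>([
  ["a", just(1)],
  ["b", nothing],
]);

function lookup(key: string): Maybe<Maybe<number>> {
  const v = store.get(key);
  return v === undefined ? nothing : just(v);
}
```

With `number | undefined | undefined` the type flattens to `number | undefined`, and a missing key becomes indistinguishable from a key whose value was cleared.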
Of course, if you have a function of type a -> b and find out you need a more general Maybe a -> b, instead of a breaking change, you just write a wrapper function that produces the correct result for Nothing and delegates to the existing function for Some(a) and you're done without breaking existing clients.
(Now, I suppose, if you had something like Scala implicits available, having an implicit a -> Maybe a conversion might sometimes be useful, though it does make code less clear.)
I agree that there are reasons for Maybe a to be a different type from (a | nil) but there are also good reasons to prefer (a | nil). Like most things, it's a set of tradeoffs. What I appreciated about this talk was that he went into the benefits of thinking about types in this way. It's (relatively) common to see the benefits of Maybe a explained, but more rare to see the benefits of full union types explained.
My problem is that I don't know when to expect nil from a call, because in Java null is part of every type and you can happily receive a null from anything; the compiler won't give you a warning. In OCaml I know what to expect, because `Some x | None` is super simple to reason about: I can never receive a null, a nil, or other things that somehow satisfy the type requirements. Clojure is great for untyped programming (everything is an Object, after all), but I would still like to see something reasonable like Erlang's `{ok, X} | {error, Error}` or OCaml's `Some x | None`. It is not an accident that many languages that care about reliability implemented it like that.
Yes, the default of a lot of languages, (Java, C, etc) where nil is implicitly a member of every other type is a bad default. But that's a separate question.
> Why is it illogical to say that a Maybe a -> b should be callable as if it were a -> b?
Fundamentally because it would require you to conjure up a value of type b from nowhere when the Maybe a is Nothing. If we view the function type as implication this would not be valid logically without some way of introducing that value of type b.
You could imagine some function from Nothing -> b that could rescue us. But since it only handles one case of the Maybe type, it is partial (meaning it could give undefined as an answer). There are basically two total functions that we could change it to:
* Maybe a -> b in which case we are back where we started.
* Unit -> b which essentially is just b, which can be summed up as meaning we need some kind of default value to be available at all times.
So to be able to call Maybe a -> b as a -> b, you would need some default value available at all the call sites of a -> b.
Now this is only "illogical" because we don't postulate a value of type b to be used as this default.
> It's like saying (+) works on integers, but not on 3
No, it's like saying (+) must work on all integers AND a special value nil that is not like any other integer, but is somehow included in them and all other data types. We can't do anything with this nil value since it doesn't carry any data, so in the case of (+) we would essentially have to treat it as an identity element.
This is good though, since (+) has 0 as an identity element, so we can just treat nil as a 0 when we encounter (+). However, when we want to define multiplication we still need to treat nil as an identity element (since it still doesn't carry any data), except the identity element for multiplication is 1. This would be repeated for every new function that deals with integers.
So by mashing together Maybe and Integer we have managed to get a Frankenstein data type with an extra element nil which sometimes means 0 and sometimes means 1.
Why not just decompose them into Maybe and Integer and supply the default argument with a simple conversion function like fromMaybe?
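A minimal sketch of that decomposition (hypothetical helper names, in TypeScript with `T | undefined` standing in for Maybe):

```typescript
// fromMaybe-style helper: collapse the absent case to a caller-chosen
// default, instead of baking one nil into the integer type itself.
function orElse(x: number | undefined, dflt: number): number {
  return x === undefined ? dflt : x;
}

// Each operation supplies its own identity element explicitly.
const addM = (x: number | undefined, y: number): number => orElse(x, 0) + y; // nil acts as 0
const mulM = (x: number | undefined, y: number): number => orElse(x, 1) * y; // nil acts as 1
```

The defaulting stays at the call boundary, so the integer type itself never has to answer the question of what nil "means".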
(FWIW, I actually agree with Hickey that using Maybe in api design is problematic and I've encountered what he's talking about. But while that might be an argument for where he wants to take Clojure, it's not an argument for dismissing type theory the way he does.)
Yeah, you're right, I got confused when interpreting the parent comment. Thanks for pointing it out!
I guess I overlooked it because the other way is so logically trivial, since it basically boils down to A -> B => (A || Nothing) -> B, which is just an or-introduction. So if you want to implement Maybe generically, the "work" lies on the other side.
But since Hickey's argument sort of is that we shouldn't implement Maybe generically, I guess my argument here becomes circular. (Begging the question maybe?)
> I guess I overlooked it because the other way is so logically trivial, since it basically boils down
Yeah, that's (part of) Hickey's point. That the "best" type systems fail this test, and require manual programmer work to solve this problem. Again, I'm saying this as someone who really appreciates Haskell.
I think Rich does not like `Some x | None` because he is not too fond of simple pattern matching. This is why Clojure does not have first-class pattern-matching syntax (you can emulate it, and there is a library, and it is just X amount of lines, etc., but still).
In this regard I really like OCaml:
    let get_something = function
      | Some x -> x
      | None -> raise (Invalid_argument "Option.get")
This is very simple to understand and reason about, and very readable. I could not say the same about the examples Rich was trying to make in the video. He kind of lost me with transducers and the fact that Clojure's interop with Java 8 features is rather poor.
I was surprised that it's a separate library in Clojure and doesn't seem to be something that gets used much. It puts me off that Clojure is missing one of the most attractive features of functional languages.
> Because it would be just as illogical to allow a function `Vec a -> b` to be called as `a -> b`. And Rich must agree because Clojure itself does not support that!
Maybe Clojure's standard library just isn't that focused on vectors? Python stdlib doesn't support this as well, but NumPy does.
Further, his exposition on `Either a b` was built on a lack of understanding of bifunctors.
The icing on the cake was his description of his own planned type theory. What he described was, as far as I could decipher from his completely ignorant ravings, a row-polymorphic type system. However, he passes off his insights as novel rather than acknowledging (or leveraging) the decades of research that have gone into type theory to describe the very system he is trying to build.
Worse, he continued to implore his audience to spread FUD about type theory, claiming several times, "it is wrong!"
Well, in his defense, type theory is supposed to make software engineering simpler, not more difficult. So if even he doesn't understand it, then how can we expect a random programmer to?
Well, software engineering is the only engineering field that continuously produces half-baked, unreliable, crappy solutions. Other fields cannot afford such an attitude towards their products. I think a simple (inferred) type system is pretty useful for increasing correctness, and it helps you increase reliability (even though it does not prevent all of the mistakes you can make in software).
Rich has taken the time to explain his thoughts very carefully, and is clear about what his experience is and which domains he is talking about. Whereas I see a lot of vague objections without specifics that aren't backed up by experience.
You're right: there isn't much substance in my comment. I just meant to qualify the parent comment by saying that not everyone found this to be a "Best talk of 2018". A lot of good arguments for both sides were made in other places and it felt a bit obnoxious to reopen the argument here, where it's off topic. I should have simply stated that this is a controversial talk instead of adding my two cents.
Here's my understanding of one of your points: required fields in a data serialization format place an onerous burden on consumers. So in proto3, every field is optional, and this permits each consumer to define what's required for its own context.
Unfortunately, I can't find any connection between the dilemma at Google and the suitability of the Maybe type. You say this:
>The issue is that the shape/schema of data is an idea that can be reused across multiple contexts, while optional/required is context-specific. They are two separate things conflated by type systems when you use constructs like Maybe.
I agree - the value of a field might be a hard dependency for one consumer and irrelevant to a second consumer. But Maybe has nothing to do with this. If the next version of protobuf adds a Maybe type, it would not obligate consumers to require fields that they treat as optional. It would just be another way to encode the optionality - not optionality as a dependency, but optionality of existence. A required input could still be encoded as a Maybe because the system can't guarantee its existence. So Maybe is simply an encoding for a value that isn't guaranteed to exist. And that's exactly the scenario you described in proto3 - now every field could be encoded as a Maybe.
A second point that stuck out to me:
>I didn’t understand “the map is not the territory” until I had been programming for awhile. Type systems are a map of runtime behavior. They are useful up to that point. Runtime behavior is the territory; it’s how you make something happen in the world, and it’s what you ultimately care about. A lot of the arguments I see below in this thread seemingly forget that.
Your worldview here is very different from my own, and perhaps while this difference exists, there won't be much mutual understanding. I don't find any relationship between types and anything I understand as "runtime behavior". Types are logical propositions. The relationship between programs and types is that programs are proofs of those propositions. Runtime does not enter into the picture. That's why constraint solvers work without running the program.
If that was your intention, simply saying "I didn't find this talk useful" would suffice. It's not necessary to say that "Rich Hickey doesn't understand type theory".
I would say that "X doesn't understand type theory" is becoming a common form of "middlebrow dismissal" [1], which is discouraged on HN.
> And that's exactly the scenario you described in proto3 - now every field could be encoded as a Maybe.
No, in protobufs, the presence of fields is checked at runtime, not compile time. So it's closer to a Clojure map (where every field is optional) than a Haskell Maybe.
This is true even though Google is using statically typed languages (C++ and Java). It would be true even if Google were using OCaml or Haskell, because you can't recompile and deploy every binary in your service at once (think dozens or even hundreds of different server binaries/batch jobs, some of which haven't been redeployed in 6-18 months.) This is an extreme case of what Hickey is talking about, but it demonstrates its truth.
> I don't find any relationship between types and anything I understand as "runtime behavior".
Look up the concept of "type erasure", which occurs in many languages, including Haskell as I understand it. Or to be more concrete, compare how C++ does downcasting with how Java does it. Or look at how Java implements generics / parameterized types.
I haven't seen any of Rich's contributions to any major open source Haskell project. I can't really speak to his experiences with Haskell in proprietary code bases. Has he been a prolific Haskell contributor/hacker/user?
From his talk he hasn't convinced me that he understands Haskell's type system. Not only does he misunderstand the contract given by `Maybe a`, but he conflates `Either a b` with logical disjunction, which is definitely a false pretense. He builds the rest of his specious talk on these ideas and declares, "it [type theory] is wrong!"
He goes on to describe, in a haphazard and ignorant way, half of a type theory. As I understood it, these additions to "spec" are basically a row-polymorphic type system. Why does he refuse to acknowledge or leverage the decades of research on these type systems? Is he a language designer or a hack?
I can't even tell to be honest. He has some good ideas but I think this was one of his worst talks.
I think this comment is a bit too harsh, and I would rather we just discuss the points he makes, not his credibility. The man has decades of experience in software system development and architecture, and has built one of the most popular programming languages in the world, as well as Datomic. If that hasn't given him the right to give his opinion during the keynote of a conference for the language he built, then I don't know how much you want from him.
To apply your own standards, have you been a prolific Clojure contributor/hacker/user? Have you contributed to any major open source Clojure projects?
Can you actually say what about Maybe/Either he got wrong because it seems like he understands it perfectly well, speaking as a Scala developer.
> I think this comment is a bit too harsh, and I would rather we just discuss the points he makes, not his credibility.
I was trying to address the points he made but parent appealed to his authority which I haven't found convincing.
> To apply your own standards, have you been a prolific Clojure contributor/hacker/user? Have you contributed to any major open source Clojure projects?
I have been a user on a commercial project. It's a fine enough language. I wouldn't call myself an expert. And I haven't given a keynote address where I call out Clojure for getting things I don't understand completely wrong.
> Can you actually say what about Maybe/Either he got wrong because it seems like he understands it perfectly well, speaking as a Scala developer.
His point about Maybe was misguided at best. The function `a -> Maybe a` is a different function than `a -> a`. Despite his intuition that the latter provides a stronger guarantee and shouldn't break code, callers may be expecting the Functor instance that the `Maybe` provides, and removing it is therefore a breaking change.
His point about `Either a b` was perhaps further from the mark. It is not a data type that represents logical disjunction; that's what the logical connective, disjunction, is for. Haskell doesn't have union types to my knowledge. Either is not a connective; it's a Bifunctor. His point that it's not "associative" or "commutative" or what-have-you simply doesn't make sense. In fact he calls Either "malarkey" or, charitably, a "misnomer."
To his credit he says he's not bashing on type systems, only Maybe and Either. But he later contradicts himself in his conclusions. He goes on about reverse and how "the categoric definition of that is almost information free." And then, "you almost always want your return types to be dependent on their argument..." So basically, dependent types? But again, "you wouldn't need some icky category language to talk about that."
So again, I think he has some surface knowledge of type systems but I don't think he understands type systems. I'm only a beginner at this stuff and his errors were difficult to digest. I think if he wanted to put down Maybe and Either he should've come with a loaded weapon.
He's had better talks to be sure! I just don't think this one was very good. And in my estimation was one of the poorer talks this year.
>His point about `Either a b` was perhaps further from the mark. It is not a data type that represents logical disjunction; that's what the logical connective, disjunction, is for. Haskell doesn't have union types to my knowledge. Either is not a connective; it's a Bifunctor. His point that it's not "associative" or "commutative" or what-have-you simply doesn't make sense. In fact he calls Either "malarkey" or, charitably, a "misnomer."
I don't agree with Hickey, but isn't there a connection between Either as a basic sum type and logical disjunction via the curry-howard correspondence?
And wouldn't "forall a b. Either a b" be the bifunctor, since it has a product type/product category as its domain, while the result "Either X Y" (where X and Y are concrete types, not type variables) has the semantics of logical disjunction, i.e. it represents a type that is of type X or type Y?
Yes, there is a connection. Which makes it all the more strange: Either does correspond as you say, which means it has the same properties as the connective. In the correspondence, the function arrow is implication, and a pair of functions `a -> b` and `b -> a` is logical equivalence. Using these we can trivially demonstrate that Either satisfies the associativity laws up to isomorphism.
That’s what makes his talk strange. He talks about types as sets and seems to expect Either to correspond to set union? If he understood type theory then he’d understand that we use isomorphism and not equality.
You can express type equality in set theory and that is useful and makes sense.
But it doesn’t make sense in his argument.
Malarkey? Come on. Doesn't have associativity? Weird.
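The isomorphism in question is easy to exhibit; here is a sketch with a hypothetical tagged Either in TypeScript:

```typescript
// Hypothetical tagged Either, to show "associative up to isomorphism".
type Either<A, B> = { tag: "left"; value: A } | { tag: "right"; value: B };
const left = <A>(value: A): { tag: "left"; value: A } => ({ tag: "left", value });
const right = <B>(value: B): { tag: "right"; value: B } => ({ tag: "right", value });

// Either<A, Either<B, C>>  is isomorphic to  Either<Either<A, B>, C>
function assoc<A, B, C>(e: Either<A, Either<B, C>>): Either<Either<A, B>, C> {
  if (e.tag === "left") return left(left(e.value));
  if (e.value.tag === "left") return left(right(e.value.value));
  return right(e.value.value);
}

function unassoc<A, B, C>(e: Either<Either<A, B>, C>): Either<A, Either<B, C>> {
  if (e.tag === "right") return right(right(e.value));
  if (e.value.tag === "left") return left(e.value.value);
  return right(left(e.value.value));
}
```

assoc and unassoc are mutually inverse, which is exactly the sense in which sum types satisfy the associativity law: isomorphism, not equality of types.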
> The function `a -> Maybe a` is a different function than `a -> a`. Despite his intuition that the latter provides a stronger guarantee and shouldn't break code, callers may be expecting the Functor instance that the `Maybe` provides and therefore is a breaking change.
I don't really follow that. How can it be a breaking change? Can you give an example?
Where your downstream parsers match on `Nothing` and assume the stream hasn't been consumed in order to try an alternative parser or provide a default.
If you change an equation to use `option` instead you have a completely different parser with different semantics.
I was thinking of a case where I use your function in a combinator that depends on the Functor and Monoid instances provided by the `Maybe` type. If you change your function to return only the `a` and it doesn't provide those instances then you've changed the contract and have broken my code. And I suspect it should be easy to prove the equations are not equivalent.
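A hypothetical sketch of that kind of breakage, with `T | undefined` standing in for Maybe:

```typescript
// v1: the parser signals "didn't match" with undefined, and callers
// branch on that to try an alternative or supply a default.
function parseHeaderV1(line: string): string | undefined {
  return line.startsWith("H:") ? line.slice(2) : undefined;
}

// A caller that depends on the absent case being observable:
function headerOrDefault(line: string): string {
  const h = parseHeaderV1(line);
  return h === undefined ? "default" : h;
}
```

If the signature later becomes `(line: string) => string`, the undefined branch above is dead code at best, and any combinator that consumed the old `string | undefined` result has to be revisited; the "stronger" return type is still a change of contract.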
But protos and Haskell's type system solve different problems.
Perhaps, maybe, public apis like protos shouldn't encode requiredness of any piece of data (I actually fully agree with this).
But that says nothing about whether or not my private/internal method to act on a proto should care.
Another way of putting this is that requiredness shouldn't be defined across clients (as in a proto), but defining it within a context makes a lot of sense, and Maybe/optional constructs can do that.
Or, in other words, other people may use your schema in interesting and unexpected ways. Don't be overly controlling. Your code, on the other hand, is yours, and controlling it makes more sense. So the arguments that rely on proto stuff don't really convince me.
(I work at Google with protos that still have required members).
> But protos and Haskell's type system solve different problems.
That's pretty much my point. Hickey is very clear what domains he's talking about, which are similar to the domains that protos are used in -- long-lived, "open world" information systems with many pieces of code operating on the same data.
People saying "Rich Hickey doesn't understand type systems" are missing the point. He's making a specific argument backed up by experience, and they are making a vague one. I don't want to mischaracterize anyone, but often I see nothing more than "always use the strictest types possible because it catches bugs", which is naive.
I agree with your statement about private/internal methods. I would also say that is the "easy" problem. You can change those types whenever you want, so you don't really have to worry about modelling it incorrectly. What Hickey is talking about is situations where you're interfacing with systems you don't control, and you can't upgrade everything at once.