
Dependent types are the big win for me as well, even though I'm pretty inexperienced with both Haskell and Idris. I've heard LiquidHaskell thrown around when people talk about dependent types; have you looked into that before?



As far as I'm aware, Liquid Types focus on very simple use cases, like length-indexed Vector types and such.

I don't think it can manage "full" dependent types, for example situations I often find myself in where I want to calculate a type from incoming data, e.g. something like:

    parseJSON : String -> Maybe JSON
    parseJSON = ...

    parseMyData : JSON -> Maybe MyData
    parseMyData = ...

    HasSomeProperty : MyData -> Type
    HasSomeProperty x = case x of
      MyCase1 -> ...
      ...

    someProcessing : (x : MyData) -> HasSomeProperty x -> ResultOfProcessing x
    someProcessing x p = ...
Often, we write "HasSomeProperty" such that it performs some arbitrary calculation to decide between returning the unit type ("does have this property") or returning the empty type ("doesn't have the property, or can't tell"). Whenever we need to provide a value of type "HasSomeProperty x", we just put a unit value and the type-checker will enforce that our arbitrary calculation resulted in a unit type. We can then use that property to access a bunch of functionality which the compiler would otherwise forbid us from using.
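
As a concrete (if toy) sketch of that pattern, here's a hypothetical "safeHead" in Idris (not from the example above): the property is computed from the value, reducing to the unit type when it holds and to Void when it doesn't:

    -- The "property" is just a computation on the value.
    IsNonEmpty : List a -> Type
    IsNonEmpty []       = Void
    IsNonEmpty (_ :: _) = ()

    -- Callers just pass (); the type checker only accepts it when
    -- IsNonEmpty xs actually reduces to ().
    safeHead : (xs : List a) -> IsNonEmpty xs -> a
    safeHead (x :: _) _ = x
    safeHead []       p = absurd p

So `safeHead [1,2,3] ()` type-checks, while `safeHead [] ()` is rejected at compile time, since `IsNonEmpty []` reduces to Void and () isn't a value of that type.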

This sort of code is trivial in a dynamically typed language: there's no requirement to prove anything, so we don't need any "HasSomeProperty x" values at all: we just go ahead and use whatever code we like, and hope that we didn't mess something up.

We hit problems when we want to write our program in a language like Haskell with a (relatively) weak type system: the type system quite rightly forbids us from writing the unsafe version that dynamic languages would accept, but the language isn't expressive enough for us to (easily) encode the proofs required to convince the type checker that what we're doing is safe.


> This sort of code is trivial in a dynamically typed language: there's no requirement to prove anything ... we just go ahead and use whatever code we like, and hope that we didn't mess something up.

> the [Haskell] type system quite rightly forbids us from writing the unsafe version that dynamic languages would accept

As somebody familiar with Haskell in the small but not in large applications: isn't it similarly trivial to write Haskell code with the same level of safety that dynamic languages give you (i.e. an explosion at runtime)?

Consider as an example the following safe lookup function (standard Haskell):

  lookup :: Eq a => a -> [(a, b)] -> Maybe b
  lookup _ [] = Nothing
  lookup x ((a, b):abs)
    | a == x = Just b
    | otherwise = lookup x abs
and the following unsafe lookup function:

  lookup :: Eq a => a -> [(a, b)] -> b
  lookup x ((a, b):abs)
    | a == x = b
    | otherwise = lookup x abs
How is the latter any worse than lookup in a dynamically-typed language where you are assuming the lookup will succeed (and hence don't write any error handling logic)? I know this sort of code is frowned upon in Haskell, but it can be done.


> How is the latter any worse than lookup in a dynamically-typed language

Presumably you mean "better", not "worse". Well, it's better because you will (or should) get a compiler warning (or ideally an error) about the latter version.


I agree it's better: you will get a compiler warning about a partial function. I did actually mean "worse", though. The GP, at least to me, implied that Haskell (for cases like this) sits in the lowlands between Idris (and other dependently-typed languages) and dynamically-typed languages. Idris deals with the problem "the right way", by helping you handle dynamic data in a compile-time-guaranteed safe way, at the cost of some hassle in proving that it's safe. Dynamically-typed languages deal with the problem "the practical way", by letting you pretend it's safe so you don't have to go through that hassle, at the cost of runtime safety. I think Haskell can also let you do this; it just feels wrong to do it as a Haskell programmer.


You're absolutely right that various hacks can be employed in Haskell; as well as missing branches, we can also exploit laziness to provide "undefined" or "loop = loop" expressions which, if we're right, won't ever be evaluated.

I didn't mean to imply that dynamic languages are "better" because they allow unchecked combinations of everything with everything else. Although I also didn't mean to imply that they're strictly "worse", precisely because there are things we can express easily in dynamic languages which take effort to reproduce in something like Haskell.

My main reason to mention dynamic languages was as motivation for why you might want to write such code in the first place: using dependent types to, say, prove Euclid's theorem of the infinitude of primes, would probably not motivate many software developers to give Idris a try. Examples like the type-safe printf mentioned in a sibling comment would presumably be much better motivators: we compute a type based on the format string, so providing a string like "Hello world %f %s !" will produce a type `Float -> String -> String` which accepts the required Float and String parameters, then returns the resulting String.
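
For reference, here's a minimal sketch of such a printf in Idris, handling only the %f and %s directives from that example (Idris's floating-point type is Double):

    data Format = FFloat Format | FStr Format | FLit Char Format | FEnd

    parseFormat : List Char -> Format
    parseFormat ('%' :: 'f' :: rest) = FFloat (parseFormat rest)
    parseFormat ('%' :: 's' :: rest) = FStr (parseFormat rest)
    parseFormat (c :: rest)          = FLit c (parseFormat rest)
    parseFormat []                   = FEnd

    -- Compute the function type from the parsed format string.
    PrintfType : Format -> Type
    PrintfType (FFloat rest) = Double -> PrintfType rest
    PrintfType (FStr rest)   = String -> PrintfType rest
    PrintfType (FLit _ rest) = PrintfType rest
    PrintfType FEnd          = String

    -- Build up the result string, one argument at a time.
    printfAcc : (fmt : Format) -> String -> PrintfType fmt
    printfAcc (FFloat rest) acc = \x => printfAcc rest (acc ++ show x)
    printfAcc (FStr rest)   acc = \s => printfAcc rest (acc ++ s)
    printfAcc (FLit c rest) acc = printfAcc rest (acc ++ pack [c])
    printfAcc FEnd          acc = acc

    printf : (fmt : String) -> PrintfType (parseFormat (unpack fmt))
    printf fmt = printfAcc (parseFormat (unpack fmt)) ""

Here `printf "Hello world %f %s !"` has the type `Double -> String -> String`, so forgetting an argument or passing the wrong type is a compile-time error rather than a runtime one.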


Thanks for clarifying.

Off-topic, but related to undefined in Haskell, I love holes in Idris as a way to represent incomplete programs. I'd love it if somebody could take them one step further and have it so that if a hole is evaluated at runtime, a prompt comes up and says "Well this is the input, what do you think the output should be?". You type it in, the program keeps going, and a unit test is generated and put away somewhere, so that later, when you actually implement it, you can verify that you're doing it right.


> I'd love it if somebody could take them one step further and have it so that if a hole is evaluated at runtime, a prompt comes up and says "Well this is the input, what do you think the output should be?"

For some cases, this is pretty simple to implement with unsafePerformIO - I've done that once or twice. Not a sin at all in development.
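
Something like this hypothetical sketch (names made up, the "record a unit test" part left out, and assuming the answer can be read back with a Cast instance):

    -- Hypothetical "interactive hole": print the input, ask for the output.
    interactiveHole : (Show a, Cast String b) => String -> a -> b
    interactiveHole name input = unsafePerformIO $ do
      putStrLn ("Hole " ++ name ++ " reached with input: " ++ show input)
      putStrLn "What should the output be?"
      answer <- getLine
      pure (cast answer)

Persisting the answer as a regression test would just need a bit more plumbing in the same IO block (e.g. appending to a file).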


I hadn't thought of doing that, good idea.


Heh, nice idea. Seems a long way off though. GHC semi-recently took a baby step in this direction by adding PartialTypeSignatures, although it emits a warning if you actually use them!

This makes sense on its own, but seems a bit silly when we consider that completely removing the signature will fix the warning ;)


EDIT: ... or maybe I just misunderstood your example. If MyData is a statically known/defined type, then you can obviously access it however you want... but then I fail to see the value of HasSomeProperty returning a Type (given the reasoning below, you'd still have to have some proof object giving you the things you're interested in, so that you wouldn't have to 'prove' again that the structure of the data obeys the property).

I don't understand how you could possibly write any non-trivial (i.e. more than just unit-returning) code in the body of "someProcessing" without having some sort of proof object (generated by HasSomeProperty) with which to extract whatever you'd be interested in. As the example is written you'd only have a Type, but I don't see how that's enough.

Can you explain or perhaps give an example of what a "someProcessing" function body might look like?


Thanks for the writeup, broke it down nicely


Richard A. Eisenberg has a nice comparison of Dependent Haskell with Idris and Liquid Haskell in his PhD thesis[1]. There is a difference between refinement types and dependent types: in Liquid Haskell you basically state some predicates and then let the SMT solver figure out whether they hold. With dependent types you have to do all the legwork, but they are also more powerful.

[1] https://github.com/goldfirere/thesis
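
To make the contrast concrete, here's a tiny, hypothetical illustration of the "legwork": a fact such as n <= n + m, which an SMT solver would discharge automatically from a refinement annotation, has to be proved by hand in Idris, here by induction on n:

    -- Evidence that n <= n + m, constructed by induction on n.
    ltePlus : (n : Nat) -> (m : Nat) -> LTE n (n + m)
    ltePlus Z     m = LTEZero
    ltePlus (S k) m = LTESucc (ltePlus k m)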


It's also important to note that you can embed SMT-driven refinement types into a fully dependently typed language; that's what F* does. I wonder how feasible it would be to do something similar in Idris via the elaborator reflection and type provider facilities. I remember Brady did something following a similar principle in one of the elaborator reflection papers, with automated proofs of termination, but without calling into an external SMT solver.


Thank you for the recommendation



