Aside: I am maybe 50-60 hours into my first Haskell project using Yesod. I have read about half of "Real World Haskell" and about half of "Learn You a Haskell".
Other than this, all of my experience is in imperative languages (C++, Python, PHP).
One thing that comes up is "if your program compiles, it's almost certainly correct", and it has been true again and again for me. It's an odd, eerie feeling, and honestly I haven't quite grokked how it is happening.
I don't completely understand what I am doing, owing to a combination of not thinking like an FPer, not knowing the idioms of Yesod, and not being entirely familiar with Haskell's syntax; so I am sometimes stuck on "how" to get the type that I need.
So I will sometimes try a couple of things almost at random; then it compiles, and it is basically always doing what I want it to do.
I think the "if it compiles, it works" has to do with Haskell's type system being much more expressive than most static languages, and that most of your code will be declarative and pure.
Purity helps because your program will necessarily be composed of small, self-contained modules (usually functions) that have no implicit dependencies.
The type system helps because you can encode a lot of the code's intent in type signatures, which prevents code that does not match that intent from compiling.
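A tiny illustration (hypothetical names of my own, not from the parent): even plain newtypes let a signature reject code that confuses two kinds of values:

-- Two IDs with the same runtime representation but different intents.
newtype UserId  = UserId Int
newtype GroupId = GroupId Int

userName :: UserId -> String
userName (UserId n) = "user-" ++ show n

-- userName (GroupId 7) fails to compile: the signature encodes the
-- intent that only user IDs are acceptable.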
Monads are a good example. Whenever operations have some special execution requirement (e.g., ordering for IO), you may be able to express that requirement as a monad, and then any operation outside that monad can't accidentally mix with it.
In addition, because monads let you enforce how operations are composed, there is no way for the programmer to forget to adapt the output of operation 1 so that it works as the input of operation 2. That plumbing is abstracted by "bind".
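For instance, here is a minimal sketch with Maybe (made-up lookups): bind both checks for failure and feeds step 1's output into step 2, so neither bit of plumbing can be forgotten:

-- Association-list lookups that may fail.
lookupAddress :: String -> Maybe String
lookupAddress name = lookup name [("alice", "1 Main St")]

lookupZip :: String -> Maybe String
lookupZip addr = lookup addr [("1 Main St", "02134")]

-- (>>=) short-circuits on Nothing and threads the address through;
-- there is no way to forget the failure check or the hand-off.
zipFor :: String -> Maybe String
zipFor name = lookupAddress name >>= lookupZip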
I'm a Haskell newbie myself, but there are a lot of mind-bending tidbits around the Internet on how to encode intent in various ways using the type system... It's not the most powerful system imaginable for that purpose, but as far as practical languages go, it's probably at the top.
Agreed; after getting used to working with Haskell's type system, I've found it kind of painful to use languages with weaker, less expressive type systems (which, unfortunately, is basically every practical language other than Haskell). The ability to transform all manner of bugs into compile-time errors via the typechecker is indispensable. Once you start digging into GHC extensions like GADTs and type families, it gets even better. I never feel nearly as confident about the correctness of code I write in, say, Python, even when using a full test suite.
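To make the GADTs point concrete (a standard sort of example, not from the parent): indexing an expression type by its result type makes ill-typed expressions unrepresentable, so whole classes of bugs die at compile time:

{-# LANGUAGE GADTs #-}

-- Expressions indexed by the type they evaluate to.
data Expr a where
  IntLit  :: Int  -> Expr Int
  BoolLit :: Bool -> Expr Bool
  Add     :: Expr Int  -> Expr Int -> Expr Int
  If      :: Expr Bool -> Expr a -> Expr a -> Expr a

-- No runtime type checks needed: Add (BoolLit True) (IntLit 1)
-- simply doesn't compile.
eval :: Expr a -> a
eval (IntLit n)  = n
eval (BoolLit b) = b
eval (Add x y)   = eval x + eval y
eval (If c t e)  = if eval c then eval t else eval e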
When I know what I'm programming, I agree, but I tend to find strong static type systems get in my way for exploratory programming. If I'm trying out some music-synthesis idea, for example, the #1 thing I want is to hear some sound ASAP, even if the code only runs for certain choices of parameters and crashes after 30 seconds. Then, I'll fix it later if it sounds promising.
I find that in Lisp, for example, I'm able to do partial, tentative refactorings, where I sort-of change something as an experiment, just enough that it works in one example case, but don't completely refactor everything to be consistent with the change yet, because I'm not sure if I really want the change or not. That kind of thing is really hard to push through in languages that care more about static types, and I often find myself thinking something along the lines of: yes I know that function there isn't updated with the new type signature, but I'm only planning to try this out initially with one choice of parameters, and I know that in that case the function isn't even called, so please just run the damn program!
The best exploratory programming I've seen in Haskell also does this for types to some extent; you know you're writing music, so you start with something like:
data Music -- stubbed-in data type, no constructors yet
Then maybe you decide it involves a series of notes with duration, so you change it:
newtype Music = Music [(Note, Duration)]
data Note -- stub
type Duration = Int
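The nice part is that you can already write real code against these stubs (hypothetical helpers of my own; transpose is left undefined purely so it typechecks):

totalDuration :: Music -> Duration
totalDuration (Music xs) = sum (map snd xs)

-- Just a signature for now; undefined lets it compile until I decide
-- what transposition should even mean for the stubbed Note type.
transpose :: Int -> Music -> Music
transpose = undefined

Then, whenever Music or Note changes shape, the compiler flags every call site, which is exactly the feedback you want mid-experiment.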
> which, unfortunately, is basically every practical language other than Haskell
Is this also true of OCaml? I have experience with neither, but my understanding from reading up on this is that OCaml and Haskell are roughly equivalent, and if anything, OCaml is even more expressive.
Anyone with real world experience who can comment on this?
It has been proven that you can express certain things with ML-style functors that are difficult or impossible to express with standard Haskell-style type classes. But almost everybody uses GHC and turns on lots of type system extensions in their projects, which is going to make the issue a lot murkier, and on top of that ML-style functors require a bit more finger work and explicitness. Ultimately, it's going to come down to preference.
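For a rough flavor of the type class side (a loose sketch of my own, not a precise correspondence): where ML pairs a signature with a functor, Haskell typically pairs a class with code polymorphic over that class:

-- Roughly the role of an ML ORDERED signature.
class Ordered a where
  cmp :: a -> a -> Ordering

newtype SortedList a = SortedList [a]

-- Roughly the role of a functor producing sorted-list operations
-- for any ordered element type.
insertSorted :: Ordered a => a -> SortedList a -> SortedList a
insertSorted x (SortedList xs) = SortedList (go xs)
  where
    go []     = [x]
    go (y:ys) = case cmp x y of
      GT -> y : go ys
      _  -> x : y : ys

One concrete difference: a type gets at most one Ordered instance here, while an ML functor can be applied to several different orderings of the same type without newtype wrappers.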
They're roughly equivalent the way Python and Ruby are roughly equivalent, or Common Lisp and Scheme. From any other linguistic paradigm, yeah, they're extremely close. But from one to the other, it feels like there is a tremendous gulf.
A lot of the benefits tmhedberg is talking about come from I/O being a monad in Haskell and the lack of "assignables," which constrains your ability to mix side-effects and state changes with pure code. Of course, you can defeat it if you really try (MVars and unsafePerformIO) but it's not the kind of thing you would do accidentally. And you could achieve a lot of the same benefits in OCaml by writing monadic standard libraries and never using refs.
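Concretely, a minimal sketch of that separation: IO shows up in the type of anything effectful, so pure code can't absorb a side effect by accident:

-- Pure: the type promises no side effects.
double :: Int -> Int
double x = x * 2

-- Effectful: the IO in the type is visible to every caller.
readAndDouble :: IO Int
readAndDouble = do
  line <- getLine
  return (double (read line))

-- double readAndDouble would be a type error: an IO Int is not an Int,
-- so the two worlds can only meet through explicit sequencing.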
So, if by "expressive" you mean the type system allows you to express complex relationships, Haskell is more expressive. But if by "expressive" you mean the language allows you to express calculations in more ways, OCaml is more expressive. In my experience, "expressiveness" is usually a foil for some other attribute and doesn't usually illuminate the debate much.
In the end, they're just tools, albeit with overlapping purposes. OCaml is easier to learn and compiles faster, Haskell raises your confidence in the code and has cooler tricks. Both are fascinating and stunningly better than lots of other options. If I had to choose which one to learn today, I'd look at some example code in both and ask myself which one I want to be staring at for the next six months.
I can't answer you with any real authority, since I haven't used OCaml. But from what I understand, it lacks type classes, which are absolutely central to the richness of Haskell's type system. Whether or not it makes up for this in other ways, I can't say.
OCaml is also strict by default, with optional laziness, whereas Haskell is the opposite. This could be seen as either an advantage or a disadvantage, but either way, it is a very fundamental difference.
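A small illustration of that default (a sketch): this terminates in Haskell because evaluation is demand-driven, whereas a strict-by-default language like OCaml would need an explicit lazy sequence for the same idea:

-- An infinite list is fine as long as only a finite prefix is demanded.
powersOfTwo :: [Integer]
powersOfTwo = iterate (* 2) 1

firstFive :: [Integer]
firstFive = take 5 powersOfTwo  -- [1,2,4,8,16]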
If you are considering learning one or the other, I'd advocate for Haskell purely on the basis of it being the closest to mainstream of all the static functional languages. There is a huge variety of libraries available on Hackage, and the community is very active, welcoming, and friendly. Haskell (GHC in particular) is also the laboratory in which the majority of interesting FP research is currently being done, if you care about that at all.
Or just learn both languages, so you can compare the benefits and disadvantages for yourself. :)
The main difference is the lack of purity, which means that a function's side effects aren't part of its type. OCaml also doesn't have Haskell's wide variety of extensions that let you do really crazy stuff (like verifiably correct red-black trees, https://github.com/yairchu/red-black-tree/blob/master/RedBla...).
A full dependent type system is nominally “the most powerful system imaginable”, but it’s also hopelessly problematic. You can’t be sure that type inference or type checking will halt in the general case, although you can infer some value-dependent types (see “Dependent Type Inference with Interpolants” by Unno & Kobayashi).
In practice this means you need manifest type signatures, otherwise the type of a function would be “lol iunno ¯\(°_o)/¯”. Then again, Haskell requires signatures in some cases, such as for Rank-N types, or when you run up against the Dread Monomorphism Restriction. So yeah, Agda and Coq are awesome, but most useful as what they’re designed to be—proof assistants—rather than general-purpose languages.
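A standard example of that (my own, not from the parent post): rank-2 polymorphism can't be inferred, so the signature here is mandatory:

{-# LANGUAGE RankNTypes #-}

-- Without the signature, GHC can't infer that f must stay fully
-- polymorphic so it can be applied at both Int and Bool.
applyToPair :: (forall a. a -> a) -> (Int, Bool) -> (Int, Bool)
applyToPair f (x, y) = (f x, f y)

-- applyToPair id (1, True) == (1, True)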
tl;dr: A constrained system such as Haskell’s, with extensions for aspects of dependent typing, is more practical than actual dependent typing.
I'm in pretty much the same boat. I feel kinda dirty playing with the type signatures to see what compiles, but if it compiles in Haskell it probably works.