This is one of the reasons why Higher-Kinded Types are such a boon in the languages which have them: they allow you to translate "patterns" into straight-up libraries. This slideshow is good documentation for what the patterns are, but if you go use them in Haskell (e.g.) then you'll start to see that libraries are designed to completely contain each pattern and ensure compatibility between your use of it and others'.
Usually this is achieved by representing the pattern exactly in the language. Usually the patterns are just maths. Usually math happily quantifies over higher-kinded types. Thus, you really want HKTs in your language.
And to be fair, Haskell does not go tremendously far in this direction. The languages which will truly profit from this are still in gestation.
Generally what we're talking about is the ability to say something like "for all types X which satisfy a predicate P, we have this code". The ability to quantify over "all" types is a key property of dependently typed languages. Likewise, a lot of Haskell's ability to simulate DT languages comes from its ability to reason about HKTs.
In a DT language you have something called a Pi type, which is a bit like a function type. An example would be to compare the types of two functions which, essentially, determine whether two integers are coprime:
coprime1 : Int -> Int -> Bool
coprime2 : (a : Int) -> (b : Int) -> Maybe (IsCoPrime a b)
The important part is to look past the syntax and note that IsCoPrime is a type which depends upon the values of the first two arguments. Another way of reading this is as follows:
coprime3 : forall (a b : Int), Maybe (IsCoPrime a b)
this emphasizes that the type of `coprime3` is quantifying over all values in Int.
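For concreteness, here is how such a Pi type might be sketched in Lean 4. This is a hedged sketch, not the original example: it uses Nat instead of Int so that `Nat.gcd` and decidable equality come straight from the core library, and `IsCoPrime` is a name invented here for illustration.

```lean
-- Hypothetical sketch: IsCoPrime is defined here, not a library type.
def IsCoPrime (a b : Nat) : Prop := Nat.gcd a b = 1

-- coprime1 forgets everything except a Bool.
def coprime1 (a b : Nat) : Bool := Nat.gcd a b == 1

-- coprime2 is a Pi type: the result type mentions the argument values.
def coprime2 (a b : Nat) : Option (PLift (IsCoPrime a b)) :=
  if h : Nat.gcd a b = 1 then some ⟨h⟩ else none
```

The dependent `if h : ...` gives the success branch access to the proof `h`, which is exactly the information the Bool version throws away.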
Then we can take this idea to the higher-kinded level by looking at the type of, say, monoids:
monoid : forall (A : Type),
  { zero    : A
  , plus    : A -> A -> A
  , leftId  : forall (a : A), plus zero a = a
  , rightId : forall (a : A), plus a zero = a
  , assoc   : forall (a b c : A), plus a (plus b c) =
                                  plus (plus a b) c
  }
Here we can see that the `forall` notation lets us freely abstract over "all types" just like we previously abstracted over "all values". This continues to work at every level. For instance, we can note that `monoid` above is "a type of types", or maybe a "specification", and we can abstract over "all specifications" as well.
Surely your monoid example should be a Sigma type (`exists`), not a Pi type (`forall`), since a monoid is a specification of data for a single type A, not for all types A. Indeed, the empty type has no monoid structure, so the Pi type you wrote down is necessarily empty.
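In Lean 4 notation the distinction might look like the following sketch (names such as `MonoidOn` and `natAddMonoid` are invented here for illustration, and the field proofs assume the core Nat lemmas line up):

```lean
-- The data of a monoid over a fixed carrier A.
structure MonoidOn (A : Type) where
  zero    : A
  plus    : A → A → A
  leftId  : ∀ a, plus zero a = a
  rightId : ∀ a, plus a zero = a
  assoc   : ∀ a b c, plus a (plus b c) = plus (plus a b) c

-- Pi: (A : Type) → MonoidOn A claims every type carries a monoid;
-- it is uninhabited, since Empty has no `zero`.

-- Sigma: one carrier packaged together with its structure.
def natAddMonoid : Σ A : Type, MonoidOn A :=
  ⟨Nat, { zero := 0, plus := Nat.add,
          leftId := Nat.zero_add, rightId := Nat.add_zero,
          assoc := fun a b c => (Nat.add_assoc a b c).symm }⟩
```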
I quite liked the types section, particularly the bit about making illegal states unrepresentable so you can enforce business rules, e.g. validating email addresses.
One of the things I'm always thinking when people talk about the merits of various type systems and the problems they solve is: "But I don't have those problems". However, there are a few good examples in there that opened my eyes a little and I'll give type-centric programming a go on my next project. Not necessarily solving problems that I have, but certainly presenting a different, hopefully clearer, way of writing some functionality.
Excellent presentation. Of the leading type-safe functional languages (Haskell, F#, and OCaml), I find F# to be far and away the most accessible in terms of syntax and application.
Writing Scala in my day job currently (which, for the most part, I quite enjoy) but can see jumping ship if Microsoft's move to Linux is successful. Being able to develop and deploy F# applications on Linux with full blown Type Providers and decent IDE support? Pretty compelling combo, and that's just the tip of the iceberg.
Yeah, the idea of F# on Linux is very exciting. Visual Studio has a lot to bring to the table, a lot of things that I've missed since I stopped working at an MS shop. I used to do a lot of work with F# and MSSQL, and have missed some of the ease of interoperability since switching over.
Of course, I far far far prefer Linux in general terms, but the MS tooling is quite decent. eg. I still haven't found anything that even begins to compare to the MSSQL management studio.
Scala's biggest advantage over other languages for me, in terms of reducing bugs, reducing complexity, and general pleasantness, has been higher-kinded types. Given that F# is lacking in this, and they don't have plans to add it, I can't even see them as comparable.
I'm learning F# for educational purposes in my spare time and I am loving it. Any idea what the market is for this language, or what you should use it for?
Besides the industries pjmlp mentioned (finance and insurance/banking), F# is also being used for data science/analytics/machine learning, distributed applications (on Azure and AWS) and bioinformatics. I also know there are at least a few companies building mobile apps and games with F# (e.g., in combination with the Xamarin toolchain).
Do you know if it is possible to use F# on windows without having to install several GB of crap? I'd like to be able to try it out, but the same way I would try any other language, install the compiler and try it out. All I can find is huge packages including visual studio and all kinds of other junk.
These slides explain the usual functional patterns, but it seems like a bit of a cheat, because they don't really explain how to do anything hard (dealing with time, persistence, logging, and so on).
The comparison between Slides 81 and 82 is particularly unfair because the "object soup" actually does deal with the database, SMTP, and so on and the functional version doesn't. If you add those in, you're going to get something complicated: perhaps a bunch of monad transformers or some such?
Slide 104 is misleading. In an imperative language, you can write a decorator function that logs a function's inputs and outputs, and it will have the same API. In a pure language, you can't do that because a function that does I/O has a different type. The flexibility or sloppiness (depending on your point of view) that allows you to do I/O anywhere is really a feature of imperative languages rather than pure functional languages.
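As a minimal sketch of that point, in Python standing in for "an imperative language" (the names here are illustrative, not from the slides):

```python
import functools

def logged(fn):
    """Wrap fn so it logs inputs and outputs while keeping the same API."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        print(f"{fn.__name__}{args} -> {result}")
        return result
    return wrapper

@logged
def add(a, b):
    return a + b

# Callers are unchanged: the decorated function has the same call shape,
# even though it now performs I/O internally.
total = add(2, 3)
```

In a pure language the wrapped version would have an IO-flavoured type, so the swap would not be transparent, which is exactly the contrast being drawn.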
You're mixing up functional and pure. Plenty of "functional" languages let you do IO everywhere.
> The comparison between Slides 81 and 82 is particularly unfair because the "object soup" actually does deal with the database, SMTP, and so on and the functional version doesn't. If you add those in, you're going to get something complicated: perhaps a bunch of monad transformers or some such?
You clearly need state somewhere (something needs to be able to reach the connection pool), so, yes, the comparison is unfair.
I think it's the terminology that's mixed up, actually. Functions are first-class datatypes in many imperative languages, including Python, Go, and JavaScript, but they mostly aren't mathematical functions.
There is no clear-cut definition of what makes a language functional, but popular opinion says that F#, OCaml or Clojure are functional languages, none of which are pure.
You can just turn that exact statement on its head and say that "the ability to recognize and differentiate between different kinds of effects in use is a feature of pure functional languages".
A good read, I especially like the parts about error handling and maps/applicatives. They are well-illustrated and tackle some difficult common problems I had shortly after I started using functional languages.
The part about functors/monads/monoids is also nice, although I feel like it would be better with the accompanying talk to bind it together a bit more.
By the way, I do think pattern matching could have been discussed more extensively. Pattern matching is one of the reasons I love functional programming so much, since combined with algebraic types they can be much more expressive than imperative control-flow statements (e.g. if-else). For programmers that are used to imperative programming, discovering the power of pattern matching can be quite a hurdle.
Ok, I'm sure the slides get better as they go on. But what the hell? The differences between the patterns that are compared and contrasted are... well, tenuous, at best.
Seriously, it gives off the impression not that these are going to be better patterns, but that someone simply has a bias against "patterns" as they are promoted in Java, and is railing against them by saying that they are going back to the root of the word "patterns", ignoring that this is the same path.
"Functional Programming in Scala" is also well worth a read as well if you're trying to learn FP... I've found it to be excellent, especially since it has lots of exercises to use for practice.
There's a book with the same name plus "... In Scala and Clojure". It is based on the wrong sort of patterns (trying to shoehorn GoF into FP). This presentation is much more what I had hoped that book was going to be.
I am currently taking EdX's Intro to Functional Programming mooc, taught by Erik Meijer (with his crazy shirts), which uses Haskell to teach functional programming concepts.
I am wondering if there are any other good resources for teaching the functional programming paradigms. Anybody care to recommend me some resources?
Also, I mainly work with Ruby and Javascript in my full time job. Currently, in school, I use Java (in the context of Android, which is on Java 6) and Objective-C (iOS programming). If anybody has any resources regarding functional programming and the previously mentioned languages, it would be most appreciated.
This reply is a little late, but there is a series on the (non-free-but-worth-it-for-one-month) "Frontend Masters" site. It is called Hardcore Functional Programming (or similar) and goes into Monads and Functors in normal JavaScript.
I'm not the OP, but I am also currently taking the FP101X Mooc with Erik Meijer. So far, I'm enjoying it tremendously. I've signed up for several moocs the past couple years, but have never been able to stick with any. This is the furthest I've been able to actually stay with it, which is probably indicative of something.
The lectures are basically laid out 1:1 with Hutton's Programming in Haskell book, so if you're familiar with that book, you're familiar with the way the course is structured. I find the actual experience of doing all the homework questions very helpful, even if so far nothing has been particularly difficult.
I'm also taking it to help flesh out my Haskell learning. I rather like it.
Be aware it starts out really slow; then, 50% in, it ratchets up the fun into "scare away the freshmen" territory with monads. I have to say pre-existing knowledge has been quite helpful.
About my only complaint is the number of "which of these 9 implementations are valid" exercises. Although I've been cheating and using hspec to TDD my way through them.
I thank the lord every day that the language I work in (Javascript) supports functions! I try to write pure functions as often as possible, so it's easy to refactor the code when I come back three months later.
But is there, except for ClojureScript, any true (like Haskell) FP language for browsers? Something that has the tools and community to back it, so it's viable to actually make projects in it?
js_of_ocaml lets you write OCaml on both client and server, in a single app, with shared variables. http://ocsigen.org/js_of_ocaml/
As it operates on the bytecode, you can put any OCaml into the browser, even the compiler itself.
E.g.: slide 64 encourages "Using sum vs inheritance", and then goes on to show the "good" algebraic datatypes (no behaviour, just data) vs the "bad"... multiple classes implementing the same interface (which I wouldn't call inheritance).
I wanted to use a library to do simple linear algebra and quick-and-dirty regular expressions.
With linear algebra, the HMatrix library is broken in more than one place, and it also requires me to learn a lot of new types just to do simple matrix decomposition.
Regular expressions in Haskell require me to learn a completely new way to do them; I am not under the impression that regular expressions are broken in Python or JavaScript in a way that requires new syntax.
Haskell also does not have access to the browser runtime, which is key if you want maximum exposure for your work (no one is interested in binaries anymore).
There is PureScript, but it's a large layer over JavaScript, and even a simple hello world results in 2000 lines of compiled JavaScript.
I am not a computer programmer, I am an applied mathematician, and Haskell requires a lot of work before it can be used for the type of serious work I do.
Haskell doesn't have the ease of use offered by things like npm and browserify, which are the most awesome tools I have used.
> There is PureScript, but it's a large layer over JavaScript, and even a simple hello world results in 2000 lines of compiled JavaScript.
This really depends on which compilation route you take. `psc-make` will include all of the Prelude and compile to CommonJS modules, which can result in quite a bit of code, but `psc` will trim out all unused functions, and should result in about 10 LOC for Hello World.
> With linear algebra, the HMatrix library is broken in more than one place, and it also requires me to learn a lot of new types just to do simple matrix decomposition.
What was broken about HMatrix? Which library has the best API, in your opinion? Algebra[0] from Node?
var algebra = require('algebra');
var M = algebra.MatrixSpace;
var R = algebra.Real;
// Create the space of 2x2 real matrices
var R2x2 = new M(R, 2);
// Create two invertible matrices:
//
// | 1 2 | | -1 0 |
// | 3 4 | and | 0 1 |
//
var m1 = new R2x2.Matrix([1, 2,
3, 4]);
var m2 = new R2x2.Matrix([-1, 0,
0, 1]);
// Multiply m1 by m2 at right side
// | 1 2 | * | -1 0 | = | -1 2 |
// | 3 4 | | 0 1 | | -3 4 |
m1.mul(m2);
console.log(m1.data); // [-1, 2, -3, 4]
// Check out m1 determinant, should be 2 = (-1) * 4 - (-3) * 2
console.log(m1.determinant.data); // 2
> Regular expressions in Haskell require me to learn a completely new way to do them; I am not under the impression that regular expressions are broken in Python or JavaScript in a way that requires new syntax.
What new syntax? I'm not sure what you mean by "learn a completely new way to do them".
Haskell (pcre-light[1]):
> import Text.Regex.PCRE.Light
> let numberRgx = compile "[0-9]+" []
> match numberRgx "the answer is 42" []
Just ["42"]
Node:
/[0-9]+/g.exec("the answer is 42")[0];
Haskell (regex-tdfa[2]):
> import Text.Regex.TDFA
> "the answer is 42" =~ "[0-9]+" :: String
"42"
Ahh, I think I see what you mean now. You want the familiar /regex/string syntax, right? There used to be a QuasiQuoter (a Template Haskell library) that allowed syntax like:
ghci> [$rx|[0-9]+|] "the answer is 42"
Just ["42"]
However, it[3] seems to have bitrotted, and everyone else is using the "needle" =~ "match" syntax.
EDIT: Found a useful website comparing language implementations, namely regex in Perl vs Haskell:
Most of these slides are about patterns that are provided for as functions by the language, or are otherwise obsoleted by the more powerful abstraction features of FP languages.
The writers of this presentation are very much aware of the smell that is patterns.
    16 of 23 patterns have qualitatively simpler
    implementation in Lisp or Dylan than in C++
    for at least some uses of each pattern
Things like this are literally why I switched from Python to Racket almost immediately once I discovered the latter. So much time spent fighting an imperative language to do something that seemed like it should just be easier in the first place.
Goodness. My apologies for instigating a terminology war, I should not have been so imprecise.
I think tel is mostly on the right track of what I mean when I compare Racket to traditional ALGOL languages, but it is also true to say that Racket is not strictly non-imperative either: it is ultimately a multi-paradigm language, just one that culturally leans harder towards a functional-declarative style. My early projects were just as lousy with mutable state and in-order processing as anything I wrote in Python; but I got better.
I think what really drove me to Lisp and FP languages was the notion of nearly everything being first-class values, from functions to objects, and it's this quality, compared to the tedium of building yet another !"¤&¤&! constructor pattern, that really attracted me. This, plus things like macros and the way the language allows for function composition so readily, is what made Lisp feel like 'how programming should've been all along' for me. Haskell induced that same reaction in other ways as well (mmm ... curry ...), though I'm still not entirely convinced that such strict purity is practical.
If my subthread sounded at all like an attack against Lisp and Racket, then I humbly apologize. So far, I am loving the language and what I can do with it.
And I like currying and such as well. I even use it in Racket quite often. It isn't too uncommon to see something like this in my projects:
((compose
   (curry func1 val1)
   func2
   func3) seed)
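Much the same shape can be written in Python, for anyone more at home there. This is a rough analogue using functools; `scale`, `inc`, and `double` are placeholder functions standing in for func1/func2/func3 above:

```python
from functools import partial, reduce

def compose(*fns):
    # Right-to-left composition, like Racket's `compose`.
    return lambda x: reduce(lambda acc, f: f(acc), reversed(fns), x)

def scale(factor, x):
    return factor * x

inc = lambda x: x + 1
double = lambda x: x * 2

# Roughly ((compose (curry func1 val1) func2 func3) seed):
pipeline = compose(partial(scale, 10), inc, double)
result = pipeline(3)  # double first, then inc, then scale-by-10
```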
I think what really makes Racket so amazing to me is just how easy it is to iteratively write a function. Start with a core, and gradually append around it more and more till it is done. Testing with values throughout the whole process.
I also like the literature around Lisp. Land of Lisp and SICP are both very well written and just plain fun to read.
Yes. I find sometimes that programming in Racket and other Lisps is so efficient, that whenever something starts looking tedious, I start reconsidering if there's a more elegant way I'm missing to implement it. Learning it, hacking it, and reading about it (and Scheme and other Lisps as well as Haskell) have done marvelous things for my programming skills.
Today I wrote a struct system in 29 lines of code. Kinda hard sometimes to do that and then even think about going back to, well, much of anything but another Lisp or composable FP language, really.
Racket is "mostly functional" which means that very few "impure" data structures (mcons, for example) and operations are marked explicitly (by convention).
That does nothing to address imperative versus not, though, does it? The entire for* and let* class of bindings is imperative, in that the order matters. Same with `begin` and friends.
Imperative requires order, but the two are not identical. "Imperative" tends to mean a sequence of instructions which operate on an exterior set of mutable state cells, versus "functional", which often means a set of expressions which reduce into values. Each can model the other, but `let` is clearly functional in this model, since its meaning derives entirely from a reduction semantics.
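A quick way to see the reduction reading of `let`, sketched in Python (illustrative only):

```python
# (let ([x 5]) (+ x 1)) is just function application:
let_result = (lambda x: x + 1)(5)

# (let* ([x 5] [y (* x 2)]) (+ x y)) is nested application; the
# "ordering" is data flow through arguments, not mutation of cells.
let_star_result = (lambda x: (lambda y: x + y)(x * 2))(5)
```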
See my link downthread. I think this really comes down to the murkiness in the definitions of "declarative" and "imperative". I did find a later definition that points out that these are no longer duals of each other.
I was actively avoiding the obviously imperative functions, though. Not sure why. That is, I could have simply said that the existence of all of the functions that end in exclamation points shows the language is imperative.
The imperativeness of Racket really doesn't come so much from the *! functions, but instead things like `begin` which explicitly sequence program statements in such a way that only allows them to interact via external mutable state.
Yeah, I chose a somewhat poor example. My main point was that order matters, you have to do A before B. You can not do them at the same time, or something different happens. (Contrasted with a language like VHDL where everything typically happens at the same time.)
Simply put, functional is not necessarily the opposite of imperative.
I took to googling to see if someone had made my point better already. The best I found quickly is this[1]. It is a good read. As it points out, we are most likely both stuck on different numbers in the "opposite of imperative" definition world.
I will say that throwing out mutability with the bathwater is perhaps the single most frustrating thing with most functional advocacy.