Please note this homepage is NOT final and it's going to see revisions before we push it out to the actual website, including many tweaks to the content and probably some styling tweaks too.
There are a lot of other things we still need to do as well, like ensure all redirects and subpages work properly.
Source: I'm one of the Haskell.org administrators, and we pushed this out only today.
I love the tutorial! I couldn't help writing down some notes while going through it, here they are if you're interested in reading:
- Really would love to return to previous steps to review things (for example, reviewing what the difference is between a list and a tuple)
- "You just passed the (+1) function to the map function." But didn't I also pass [1..5] to it too? It's unclear to me whether map is a function which takes two arguments ... or if the first part "map (+1)" evaluates to a function which acts on [1..5]
- Is a (+1) a tuple with one item (the function "+1") or is it just similar syntax?
- Really like the exercises and would love to have more challenges interspersing the intake of new content. It honestly feels like the best way to learn.
I'm going to sleep for the night, otherwise I'd keep going. I'll return to it later. Looks promising!
P.S. Mini-bug report: Somehow this happened and it really confused me https://i.cloudup.com/0wRJS7FZls.png - looks like my 'try it again anyways' strategy saved me here. (there were no js console errors).
Not sure if you'd like some answers to your questions, but here they are anyway:
- It's unclear to me whether map is a function which takes two arguments ... or if the first part "map (+1)" evaluates to a function which acts on [1..5]
The latter is true. In Haskell you can pass arguments to functions one at a time; each application can return a new function that takes the next argument. You might even say that Haskell functions always take exactly one argument, and that multi-argument function syntax is just a bit of sugaring. The technique of returning a function that takes the next argument is called currying, after Haskell Curry.
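A tiny sketch of that idea (the helper name `addOne` is just for illustration):

```haskell
-- Partial application: map given only its first argument
-- returns a new function that still wants the list.
addOne :: [Int] -> [Int]
addOne = map (+1)

result :: [Int]
result = addOne [1..5]  -- the returned function applied to [1..5]
```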
- Is a (+1) a tuple with one item (the function "+1") or is it just similar syntax?
A parenthesized single expression is just that expression. The parens only delimit the expression's scope; it's the commas that make a tuple. A tuple of 2 is just 2 expressions bundled together, so for a lone expression I don't think there's a meaningful difference :P
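To illustrate the difference (the names here are made up for the example):

```haskell
justGrouped :: Int
justGrouped = (1)     -- parens only group; this is exactly the value 1

realTuple :: (Int, Int)
realTuple = (1, 2)    -- the comma is what makes it a tuple
```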
Nah, Haskell has no functions with zero arguments.
You could say that a value is a function that takes no arguments, but then you lose the distinction between functions and other values, and more importantly, the distinction between function types and other types! For example, types like Bool have decidable equality, but function types in general have undecidable equality, so you really don't want to be caught saying that Bool is a function type. It's much more sensible to say a type is a function type iff it was constructed by the type constructor ->, which takes two types, the argument type and the return type. In other words, all functions take one argument and return one result.
You could also say that a value is an unevaluated thunk that takes no arguments, but that's not always true. The thunk may be already evaluated and replaced with a value, without affecting the visible type. It's better to keep the idea of "thunk" separate from the idea of "function", because a value could be both, either, or none.
You could also say that every value x can be converted to and from a function () -> x. But that's not a function with no arguments, that's a function with one argument of type "unit" (the type with a single member, a.k.a. the empty tuple).
The above might sound pedantic, but for some reason it's fascinating to me to think about such things. Language design is a kind of addiction and Haskell is like a drug, because its original purpose was to be a laboratory for programming language research, which you can see in the million GHC extensions.
In Haskell, all functions take one argument. Also, operators are functions.
First, the one argument thing is known as currying. With higher-order functions, returning a function is admissible, so
map :: (a -> b) -> [a] -> [b]
can also be thought of as:
map :: (a -> b) -> ([a] -> [b])
like so:
ghci> :t map length
map length :: [[a]] -> [Int]
As for operators being functions, function names that are alphanumeric default to prefix (e.g. f 10) while those that are symbolic default to infix (e.g. 10 + 18). To use an alphanumeric name in infix, use backticks, e.g.:
ghci> 5 `max` 6
6
Use parentheses to prefix an "operator"-style name, like so:
ghci> (+) 5 6
11
The (+1) that you're meeting is an additional bit of syntactic sugar called an operator section. It's identical to (\x -> x + 1). It allows you to do things like this:
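For instance, a minimal sketch of sections in action:

```haskell
incremented :: [Int]
incremented = map (+1) [1..5]  -- (+1) is the section form of \x -> x + 1

halved :: [Double]
halved = map (/ 2) [1, 2, 3]   -- sections work on either side of the operator
```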
Is there a practical reason why there's so much empty space with this new design, and why so little valuable content and functionality is visible by default?
Viewing the existing site in a desktop browser, I get to see the description of Haskell, and a bunch of useful links about learning it, downloading an implementation, using it, and participating in the community. Recent news items and upcoming events are also visible, as is the search field.
This new design lacks pretty much all of that. Instead of useful content, links and functionality, all I'm seeing are large areas of purple and white, and an extremely blurry photo, along with content of very limited value.
I just can't see this new design being beneficial in any way. It makes it much harder to get useful information about Haskell, which seems very contradictory to what a website like this should be doing.
Yes - the practical reason is to focus on the newcomer's experience. Information overload is actively harmful to someone coming to haskell.org to learn about the language from scratch.
Omitting content that is primarily valuable to experienced Haskellers is a feature, not a bug.
This sounds like a dangerous trade off to be making. It's reminiscent of what we saw with GNOME 3, or even Windows 8. The experience is made much worse for existing users, in a vain attempt to "simplify" the design to allegedly appeal to new users who may not even really exist in practice. It's obvious now that it didn't work well in those cases, and I don't see why this case would be any different.
As an occasional Haskell user, I'm served much, much better by the existing site than by this new design. The existing one lets me get to the information I'm looking for with minimal effort. This new site denies me that accessibility, I'm afraid to say.
I don't see the point in trying to attract new users if doing so also means harming the experiences of established users. Drawing in new users becomes pointless if retention starts to suffer.
This case is different because it's an introductory page rather than a tool intended for heavy daily use, like GNOME or Windows.
Newbies are going to head to haskell.org when they want to learn about the language, and it's sensible for them to be greeted by a pleasant introduction to the language.
As "an occasional Haskell user," you aren't the target audience for the haskell.org landing page. :) If you're visiting on a regular basis, there's no reason your needs couldn't be met by some page other than the landing page.
"This sounds like a dangerous trade off to be making. It's reminiscent of what we saw with GNOME 3, or even Windows 8."
It's also reminiscent of what we saw with the iPhone, and nobody would say that the experience was much worse for users of previous smartphones. There are times when simplifying and giving structure to existing content just makes sense.
I'm curious, what content do you miss from the old page that can't be found in the Community or Documentation sections in the new design?
I'd like to see the Documentation, Community and News content on the front page, rather than hidden away on those separate pages. The new design adds an extra, unnecessary level of indirection in order to get to this useful information.
Fair enough, but as other posters have noted there's no reason why those contents need to be placed directly at the home page; they could be located at an inner page that you'd bookmark for reference.
The purpose of a good landing page is not to serve as an index for all the content (that's what site maps are for), but to explain the concept and structure of the site to someone who hasn't seen it before. The new page is much better in that respect.
I miss a link to the current Haskell wiki in the new design, but I definitely wouldn't expect to find a list of all the wiki pages on the landing page; I'd expect it under the Documentation section.
I don't want to deal with numerous bookmarks to internal pages, or site maps, or any crap like that.
I want to be able to type "haskell.org" into any browser, and from there be able to quickly get to the standard library documentation, to the language spec, to Planet Haskell, to the downloads, and so on, without having to dig through subpages of subpages of subpages, and without having to scroll.
The Rust website at http://www.rust-lang.org/ is a good example of how a programming language home page should be laid out. There are many relevant links at the top. I can almost always find what I want within the first inch or two of the page. Yet it still shows all of the marketing junk for those who want that stuff, but it's placed well below the useful content.
The new Haskell design is the complete opposite of that. It puts a lot of useless junk front and center, and almost totally discards everything that actually is useful.
I don't think you can speak for all GNOME 3 users. I'm the happiest I've ever been with GNOME 3. I don't have a bunch of options and other knobs to twiddle. It just works and stays out of the way.
The real reason is that one of the standard bootstrap layout examples was used. The 'bootstrapped' web today is extraordinarily boring. It is the same big banner followed by striped two column grid again and again and again.
The existing design seems to do a good job of using the space available to convey a large amount of important and relevant information, with very little effort required from the viewer.
For example, a viewer of the existing site can quickly get to various types of documentation right away, without having to click on a link to visit a dedicated "Documentation" page, like with the new design. The same goes for news and community details, as well. An inefficient, unnecessary extra link click or press is now needed to get to very core information.
It isn't worth making an informational website look "pretty" if that means ruining its ability to effectively convey information.
Upvoted you; I find the new design terrible too. One more point: the old webpage is "simple" HTML, which loads properly even over very bad internet connections and in old browsers, much like Hacker News itself. The new design seems to include video links on the homepage, which is always a bad idea, however "modern" you want it to be.
Also, the old homepage immediately gives a sense that Haskell is mature - with separate links about the language, books, libraries, IDEs, etc. The new webpage does not convey anything at all.
Would you need some help in curating a better set of videos?
I think that having some topics this advanced on the home page might scare off users, especially given their visual priority on the page.
Also, what's the plan for the "view examples" links on the page? Currently these don't link anywhere. Are you looking for existing articles online to link to, or are you looking to have some content written that can be hosted on haskell.org?
I liked that the videos showed a glimpse into the active community around Haskell -- they are not just a fixed set of tutorials for newcomers, but show the variety of talks that Haskell programmers around the world give (including some deeper topics).
I'd hope there's some way to keep that perpetually current -- for it to include recent talks, kept up to date.
I'm a bit late, but perhaps you'll see this. Can you catch Ctrl+W in the REPL and use it to delete the prior word, instead of closing the tab? Kind of jarring to get into the groove of using a REPL and then accidentally close the tab.
I'm more interested in the content, though. I was just trying here and there over the last week or so to learn some Haskell - people always seem to be raving about how cool it is. I found my way to the CIS 194 class link, the first one on the page, and I find that it really isn't very good for learning.
I went through the first page, trying to run some of the stuff. I find that the syntax of the ghci REPL seems to have almost nothing to do with the syntax shown so far. Apparently, I have to use let for every variable, which was never mentioned, and there are lots of funny tricks for how to use multi-line stuff. I still have no clue how to actually write something, compile it, and have some sort of output in the main ghc compiler. Is there a tutorial anywhere that actually helps you learn the language through doing useful things, instead of throwing a bunch of syntax that you can't do anything with at you?
Speaking of tutorials, the old page links to like 500 of them or so. I have no clue how to figure out which one to start with. If you're re-doing it, please link no more than 3 learning pages, no matter what. Have the rest in some buried page somewhere if you must, but keep the front page manageable. How is anybody supposed to make sense of however many tutorials and courses you guys have linked on the old one? I gotta figure I'm the prime audience for this stuff, but I have no idea where to start on your current page.
While I'm at it, can anybody recommend something like a step-by-step thing for people who are proficient in several other languages, but have never seen any Haskell before? Like where every little step has something you can actually type in, run somewhere, and get some sort of output?
You could try IHaskell[0] for playing around with Haskell. It is like GHCi, but provides a notebook interface where you can enter multiline expressions and make declarations, so it is a lot closer to real Haskell. (It can be a bit of a pain to install, though.)
It sounds like you're looking for Learn You a Haskell. Great book, free to read online, and it tells you how to start with Haskell step by step. http://learnyouahaskell.com/chapters
I've been reading "Learn You a Haskell for Great Good" and it has been going much more smoothly than it has apparently been going for you. It addresses pretty much all of your complaints.
I suggest you enable multiline input with `:set +m`.
This way, you don't have to use the weird curly-braces trick (which I always forget), at the cost of having to type one more Enter when entering a one-liner.
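For reference, a rough sketch of a GHCi session with multiline input enabled (the continuation prompt may render differently in your setup):

```
ghci> :set +m
ghci> let fact 0 = 1
ghci|     fact n = n * fact (n - 1)
ghci|
ghci> fact 5
120
```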
I've found that, when it comes to learning a new language from scratch, google and stackoverflow just aren't very helpful. Once you have the basics down, they're good for picking up extra bits and pieces, but at the beginning, when you have no idea where to even start, I find I need something a little more organized.
I too dislike tutorials focused around the REPL, because it's rarely how you write code in practice. The REPL is more of a debugging tool for quickly testing and manipulating code you've already written.
I'd jump directly into creating code files/modules: begin by writing simple programs using `main` as you would in other languages. Create a file with a ".hs" extension (Haskell filenames start with capital letters by convention, since they share the name of the module):
main :: IO ()
main = putStrLn "Hello World!"
Make sure the GHC binaries are in your PATH, and use `runhaskell File.hs` to quickly compile and execute them. Also, from ghci you can use the command `:load File.hs` if you want to debug stuff.
The preferred tool for writing Haskell, though, is emacs. If you're not familiar with emacs, it's also a great opportunity to learn. Grab emacs 24, open the file ~/.emacs, and paste in this snippet:
Then hit M-x (M for Meta, which is typically Alt), and type eval-buffer into the minibuffer at the bottom.
If no errors, hit M-x and type into the mini-buffer:
package-install haskell-mode
You can add the following to the .emacs file and use `eval-buffer` again. Also save with C-x C-s.
Next step is to just open an empty haskell file. Create a new directory for your project with `M-x make-directory`, and create a new haskell file using C-x C-f. Add some code, and hit C-c C-l (load in ghci), and emacs will pop open a new window, launch ghci and load your code into it.
You can switch over to the ghci window with C-x o (cycle windows), and type some code into it for testing. (Note it can be quite buggy if the cursor is not in the right place - there needs to be a space after the prompt > ¦). Useful keyboard shortcuts to remember here are M-p (previous) and M-n to cycle through previously used commands. (You can rebind any keys to how you wish.)
You can also add typical IDE features like "run without debugging" and such with simple snippets of elisp. Say we want to bind F6 to this command; we can add this hack to .emacs, save and eval the expression.
Now, with an active Haskell file, F6 will launch it like an application, bypassing GHCi, and emacs will typically open a new window (or replace an existing one) with the stdout/stderr for the application.
That'll get you started anyway. From there, LYAH is a good beginner resource, although it won't get you much further than the syntax and semantics of the core language. It doesn't cover the plethora of language extensions, building projects, or using cabal, haddock, quickcheck, and more. I'm not aware of a single resource which covers these, though; I just learned what was needed as and when by searching. The best information typically comes from one-off blog posts focused on a specific problem.
> I too dislike tutorials focused around the REPL, because it's rarely how you write code in practice....
> The preferred tool for writing Haskell though is emacs...
I understand, but if your goal is to pull in new Haskell programmers you want to grab them as quickly as possible. You want them to try something immediately, not force them to invest in a toolset from the get-go.
I wasn't interested in Haskell until I used the REPL. It allowed me to get immediate feedback on how Haskell works. The tutorial got me engaged quickly.
> The preferred tool for writing Haskell though is emacs.
That's news to me. Emacs is a popular editor for Haskell and in general, but I don't see any reason to say it's "preferred", unlike with, say, Agda or Coq. I mostly use Sublime Text and Atom (and occasionally emacs), and I'm perfectly happy. Then again, all I really ask from an editor is syntax highlighting, basic tab completion and sensible automatic indentation. For those who want more, there's the SublimeHaskell plugin, although I've had mixed results with it.
Thanks for the help, though Vim is already my choice for insanely-powerful text editor with 2k+ page user manual :) I tried emacs, but the dependence on the control key for everything makes it kinda painful on every keyboard I have. I suppose I could probably remap a bunch of keys or something, but Vim seems to work much better in the default configuration. I know this tends to be a big flamewar subject, so it is what it is.
Anyways, your Hello World and runhaskell are much more along the lines of what I'm looking for. Also checking out the other people's suggested Learn you a Haskell.
> Also checking out the other people's suggested Learn you a Haskell.
Just to get the minority opinion out there, I found the style of Learn You a Haskell to be insufferable. Real World Haskell is available free online[1] and has a straightforward tone of voice.
This http://dev.stephendiehl.com/hask/ overview is a little bit more advanced. It gives a good overview of the tools, some of the ecosystem, and important libraries.
Please, get rid of the primes example, as it is horribly inefficient (in the sense of: "OK, let's find the first n primes by a simple well-known algorithm, like the Sieve of Eratosthenes"), and a simple (non-pure) array-based approach will kick its ass.
Such a toy example just contributes to the wrong belief that Haskell is only useful in academics or teaching. Some time ago I implemented the sieve in several languages, including Haskell, where I settled on an impure approach (IOUArray), and it was extremely fast (getting close to C/C++ and outperforming Go, Java, etc.). Obviously such an example is not good for presenting Haskell's novel ideas, but this toy example is worse than no example.
edit: Sorry, did not want to appear nit-picky, I like the language and the new design.
> Please, get rid of the primes example, as it is horribly inefficient (in the sense of: "OK, let's find the first n primes by a simple well-known algorithm, like the Sieve of Eratosthenes"), and a simple (non-pure) array-based approach will kick its ass.
The primes example may be a bit awkward for people with no background in math but it's also an excellent example to demonstrate lazy evaluation in Haskell.
The interesting bit is that primes is an infinite list containing all prime numbers. Don't try to print the whole list or compute its sum unless you have a computer with infinite memory :)
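A quick sketch of how laziness makes that workable: you only ever force a finite prefix of the infinite list.

```haskell
-- The trial-division definition from the page
primes :: [Integer]
primes = sieve [2..]
  where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

firstTen :: [Integer]
firstTen = take 10 primes  -- only as much of the list as needed is computed
```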
Even though it's a good example, I think extra care should be taken to avoid the stereotype that Haskell is elegant for math and academic stuff but not for the real world.
It bugs me a bit that the example code is brute-force trial division instead of a true prime sieve. Sure, it highlights laziness, but the algorithm is far less efficient.
That's just nonsense. There are many people very concerned about performance.
I guess you are right that some do not care about constant factors, but I am quite sure most do care about algorithmic complexity. And, thinking about it, that's the approach you should take for anything where every last bit of performance is not utterly needed.
Well it's the haskell.org landing page, of course they're showing off the type of code that Haskell is good at. In your link, the purely functional version (15 lines around a priority queue) is more complicated than a naive version with a mutable array, so why would they advertise it?
I personally think the priority queue one is somewhat simpler than a fixed array algorithm. It also produces an infinite list of primes (or generator) instead of the primes up to a particular number. This is especially important in Haskell.
I'd love to see it as it shows off some great data structures available in the libraries (like priority queues) but it's getting longish.
The mutable array version is basically the same as imperative code everywhere
import Control.Monad (forM_, when)
import Data.Array.ST (newArray, readArray, writeArray, runSTUArray)
import Data.Array.Unboxed (UArray)

-- Odds-only sieve of Eratosthenes over a mutable unboxed array;
-- index i represents the odd number 2*i + 1.
sieveUA :: Int -> UArray Int Bool
sieveUA top = runSTUArray $ do
    let m = (top - 1) `div` 2
        r = floor . sqrt $ fromIntegral top + 1
    sieve <- newArray (1, m) True
    forM_ [1 .. r `div` 2] $ \i -> do
        isPrime <- readArray sieve i
        when isPrime $
            forM_ [2*i*(i+1), 2*i*(i+2)+1 .. m] $ \j ->
                writeArray sieve j False
    return sieve
It's a bit noisy syntactically and doesn't show off as much interesting stuff.
Neat! A nitpick: I haven't used Haskell, so I'm trying to read the prime sieve example in the corner, but there's very little contrast between the background and the nonalphanumeric characters. Some brighter syntax highlighting would be a better choice against that dark background.
Finding primes is something that is taught in the first programming class in Indian high schools. I guess I've never thought of it as something hard.
I looked at nodejs.org, and their first example is a web server! Python has Fibonacci as its second example (the first one shows numeric operations).
Ruby does simple string operations on its home page. While I think that is indeed simpler, it's a little nuanced in Haskell. Depending on how you're doing it, you'll need to map toUpper from Data.Char or use toUpper from Data.Text, and I don't think it's a good first impression to have something that uses Data.Text and the OverloadedStrings extension.
PS - Although, I do agree with some other commenters here that this code isn't actually the Sieve of Eratosthenes and is far less efficient, and that is a valid reason to replace the example.
* "Find the Nth Fibonacci Number" is among the most universally known programming tasks, so visitors are far more likely to immediately pick up the example than they are with sieve.
* It shows off a bit of Haskell syntax that (A) can be learned just by looking at an example like this, (B) has a clear benefit to readability that any programmer can appreciate, and (C) is a syntax not found in most mainstream languages.
* The visitor needs no functional programming experience to follow it; it doesn't even use any higher-order functions! This is important, as many visitors will be completely new to FP, and an example that they can't follow is not going to be effective at encouraging them to continue reading.
That's a horrible algorithm though, it takes exponential time. Any good implementation of Fibonacci numbers would be logarithmic in the number of arithmetic operations and polynomial in overall running time (because the numbers get bigger). Here's a good implementation in Haskell: http://nayuki.eigenstate.org/res/fast-fibonacci-algorithms/f... , and the same in Python: http://nayuki.eigenstate.org/res/fast-fibonacci-algorithms/f... . BTW, I think even functional programmers would find the Python code slightly easier to follow :-)
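For the curious, here's a sketch of a logarithmic-time Fibonacci in Haskell using the standard fast-doubling identities F(2k) = F(k)(2F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2 (this is not the linked implementation, just the same idea):

```haskell
-- Returns (F(n), F(n+1)) by halving n at each step.
fibPair :: Integer -> (Integer, Integer)
fibPair 0 = (0, 1)
fibPair n =
  let (a, b) = fibPair (n `div` 2)  -- (F(k), F(k+1)) with k = n `div` 2
      c = a * (2 * b - a)           -- F(2k)
      d = a * a + b * b             -- F(2k+1)
  in if even n then (c, d) else (d, c + d)

fib :: Integer -> Integer
fib = fst . fibPair
```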
Remember, the target audience is newcomers to Haskell. The purpose of a code sample is to give them a glimpse into the syntax of the language, which should be understandable enough that it piques their interest to learn more.
It would be to the detriment of haskell.org to showcase a superior algorithm that is harder for newcomers to follow.
Maybe it's because I've been using Haskell exclusively for a few months, but I find the Haskell example more clear. This surprises me because I have much more experience with Python.
True, but remember that, say, a Rubyist arriving at haskell.org has no knowledge of:
* what zipWith does
* that (+) is a function being passed, not an operator being invoked
* cons syntax for 1 : 1 : ...
* how this could terminate (having no preexisting knowledge of laziness, remember)
A better goal than "show off all the features of Haskell" is "show a Haskell example that's understandable, demonstrates a clear benefit, and makes you want to learn more."
If a newcomer encounters so much unfamiliar territory that Haskell comes across as too alien to be useful, that only reinforces existing negative stereotypes about it.
Better to give a first impression of "you absolutely can pick up Haskell, and it'll be nice!"
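(For context, the Fibonacci candidate being discussed in this thread is presumably the classic lazy one-liner, something like:)

```haskell
fibs :: [Integer]
fibs = 1 : 1 : zipWith (+) fibs (tail fibs)  -- each element is the sum of the previous two
```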
> True, but remember that, say, a Rubyist arriving at haskell.org has no knowledge of:
> * what zipWith does
haskell:
zipWith f xs ys
ruby:
xs.lazy.zip(ys).map(&f)
While Ruby doesn't have a method of the same name, it's not exactly foreign.
> * that (+) is a function being passed, not an operator being invoked
Again, sure, Haskell has different syntax than Ruby, but the parens signal that something special is going on here.
> * cons syntax for 1 : 1 : ...
Sure, but if you know what the Fibonacci sequence is, it's pretty easy to get the idea of what is being done from knowing what the function is trying to accomplish and looking at the values presented. (The reason it and primes are almost without exception the two things chosen for these infinite-stream examples in every language is that it's presumed programmers know them.)
> * how this could terminate (having no preexisting knowledge of laziness, remember)
A Rubyist who has no knowledge of laziness isn't a very knowledgeable Rubyist, since laziness is an important and central concept in Ruby's Enumerable module (one of the core modules most frequently used by Rubyists).
Anything that can be Googled, with what it's trying to accomplish understood, in less than 2 minutes.
I tried Googling "sieve" and got nothing of use, then "prime sieve", which has a good Wikipedia article that is (probably?) about the right thing but doesn't fit the "easily understood in a couple of minutes" rule.
So... literally anything. Hello world. First impression shouldn't be that you need to be a math expert to use the language.
I think a neat algorithm to demonstrate laziness and Haskell clarity would be enumerating the Calkin-Wilf rationals. [0] It's quite a bit longer but demonstrates a number of neat ideas. I'll start first with a derivation which demonstrates all of the structure of the algorithm and then go through a series of mechanical transforms so that by the end I have a one-liner and a comparable Python implementation.
The first algorithm comes directly from the paper and uses an intermediary infinite tree to represent the rationals.
data BTree a = Node a (BTree a) (BTree a)

fold :: (a -> x -> x -> x) -> BTree a -> x
fold f (Node a l r) = f a (fold f l) (fold f r)

unfold :: (x -> (a, x, x)) -> x -> BTree a
unfold f x = let (a, l, r) = f x in Node a (unfold f l) (unfold f r)

breadthFirst :: BTree a -> [a]
breadthFirst = concat . fold glue where
  glue a ls rs = [a] : zipWith (++) ls rs

allRationals :: Fractional a => [a]
allRationals = breadthFirst (unfold step (1, 1)) where
  step (m, n) = (m/n, (m, m+n), (n+m, n))
In 16 lines I've got an infinite binary tree, its natural fold and unfold, a breadth first search, and a lazy algorithm for generating all of the rationals with no repeats. The whole thing is simple, natural, beautiful, and efficient! It demonstrates infinite recursive types, laziness, higher-order functions, and bounded polymorphism.
And also a neat algorithm!
The downside is that 16 lines is pretty long.
By inlining the fold and unfold I can get it down to 9 lines:
data BTree a = Node a (BTree a) (BTree a)

breadthFirst :: BTree a -> [a]
breadthFirst = concat . glue where
  glue (Node a ls rs) = [a] : zipWith (++) (glue ls) (glue rs)

rats :: Fractional a => [a]
rats = breadthFirst (generate (1, 1)) where
  generate (m, n) = Node (m/n) (generate (m, m+n)) (generate (n+m, n))
If I'm allowed imports we can use Data.Tree and make this a one-liner!
Finally, if I go another route and fuse the fold and unfold together into a hylomorphism
{-# LANGUAGE DeriveFunctor #-}

data Trip a x = Trip a x x deriving Functor

hylo :: Functor f => (f b -> b) -> (a -> f a) -> a -> b
hylo phi psi = phi . fmap (hylo phi psi) . psi

allRationals :: Fractional a => [a]
allRationals = concat (hylo glue step (1, 1)) where
  glue (Trip a ls rs) = [a] : zipWith (++) ls rs
  step (m, n) = Trip (m/n) (m, m+n) (n+m, n)
we can hide the tree entirely and demonstrate `deriving`... at considerable cost to clarity! With a little more golfing (read: inlining) we arrive at this beauty:
allRationals :: Fractional a => [a]
allRationals = concat (go (1, 1)) where
  go = glue . next . step
  next (a, b, c) = (a, go b, go c)
  glue (a, ls, rs) = [a] : zipWith (++) ls rs
  step (m, n) = (m/n, (m, m+n), (n+m, n))
which at least has the bonus of demonstrating some nice co-recursion between go and next. Or even, ultimately:
allRationals :: Fractional a => [a]
allRationals = concat (go 1 1) where go m n = [m/n] : zipWith (++) (go m (m+n)) (go (n+m) n)
which is actually kind of nice again if almost all of the structure has vanished.
Note that if `interleave` were part of the Prelude then we could write
allRationals :: Fractional a => [a]
allRationals = go 1 1 where go m n = (m/n) : interleave (go m (m+n)) (go (n+m) n)
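(`interleave` isn't in the Prelude; the usual alternating definition would be:)

```haskell
-- Alternate elements from the two lists, starting with the first.
interleave :: [a] -> [a] -> [a]
interleave (x:xs) ys = x : interleave ys xs  -- swap the lists' roles each step
interleave []     ys = ys
```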
It's also utterly incomprehensible for someone who hasn't seen Haskell before. Whereas with the existing example, one can at least piece together an idea of what's going on.
The point is to demonstrate the directness of expression and conciseness of Haskell, not to show how to create an efficient implementation of an involved algorithm.
I agree that the first formulation is a bit incomprehensible, though two-liner breadth-first search is understandable if a bit amazing.
Some of the latter versions (and perhaps ultimately the very last version) are easier to walk through for a beginner, though, and are calculated from properties expressed in the first.
To be honest: this would turn me off even more than the current example, which is also hard to read/understand as a non-Haskell programmer.
The fibonacci example in another comment in this thread however is very easy to understand and would fit much better.
Well, the one-liner form I had probably overlooked. It's a lot better than the other variations, but I do think the Fibonacci example shows Haskell in a much more understandable way than the allRationals one-liner.
But maybe that's just me: I don't particularly like one-liners because, as an outsider, it usually takes a bit more time to understand one than the multi-line version.
FWIW I think the sieve is a great example. Fibonacci is trite and a toy, whereas this is nontrivial and shows off a lot of what's powerful about Haskell (infinite lists, list comprehensions, pattern matching with cons...).
It's complex enough that it encourages people to stare at it for a few minutes and engage with it, which is also good.
I get paid to write C++ code. I'm pretty good at it. I understand its syntax and normal usage very well. I don't fucking know what a god damn thing means in Haskell.
λ 5 + 7
12 :: Num a => a
What the christ? The 12 I get. Got it. The colons? Not sure. I think it's just a dumb separator. Num is a type! What the hell is a => a? I have no idea.
In the top they have an example.
primes = sieve [2..]
  where sieve (p:xs) =
          p : sieve [x | x <- xs, x `mod` p /= 0]
Jesus. Where to begin? First of all I had to consult a dictionary for sieve. I guess primes is a thing (function?) that takes 2 or more things? Not sure. Not sure what 'where' means in this context. Don't have a good guess. Have absolutely no fucking clue what "p:xs" means. Also no idea what "p : sieve" means. I am equally stumped as to what "[x | x <- xs," means. The "x 'mod' p" I can guess! The /= 0 I'm not sure. Maybe "/=" is equivalent to "!=" ?
And here's the thing. I've tried to read a dozen or so tutorials on Haskell and I give up every time because the syntax isn't explained. Please, please, for the love of god, just tell me what your abstract symbols represent!
I remember being thoroughly confused by what I was reading back when I started learning Clojure, after years in C# and ruby. At first I thought no normal human being could understand just what the heck was happening amongst all of those parentheses, but eventually it became second nature. Now I've probably written a good 50KLOC in the language and to me it feels semantically dense, but nonetheless very readable.
I then remember switching to Haskell and thinking the same thing: "What the heck is this alien-speak mumbo-jumbo, how can anybody understand this gibberish?! Phooey!". Except this time I could actually remember myself saying the same exact thing 2 years earlier about Clojure and knew that if I just persevered, it'd eventually feel perfectly natural. It's been a few months, and it feels more natural now, even though admittedly Haskell has a lot more language constructs than the very basic lisp, so it takes a bit longer to build up that visual pattern recognition.
For me it's mostly about codebase scaling, refactoring and maintainability. Clojure is liberating and exciting when you're writing a tiny little project, but it's a whole other experience when you need to refactor dozens of files because you changed the format of the data being passed around, or you're changing an internal API that's called from a hundred different places.
You better have perfect code coverage, or you'll have no clue why and where something broke (the sink/source problem) or perhaps you won't even find out for a while because that scenario wasn't sufficiently tested and it slips into production. Having a compiler nag you about type inconsistencies is incredibly helpful in these scenarios.
The other big one is working with large blobs of data. Our product has a large analytics component to it, and massaging giant, deeply nested maps representing a certain compendium of statistics is really tough without the compiler spotting you. None of this is an issue when you have to satisfy a certain type, the compiler will basically give you a checklist of things to fix when you change something.
With Haskell you're getting all of the benefits of Clojure (expressiveness, leverage etc), plus the really useful addition of types and enforced purity on top.
I'm actually a Schema user and I love it; it's pretty convenient for addressing the boundary issue, since without types you cannot really enforce that outside input conforms perfectly to the expected schema without run-time validation.
Schema helps (I used to use Mississippi back in the day with all sorts of complex checks, it was ugly), but it's more of an "add-on" in the end. I actually have a reasonably convenient flow for validating outside API input, business logic constraints and performing the necessary operations against the data store all in one big error monad nowadays. The problem is that I still have to go out of my way to add these input/output validation checks to everything, when I could just get it for free through the type system.
Look, don't take my word for it: I strongly encourage you to try the two paradigms yourself and see what you think. Unfortunately it's very difficult to see the downsides without a large project, but you can probably extrapolate.
It's wonderful to see so many comments on Haskell! I began learning Haskell about a year ago. Coming from the world of C/Java/Ruby/Python, venturing into Haskell has been a phenomenal experience. While frustrating in the beginning, the payoff has been worth it. For me personally, I still wake up excited by the language & what's left to discover.
Aside from its technical merits, there's a certain expressive beauty and power to it. For example, I recently wanted to write a command-line version of 2048 in Haskell. Instead of being bogged down in the minutia of keeping track of array indexes and state variables, it was simply a matter of transposing lists (corresponding to board rotations). The entire game fit onto a single screen of code that reads like English (once you’re accustomed to the syntax - please don’t be scared off by that!)
I spent some time learning Clojure. From my limited experience, it's certainly easier to get up to speed writing working programs with Clojure. The simplicity of LISP syntax is hard to beat. And the fact that it runs atop the JVM makes things like cross-platform GUI programming a breeze compared to Haskell. But while macros are powerful, they really don't compare to the flexibility & composability of Haskell. Haskell provides some powerful abstractions which make building software easier, and these capabilities simply aren't possible in other languages — check out the Tony Morris videos on monads & monad transformers for more details [1]. He also explores why these abstractions, if they're indeed so powerful, aren't currently more prevalent in software engineering. He's dedicated to changing that.
LYAH is a great resource, but it can be a bit verbose at times. For those interested in diving into the language right away, I recommend checking out the University of Virginia CS 1501 Haskell lectures [2]. They start with the basics and gradually build up to more advanced concepts like functors, monoids, monads, etc. They even have a section on category theory at the end.
For a great intro to web development with Haskell, see Ryan Trinkle's talk on creating a link shortener (using the Snap web framework with a PostgreSQL backend). [3] Live demo [4]
For a more formal CS-style introduction to Haskell, see the lectures by Prof. Dr. Jürgen Giesl. [5]
Being a god in C++ won't give you anything regarding foreign paradigms. Are you familiar with other FP languages such as ML, Miranda, or even Lisp? More than the syntax, it's the semantics that differ a lot: laziness, immutability...
Not really. No. And that's the problem! You're right that semantics are what make the languages truly different. However it is literally impossible to even begin to understand semantics if you don't know the syntax. That's what is so frustrating. It might as well be written in Kanji. That's how meaningless it is to me and, I believe, most programmers.
That's the thing though. Because you have a ton of experience with C++, you'd probably be able to pick up ruby or python quite quickly (if you don't know them already), because despite the superficial syntactic differences, their semantics are, in the grand scheme of things, close enough to those of C++. You'll trip up on some of the subtler differences, but you'll have a relatively easy time interpreting what the differences mean. When you first look at haskell, the underlying semantics are different enough that your prior experience scarcely helps you interpret the different syntax. Cue howls of frustration.
What you're saying is like arguing that the Japanese should switch from Kanji to Latin characters because more people worldwide use them. That's the language, it's not going to change. It's just silly to criticize Haskell/Japanese for not being immediately understandable without study.
All I'm asking for is a Kanji to Latin dictionary. It's not about changing Haskell syntax. And it's not about "studying" the syntax. Haskell has funny syntax with funny symbols. That's totally fine. I just need someone to say "see this symbol? that means cat!" Most tutorials do a terrible, terrible job of that.
I know, I've been through this in college. To each his own, but I believe Haskell is too much higher-order logic for people to start with. By having deeply strict principles (laziness, immutability, monads, currying, ...) they managed to craft a language that is very tiny and very abstract; too abstract as a starter. These principles are very easy ones when they're made explicit. They can be shown in Scheme to an average coder with no issue.
"Most programmers"? I started learning Haskell without previous knowledge of any functional language. The syntax was as strange to me as C was when I first picked it up.
The problem (I think) is trying to translate your knowledge of C or C++ syntax to Haskell, when they are so different. Try learning Haskell from scratch. Don't assume anything. Learn it like you would learn Japanese :)
Ok I'll take a stab at a super high level crash course:
A note! Haskell has weird syntax. But it actually uses this syntax in cool ways, as opposed to some languages that have weird syntax in an effort to be different.
Let's consider the basic unit of Haskell programming -- the function!
Here is a function which returns its input:
f x = x
Notice that there are no parens or types or anything. Since functions are used so heavily, polluting them with parens would make your code look uglier than lisp.
Now notice that I didn't include a type signature. In Haskell, you can omit them if you want. The compiler will figure out the most general type. Think of this as the C++11 `auto` keyword on steroids.
If I were to include the type signature, it would look like this:
f :: a -> a
f x = x
Let's break down those symbols.
`::` just means "is of the type". So `f :: ...` is "function f is of the type..."
`a` is a type variable. Think of this like C++ templates! In C++, an equivalent function is
template <typename T>
T f(T x) { return x; }
Where `T` is the same as our `a` in Haskell. In Haskell, it's common to use the first few letters of the alphabet for type variables.
The `->` syntax is a bit funny, but it makes sense once you learn about function currying, which I won't explain now because it can be intimidating at first. For now, just read the type signature as: "The function f takes some argument of type `a`, and returns a value of the same type `a`."
Now let's consider a more complex function.
f x = 2 * x
As you can see, this is a doubling function. It takes a value and doubles it. We can imagine that the value it takes can't be any arbitrary type, because not all types can be doubled! So it doesn't make much sense to pass in a string.
In Java style, we might say that the type must implement the interface `Multipliable` or something. It turns out, Haskell has a system that is very roughly similar to Java interfaces or in C++, a class with a bunch of virtual methods.
Haskell calls these Typeclasses. So there is a typeclass that specifies that types in the typeclass must implement some basic numeric methods, such as +, -, etc. This typeclass is called `Num`.
So we can imagine, our doubling function only takes types that implement `Num`. Our type signature reads:
f :: Num a => a -> a
f x = x * 2
So everything between the `::` and the `=>` is a typeclass constraint. In this one, I just specify that type `a` must implement `Num`. If I also required `a` to have the equivalent of a Java `toString` method, I would say:
f :: (Show a, Num a) => a -> a
So now `a` must have a `toString` style method and must have numeric methods.
Now we are ready to approach the `primes` bit.
The first foreign-looking thing is [2..]. This is an infinite list from 2 to infinity. How can one have an infinite list? The answer is laziness.
Whenever I go and ask for the 10000th value, the list must extend itself to that value if it hasn't yet. So the list only grows for as much as you ask for, but in theory it will grow until you run out of memory.
The [m..n] syntax is just nice syntactic sugar because it's used so much. [1..10] == [1,2,3,4,5,6,7,8,9,10]
[1..] == [1,2,3,4,5,....infinity]
So this is what we know so far:
primes = sieve [2..]
Which means "primes is equal to the function sieve, called on an infinite list from 2 to infinity"
Now you ask, well what the hell is the function `sieve`?
Haskell has a nice sugar for defining scoped functions within other functions with the `where` keyword.
Consider the doubling function. I could instead do:
f x = x * (three - one)
  where
    three = 3
    one = 1
So it just lets you define values that get used in the function body in a convenient place. This is akin to math notation, where people say "blah blah x something blah, where x is blah".
So `sieve` is a scoped function (it cannot be accessed from outside of `primes`) that takes a list as an argument and apparently produces the primes that are in that list. How does it do it?
We see some weird syntax in the definition of `sieve`:
sieve (p:xs) = ...
What is this? It's pattern matching. Let me use fake C++ as an example. Consider the recursive factorial function:
int fact(int n) {
    if (n == 0) return 1;
    return n * fact(n - 1);
}
Now imagine if C++ let you do this instead:
int fact(0) { return 1; }
int fact(int n) { return n * fact(n-1); }
That is, you make a special case function body when the argument is 0. At runtime, if the arg is 0, it uses the special case body, otherwise it falls through to the general body.
So in Haskell, we could implement factorial like
fact :: (Eq a, Num a) => a -> a
fact 0 = 1
fact n = n * fact (n-1)
Very clean and sexy! But now you say "Yeah, but I could just use an if statement or a switch..."
Now we go into the funny syntax and power of that funny syntax! Not only can pattern matching match on values, but it can decompose those values. What do I mean? Again, let me use some fake C++ as an example.
Consider the C++ function
int f(pair<int, int> my_pair) {
    int x = my_pair.first;
    int y = my_pair.second;
    return x * y;
}
Ignore that there is no real reason to pull the values out into `x` and `y`. In a more complex function, I'm sure you can understand why it would be tedious to type `my_pair.first` over and over again and we would want to pull it into `x` or something.
Now imagine if C++ let us do this:
int f(pair<x, y>) {
    return x * y;
}
That is, f takes a pair, and in the type declaration we decompose the pair into its first and second parts, assigning them to the variables `x` and `y`. In Haskell, this is trivial.
Consider a direct translation of the C++:
f :: (Int, Int) -> Int
f my_pair = fst my_pair * snd my_pair
Now the idiomatic Haskell, with pattern matching decomposition:
f :: (Int, Int) -> Int
f (x,y) = x*y
Slick, right?
So what is `sieve (p:xs)` decomposing?
In Haskell, the default [] list is a linked list (there are random access arrays in various libraries). That means we can stick a value onto the front of the list in O(1) time. The way we stick a value onto the front of the list is with the `:` operator.
So
x = [1,2,3]
y = 0:x
`y` is now [0,1,2,3]
Just like we can compose lists with the `:` operator, we can decompose them via pattern matching with it.
So consider the function (I'll omit the type signature for brevity)
f (p:xs) = print (p, xs)
Let's say we give `f` the argument [1,2,3,4]. What call to `:` would we have to make to get this list?
[1,2,3,4] == 1:[2,3,4]
So when we send [1,2,3,4] into `f`, it gets decomposed into ` 1:[2,3,4] ` where 1 gets put into `p` and [2,3,4] gets put into `xs`.
This is a very common idiom in Haskell. Let's say I wanted to make a function that added 1 to every value in a list. Ignoring that there are much cooler/efficient ways to do this in Haskell, let's make a specialized function for it.
addOne (x:xs) = x+1 : addOne xs
So let's read through that. In our pattern matching decomposition, we pull off the first value of the list and store it in `x`, and we take the rest of the list and store it in `xs`.
Now in our function body, we recompose a list with different values. Those values are `x+1` for the first value, and `addOne xs` for the second value. So we add one to the first value, and stick it into the front of the rest of the list after the rest has had 1 added to its values. You might notice now, that there is no bottom-out case for this recursion. So let's add one.
addOne [] = []
addOne (x:xs) = x+1 : addOne xs
So we pattern match to catch the special case of the empty list, which returns the empty list, and then we decompose the general case as previously explained.
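As an aside, one of those "much cooler ways" mentioned above is `map`, which captures exactly this recurse-over-a-list pattern in one line (a sketch):

```haskell
-- map applies a function to every element of a list;
-- (+1) is "add one" written as an operator section.
addOne :: Num a => [a] -> [a]
addOne = map (+1)
```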
So `sieve` is pulling off the first value, storing it in `p`, and taking the rest of the list and storing it in `xs`.
It is returning...
p : sieve [x | x <- xs, x `mod` p /= 0]
So, that is `p` stuck onto the front of...
sieve [x | x <- xs, x `mod` p /= 0]
And we know that sieve returns all of the primes within its argument, so in English...
p stuck onto the front of all the primes in [x | x <- xs, x `mod` p /= 0]
So now what the hell is [x | x <- xs, x `mod` p /= 0]? It is a list comprehension.
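A list comprehension reads like set-builder notation from math: "the list of every x, drawn from xs, such that x `mod` p is not 0". A couple of runnable examples (the names `evens` and `dropMultiplesOf` are ours, made up for illustration):

```haskell
-- "Every x drawn from [1..10] such that x is even":
evens :: [Int]
evens = [x | x <- [1..10], x `mod` 2 == 0]

-- The sieve's comprehension, extracted into its own function:
-- keep everything in xs that is not a multiple of p.
dropMultiplesOf :: Int -> [Int] -> [Int]
dropMultiplesOf p xs = [x | x <- xs, x `mod` p /= 0]
```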
Wow, you should turn this into a tutorial on FPComplete or something. Maybe there could be an "explain" link under the Haskell example on the homepage and it link to a tutorial like this one.
I'm actually recoding my website right now with a webserver and framework I custom built in Haskell. Maybe I'll make some of my first posts on this topic in a more formal, less commenty manner.
Well, you can use it to match on any value. So it's more useful than just slicing up lists.
Python has a primitive form of pattern matching in the form of unpacking; for example, this was legal code in Python 2 (tuple parameter unpacking was removed in Python 3):
def tuple_decomp((a, b)):
    return a + b
You pass in a 2-tuple (1,2) and 1 gets put into `a` and 2 gets put into `b`. It will error on anything but a 2-tuple.
As I showed in my examples, which I think you might have overlooked if you only saw the list slicing example, it provides a nice way of writing base cases for recursive functions:
factorial 0 = 1
factorial n = n * factorial (n-1)
as opposed to
factorial n = if n == 0
                then 1
                else n * factorial (n-1)
It's also really useful if you have a special case that you can provide a better implementation for:
multiply 0 _ = 0
multiply _ 0 = 0
multiply n m = n * m
The _ means "ignore this argument, I won't use it". So in the case that either arg is 0, we just fast-fail and return 0. Since Haskell is lazy, if I say
multiply 0 (f (g (h 999)))
where f (g (h 999)) is going to be some huge, time-consuming computation, the call will return instantly: the first clause matches on the 0 and never even looks at the other argument. (The clause order matters here; matching `multiply 0 _` forces the first argument, so an expensive first argument would still be evaluated.)
In C et al., the arguments to a function are evaluated before they are passed in, so if I say
multiply(0, f(g(h(999))));
it will calculate the value of f(g(h(999))) and send it in anyway. In Haskell, you just send in a sort of pointer which points to the unevaluated expression, which, only if you ask for it, will be evaluated.
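A runnable sketch of this, where `undefined` stands in for an expensive (or crashing) computation; evaluating `undefined` would abort the program, so reaching the answer proves it was never touched:

```haskell
multiply :: Integer -> Integer -> Integer
multiply 0 _ = 0
multiply _ 0 = 0
multiply n m = n * m

-- The first clause matches on the 0 and never inspects the second
-- argument, so the undefined is never evaluated:
cheap :: Integer
cheap = multiply 0 undefined

-- By contrast, `multiply undefined 0` WOULD crash: matching the first
-- clause forces the first argument before the 0 is ever seen.
```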
Using pattern matching for decomposition is particularly useful in Haskell because of the custom datatypes. In Haskell, you can sort-of think of a data type as a C-union of C-structs. Let's say I define a datatype for Pets.
data Pet =
  Cat {
    color :: String,
    breed :: String,
    annoying :: Bool
  } |
  Dog {
    breed :: String,
    trained :: Bool
  }
So I can make a Cat or a Dog, but both are still of type Pet. They contain different parameters within them. So I can't ask a Cat if it's trained, and I can't ask a Dog if it's annoying.
Let's say I want to make a function to determine if I like a pet. My criteria are: I only like trained dogs. If a Dog is trained, I like it. As for cats, I like it if it is not annoying, or if it is Yellow.
Rather than a mess of if statements, it's much more elegant to use pattern matching!
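The function being described might look something like this (a reconstruction in the spirit of the description; the name `like` and the example field values are ours, and the Pet type is repeated so the snippet compiles on its own):

```haskell
-- The same Pet type as above, repeated for self-containment.
data Pet =
  Cat { color :: String, breed :: String, annoying :: Bool } |
  Dog { breed :: String, trained :: Bool }

-- I only like trained dogs; I like a cat if it's yellow, or if it's
-- not annoying. No if statements: one pattern per case, and the
-- patterns reach inside the constructors to grab the fields.
like :: Pet -> Bool
like (Dog _ isTrained)    = isTrained
like (Cat "Yellow" _ _)   = True
like (Cat _ _ isAnnoying) = not isAnnoying
```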
So as you can see, I can pattern match on constructors (Cat or Dog), and on the values within the types without putting any code in the actual function body! Compare that code to the same code with if statements and tell me which is prettier!
The way I'd be interested in seeing a high-level introduction would be having some snippets of code in C, and then their Haskell equivalents. Then an explanation, if that's needed. Because otherwise I'm spending most of the time trying to understand what the code examples you wrote actually do in English.
Very true, but helpfully many other languages have a similar syntax (C, Java, JavaScript, even PHP's OOP system looks like Java) so it is easy to move between them.
This looks very alien. Is there any language with a similar syntax?
Yes, plenty of other functional languages. I have literally never seen a line of Haskell before but I know ML, Scala and Python, so I can read the line pretty easily:
primes = sieve[2..] where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]
[2..] is a list starting at 2; "where" is pretty clearly defining the function after using it. So sieve is a function; p:xs is a destructuring assignment or pattern or whatever you call it (I mean, xs clearly isn't a type, so I guess Haskell writes lists as 1 : 2 : 3 : Nil, where Scala or ML would use ::). (Which means this is a function that only works on a nonempty list, which seems a bit sneaky, but we can see that it's never going to be called for an empty one).
So sieve(p:xs) returns p:sieve(...), a fairly straightforward recursion. And the ... is fairly obviously a list comprehension I'd write in Python as "[x for x in xs if x % p != 0]".
Did I get it right?
The language certainly has an excess of symbols - I think the Python way of writing that list comprehension is much clearer - but it's certainly not incomprehensible; the "where" makes perfect sense, and the [] make a good visual distinction. The : form of lists is not the C way, but it's by no means unknown. The destructuring assignment might be unfamiliar, but once you've got that p:xs is a list there's only really one thing it could mean (and even in Python you can write (a, b) = something()). To a non-Haskeller it's kind of surprising that the code doesn't stack overflow from infinite recursion, but again, given that the code works, there's only one thing it could possibly mean.
This was much easier than the time when I tried to read some Clojure.
Edit: Just to be sure I got it, I rewrote the one-liner in Python. Had to use generators to get the laziness, and had to split it in two lines for the function:
def sieve(xs): p = next(xs); yield p; yield from sieve(x for x in xs if x % p != 0)
primes = sieve(itertools.count(2))
The syntax should not really be a primary concern when evaluating a new language. It is easy enough to pick up if you understand the concepts behind the language. Haskell has not all that many keywords and a lot of syntactic sugar, there are no public final static member variables and no insane *((&a)++)++.
That's a good attitude to have when approaching a new language. I should try a new language if and when I have a need for one, or a desire to learn some new concepts.
To complement some of the great answers here, I would like to point out that in Haskell, instead of objects carrying around a vtable pointer, the typeclass dictionaries get passed around as extra implicit arguments to the functions.
This is important when you have binary operations like `+`. In Haskell both operands must have the same type, while with OO interfaces the operands can have different concrete implementations.
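One way to visualize the dictionary passing (a hand-rolled sketch of the idea; the names `NumDict`, `intNumDict`, and `double` are illustrative, not GHC's actual internals):

```haskell
-- A constraint like (Num a => ...) roughly compiles to this: the class
-- becomes a record of functions (the "dictionary") ...
data NumDict a = NumDict
  { dictPlus  :: a -> a -> a
  , dictTimes :: a -> a -> a
  }

-- ... an instance becomes a concrete dictionary value ...
intNumDict :: NumDict Int
intNumDict = NumDict { dictPlus = (+), dictTimes = (*) }

-- ... and a function `double :: Num a => a -> a` becomes one that
-- takes the dictionary as an extra, normally invisible, argument:
double :: NumDict a -> a -> a
double d x = dictPlus d x x
```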
I wrote a page that tries to explain the syntax in a way that's appropriate for programmers (who don't need to be shown how the REPL can be used as a calculator and things like that): http://dv.devio.us/a-quick-look-at-haskell.html
"
λ 5 + 7
12 :: Num a => a
What the christ? The 12 I get. Got it. The colons? Not sure. I think it's just a dumb separator. Num is type! What the hell is a => a? I have no idea."
:: means "the stuff after this is the type signature"
Normally, a type signature can be as simple as something like
'x' :: Char
which is the type signature of the character 'x'. Looking at the type signature of 12 shows two parts "Num a =>" and "a". This can be read as "it returns a generic type `a` that must be an instance of the type class Num", which sounds really complicated but isn't.
The "Num a =>" is a type class constraint on the returned type "a". Type classes are basically like interfaces, they set up constraints and methods that need to be implemented for that type, analogous to how in other languages a class can implement an interface. For example, the Eq type class mandates you define the (==) and (/=) methods for that type, analogous to how something like the interface Comparable in Java requires you to define the compareTo method.
Thus the concrete type Integer is an instance of the type class Eq, because it implements those (==) and (/=) methods.
Num is an example of a type class, just like Eq. But Num requires you to define a few more methods:
class Num a where
    (+) :: a -> a -> a
    (*) :: a -> a -> a
    (-) :: a -> a -> a
    negate :: a -> a
    abs :: a -> a
    signum :: a -> a
    fromInteger :: Integer -> a
So all of the things you think of as "numbers" are all types that implement "Num", e.g. an Integer, a Float, a Double.
So going back, "12 :: Num a => a" means 12 can be any type which implements Num, which is really just a fancy way of saying "this number can be cast into any numeric type". You can perform this casting manually!
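For example, annotating the same polymorphic literal with different concrete types performs that "cast" (a small sketch; the names are ours):

```haskell
-- A numeric literal has the polymorphic type Num a => a, so the
-- same definition can be used at several concrete types:
twelve :: Num a => a
twelve = 12

asInt :: Int
asInt = twelve

asDouble :: Double
asDouble = twelve
```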
> I guess primes is a thing (function?) that takes 2 or more things?
Primes is an infinite list of integers containing all prime numbers. It's not a function, it's a value just like any other list of integers.
You obviously can't do something like that in C++ or most other languages. I think that's a pretty good example to demonstrate the difference between a pure and lazy language like Haskell and the more mainstream kind of languages.
If you're primarily a C++ programmer, Haskell should feel alien to you. It's a good thing. The differences in syntax are only skin deep, the differences in the semantics is what is interesting.
> Primes is an infinite list of integers containing all prime numbers. It's not a function, it's a value just like any other list of integers.
> You obviously can't do something like that in C++ or most other languages.
While the syntax is different, you can create a value just like any other iterable sequence that happens to be a generator for the infinite series of integers resulting from filtering another infinite series of integers by a particular function (which depends on the previous values of the series) in many, and possibly most, modern languages. (And the example here -- primes -- is pretty much the canonical example for every language.)
Definitely in most of the popular dynamic languages (Ruby, Python, etc.). And plenty of popular static OO languages, too. (Pretty sure both Java and C#, and definitely sure that Scala and other JVM and .NET static languages can.)
> you can create a value just like any other iterable sequence that happens to be a generator for the infinite series of integers...
I know that. Almost every language out there has some kind of generator/iterator protocol for "lazy" sequences. But they're not a list like any other list out there. You can (usually) iterate through them only once.
And you can't apply a generator/iterator type of deal to, say, a tree structure or any other non-linear data structure.
In Haskell, every value works like that (unless you explicitly tell otherwise). This is what the "primes" example is trying to show.
Honestly, we call them computer languages for a reason. To a certain extent, you're saying you are very good at English but can't for the life of you understand Japanese, with all those weird symbols! It takes a while to get familiarized with a language, especially when the paradigm is completely different. I'm not sure if this paradigm change is a good or a bad thing, but you'll have to make an initial effort, that's for sure.
I agree, especially about the ":: Num a => a" stuff in the tutorial. berdario's comment above is illuminating, but it was pretty confusing and distracting while doing the tutorial. Not to say it shouldn't be there (I'm guessing it's important) but it should be explained.
Some acknowledgement would go a long way-- in the first "slide" after you see that, a sentence about the output format seems appropriate.
if you want to write a times2 function (f(x) = x*2), its type would be (with redundant parentheses added)
(Num a) => (a -> a)
so, a -> a means it's a function that takes an `a` and returns an `a`... and `a` can be anything that implements Num
The idea with haskell is that you heavily rely on polymorphism on the return types, this might make using code a little more awkward, since you have to explicitly say the type that you want to constrain to, but it makes writing generic libraries/APIs a lot easier, you might find this question interesting:
in Python, an infinite sequence of numbers, starting from 2
sieve something where sieve (p:xs) = yadayada
is usually written on multiple lines
sieve something
  where
    sieve (p:xs) = yadayada
if you know that you can define a function with
f x = something_with_x
it's somewhat obvious that you're defining a function called sieve that takes a (p:xs)
so, you're just defining a function and using it (with the [2..]) argument on the same line
(p:xs)
is destructuring: it basically takes a list (`(:)` is used to `cons`truct lists) and assigns the first element of it to `p`, and the rest to `xs` (xs is a commonly used name for this in Haskell)
so, it's like calling
sieve(list);
but you can define its signature as
something sieve(T head, list<T> rest);
(`something` will turn out to be `list<T>`)
/=
is indeed the same as !=, so (x `mod` p /= 0) is just (x is not a multiple of p)
[x | x <- xs, x `mod` p /= 0]
is a list comprehension, and is taking all the elements from xs that aren't multiples of p
p, as suggested by the letter, is a prime... so you're recursively filtering out the elements that are multiples of p, leaving only the ones that are multiples of nothing but 1 and themselves (that is, primes)
finally, you're prepending the prime you're currently acting on (p) to the infinite lazy list of all subsequent primes
It's really distracting that typing in the REPL changes the size of the containing div.
The blurry photo of (I assume) the audience of a lecture just doesn't work - what's it supposed to do?
There's very little useful information - everything is at least one click away. If the whole purpose of the site is to 'sell' to a new audience then I guess that's not an issue, but I expect to be able to go to the site of a language & get very quickly and easily to useful manual/wiki pages explaining whatever feature I'm having trouble with.
The basic style is very 'modern' but it looks more like a Kickstarter page that's there to sell you something trendy and new than the front page of a site giving you information about a language. It's very pretty but (to me) sends completely the wrong message about what this site is.
Also, typing their code example (the one right at the top, pride of place) into their 'Try it' REPL throws an error:
    primes = sieve [2..]
      where sieve (p:xs) =
              p : sieve [x | x <- xs, x `mod` p /= 0]

which gives:

    <hint>:1:8: parse error on input `='
Now I have almost no knowledge of Haskell, so I'm probably making some really basic mistake, but I figure most visitors will try that and then get put off
Yes, that's certainly an issue. In fact the "Try it" box also seems to work differently from the standard haskell live interpreter, in which you'd be able to define functions by prefixing them with 'let '.
If you still want to try it out, say
let sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0] in sieve [2..]
λ primes = sieve [2..] where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]
<hint>:1:8: parse error on input `='
Doesn't work.
Now, I know enough about Haskell to know what I can and can't type into ghci, but what about people who are encountering Haskell for the first time? They'll try to run the given example in the "Try It" section and will get nothing but errors. Just my two cents.
P.S. Is Haskell still avoiding success at all costs? (A philosophy I continue to be okay with, but it seems to be getting futile :) )
Haskell is still avoiding (success at all costs); the bracketing is very important. Haskell may seem like a slow-to-develop language and community at times, but this is because most users want to see well-thought-out solutions to problems, and if there is theory backing up that what they're doing is a good idea, even better.
> How often do programs crash because of an unexpected null value? Haskell programs never do!
> head []
error "Prelude.head: empty list"
Ok, so you don't call it `null`, but it still crashes the program. It's deceptive to claim null isn't present because it's conventional to avoid using "error" in favor of Maybe, but it's still there, and used throughout Prelude. The REPL on the site hides these errors too, which is going to be pretty confusing to a newcomer who takes the head of the empty list and gets back "null" (the absence of a value) of type a.
> (head x) + (head (head x))
:: Num a => a
Hiding errors makes it hard for a beginner to see what went wrong, although perhaps not in this trivial example.
There's a difference between crashing and returning null. If head returned null on an empty list, that null could be passed around to different places in the program before it crashed from a null pointer exception.
There can be unexpected crashes in Haskell, but not because of null.
It's not equivalent to null, but analogous to it. With lazy evaluation, I can also pass around the expression `head []` to different places, and it won't complain until I attempt to evaluate it - much like a language with null won't complain until you try to dereference it. Sure there's a difference though - you can see immediately where the problem is with Haskell's error, but it's more difficult to trace why a value might be null in imperative code (which I think has more to do with mutability than the presence of null).
I was also highlighting that the tryhaskell REPL actually behaves like null, because it doesn't terminate immediately as you would expect.
Just yesterday I was discussing this topic with the author of the Mars language, because his record implementation is equivalent to Haskell's and is flawed in the same way (sum types + records leading to what is effectively the equivalent of `null`) [https://news.ycombinator.com/item?id=8005116]. Given a new language like Mars, he could provide a fix, but we have too much baggage to break Haskell's implementation.
> With lazy evaluation, I can pass also around the expression `head []` to different places, and it won't complain until I attempt to evaluate it - much like a language with null won't complain until you try to dereference it.
It won't complain until you try to evaluate it temporally, but the stack trace will point to the call to head.
Yeah. A major reason null is called the "billion dollar mistake" is that once you finally do get a NullPointerException (or the like), it can take a huge amount of time to track down where the null value originated.
If you take the head of an empty list in Haskell, you get an exception right away. Not a poisoning of the well, like you do in so many other languages.
That's absolutely a benefit of Haskell worth touting.
What makes it easier is immutability - if given an empty list, the only place that could've made this list empty is the place it was constructed - because there's no way some other function can come and delete items from it. A function which "removes" items from a list doesn't actually do any such thing - it creates a brand new one and adds all the same elements except the items you requested being removed.
In this way, there's only one possible path that the list could've come from - through the pure functions which use it - until head is reached. Given that each function is referentially transparent, applying the same input list to a function in the debugger will always produce the same result, so it's simple to call a function with some sample data in ghci, and the result you get will be the same result you get in the compiled program.
Debugging/tracing is perhaps more difficult than with tooling you might already be familiar with - but it's much more rare that you need to even use them, because it's obvious what values a function should return - they don't have any state which could influence otherwise.
And there are a billion ways to construct an empty list. You can't just grep for '[]'. It can be hidden inside a 'catMaybes', 'tail', or any other function which returns a list and makes no guarantees about its size, of which there are plenty.
So first of all, head as it is in Prelude is a partial function. The error isn't a null pointer exception, it is an issue with head not being defined on empty lists.
I personally would never use head in code I write. Instead I'd use a version which looks like [a] -> Maybe a which does properly solve the null problem.
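A minimal sketch of such a total replacement (the name `safeHead` is my choice here; the `safe` package on Hackage provides something similar):

```haskell
-- A total alternative to head: the empty-list case is handled
-- explicitly, so callers are forced to deal with Nothing.
safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x
```

The type now advertises the possibility of failure, so the compiler makes you pattern-match on the `Maybe` before you can use the result.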
On top of that, the compiler can warn/error if partial functions are used. I do this for all of my projects.
There has been a lot of debate over whether these functions should even exist in Prelude. I think they are a blight on the language.
For example, in the `head []` case, ghc has an option to locate the exact source of the error:
-xc
(Only available when the program is compiled for profiling.) When an exception is raised in the program, this option causes a stack trace to be dumped to stderr.
This can be particularly useful for debugging: if your program is complaining about a head [] error and you haven't got a clue which bit of code is causing it, compiling with -prof -fprof-auto and running with +RTS -xc -RTS will tell you exactly the call stack at the point the error was raised.
The output contains one report for each exception raised in the program (the program might raise and catch several exceptions during its execution), where each report looks something like this:
*** Exception raised (reporting due to +RTS -xc), stack
trace:
GHC.List.CAF
--> evaluated by: Main.polynomial.table_search,
called from Main.polynomial.theta_index,
called from Main.polynomial,
called from Main.zonal_pressure,
called from Main.make_pressure.p,
called from Main.make_pressure,
called from Main.compute_initial_state.p,
called from Main.compute_initial_state,
called from Main.CAF
...
There are a couple of Prelude functions that are broken, most notably `head` and `tail`. The problem is that they use non-exhaustive pattern matching in their definition:
    head :: [a] -> a
    head (a:_) = a

    tail :: [a] -> [a]
    tail (_:as) = as
Fortunately, use of these functions is unidiomatic, not just because they're one of the only ways that Haskell programs can crash, but because they're less clear than the alternative, pattern matching. The Haskell compiler will give you a warning by default if you write a function that performs a non-exhaustive match.
tl;dr: head and tail are mistakes, but they're never used.
I picked head and tail as primary examples, but it's not limited to them. As I discussed in the link I've posted to a discussion yesterday, one can easily create their own incomplete pattern matching by mistake, without the compiler complaining, by using records. eg:
data List a = Nil | Cons { head :: a, tail :: List a }
I encounter this misfeature of Haskell frequently - records and sum types don't mix, because they lead to an incomplete pattern match in the record fields.
Yes, it's unidiomatic Haskell, but a beginner does not know this, and just because it's conventional to avoid using it, does not mean it isn't still there - anyone could make this mistake.
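To illustrate the trap (field names changed to `hd`/`tl` to avoid clashing with Prelude): this compiles without complaint, yet the generated record selector is partial:

```haskell
-- The compiler silently generates hd :: List a -> a and
-- tl :: List a -> List a, even though neither is defined for Nil.
data List a = Nil | Cons { hd :: a, tl :: List a }

-- hd (Cons 1 Nil) is fine, but hd Nil compiles and then fails at
-- runtime with a "No match in record selector hd" exception.
example :: Int
example = hd (Cons 1 Nil)
```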
Yes, the new page is signed by Chris Done at the bottom. I'm glad the Haskellers listened to his advice and accepted the proposed design; now it's time to tweak it with some real use.
Great timing, as I've just started to explore Haskell in the last few days.
First, I really like the style of the new site. My only gripe is that the section with the lecture hall photo and the videos right below it don't seem like an efficient use of prime real estate (and there is no context for the videos; I have no idea what they are or why I would want to watch them. The thumbnails suggest that I'd be clicking on an hour-long lecture.). As a new user, I'd rather see the features in this space.
The box for "Try haskell expression here" is a little confusing, I tried to click on the white area to focus and didn't get any response. You apparently have to click on the same line, near the lambda symbol to focus?
Edit: Additionally, I really miss the lovely download page http://www.haskell.org/platform/ - and the non-home pages feel a little underwhelming and underdeveloped in general.
The "Try It" section really needs to have the text cursor be active anywhere I can click on the box to type. At first it made me think that I had to move over to the grey bracket-cursor (and thus a little frustrating/confusing/unexpected). Especially since I can click to the right (when the cursor is a normal pointer) and it gains focus.
This page is strongly trying to convert a viewer; the experience can be smoothed here.
I feel like the sieve is a poor example, since it's not actually the same algorithm as a sieve, and has a worse running time. It is possible to write a very nice lazy infinite sieve.
See O’Neill's "The Genuine Sieve of Eratosthenes" paper for details.
That example does match some high level definitions of quicksort; "pick a pivot element (the first element of the list), split the rest of the list into elements less than and greater than or equal to the pivot, quicksort those recursively, then reassemble the lesser, pivot and then greater elements". But I do agree it's not quicksort as it's usually known with its in place sorting and O(1) extra storage.
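For reference, the two-liner usually being discussed looks like this (a sketch; not in-place, so arguably not "real" quicksort, but it matches the high-level description above):

```haskell
-- Pick the head as the pivot, partition the rest with list
-- comprehensions, sort each side recursively, and reassemble.
quicksort :: Ord a => [a] -> [a]
quicksort []     = []
quicksort (p:xs) =
  quicksort [x | x <- xs, x < p] ++ [p] ++ quicksort [x | x <- xs, x >= p]
```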
Love it! Looks like all that's missing are the "View Examples" links under the features.
One note: the quick walkthrough mentions "up there" in reference to the repl, but the repl is actually to the left of that text when viewing on a non-mobile-sized screen. It should probably just not mention the relative positioning.
I suspect Fibonacci would be a more familiar code sample than Sieve, no?
Would love to see a " this is where to use Haskell and why guide" just something a novice can see and say "oh this is why I should learn this instead of x"
I like the new site. It's beautiful and elegant. The weird flowers on the download page are gone. But the link to "Introduction to Functional Programming Using Haskell" takes me to the publisher page for the textbook "Aqueous Environmental Geochemistry".
I like how it says "Rock-Solid Ecosystem", yet I have had the exact opposite experience trying to install even the most basic things with Cabal. I still can't get the Sublime text haskell plugin to work due to a dependency that fails to compile.
I had a similar experience with Cabal. On one computer I haven't been able to install Yesod, whereas on another one it finally worked after I had wiped my ~/.cabal. It gave me the impression that Cabal's dependency resolution mechanism is still a bit brittle.
Also I found that installing stuff through Cabal was pretty slow. It's probably partly because Haskell libraries tend to be kept narrow in scope, so it's necessary to install a lot of small packages to get a piece of functionality (take for instance the dependency list of Aeson, which seems to be the recommended choice for working with JSON: http://hackage.haskell.org/package/aeson ). Another reason is that Cabal compiles Haskell code into native code.
The site feels too flashy, and the giant photo feels distracting. It would be nice if it felt "cleaner." As it is, I feel like a newcomer would find it somewhat overwhelming visually.
I think something nice and simple like Rust's homepage[1] would be nice. Or maybe something like Racket's homepage[2].
Something that bugs me is I couldn't copy the code in the upper right to the code "try it" section. I haven't used haskell for 6-12 months, so I'm not sure what I'm doing wrong.
It's a somewhat limited shell. You can't make local definitions. Normally you'd do that using `let` in GHCi, but this prompt only evaluates expressions.
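For example, in a full GHCi session a local definition works like this (sketched in the comments), while the site's prompt needs the single-expression `let ... in ...` form:

```haskell
-- In GHCi you would write:
--   Prelude> let double x = x * 2
--   Prelude> double 21
--   42
-- In an expression-only prompt, the equivalent one-liner is:
doubleExample :: Int
doubleExample = let double x = x * 2 in double 21
```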
I love Haskell and OCaml, and other functional languages. However, no matter how intelligent and high-level functional languages become, I don't think they can replace procedural languages, especially for time-based programming, such as user interface effects, animations, or network-dependent operations. Some features are inherently procedural by their nature, and functional languages are not functional for describing them.
Many people know C++ and Java but have no sense for functional programming. It would help them to read equivalent known code to compare it with Haskell.
For instance FP Complete's Introduction to Haskell presents some ugly Java code which can be written in two elegant lines in Haskell (video around 03:25 min).
This gets brought up often, but side-by-side comparisons don't really work, because the languages are too different for them to be meaningful for anything but trivial expressions. To really understand Haskell, you need to somewhat unlearn Java - or at least, stop thinking in it (in terms of objects, state and method calling).
I think a good approach to thinking differently is to look at how data structures are done - most programmers recognize arrays, lists, stacks, and queues, but might be confused as to how you would implement such a thing without side effects. A great resource to explain this is Okasaki's Purely Functional Data Structures[1]; although it doesn't use Haskell at all, the ideas are relevant, and you can work through all of the examples by implementing Haskell equivalents.
Scores high on layout, not so much on usability (I, too, took a while to realize there was a live REPL on the page) or readability.
(I'm one of those people who tried to learn Haskell a couple of times but lacked use/time to really get to grips with it, so I can actually parse some of it, but the samples ought to be simpler.)
Reading further, it looks like it is the correct implementation of "Turner's Sieve", but I don't think (though I may be wrong) that I'm in the minority that sees "prime" and "sieve" and thinks "Eratosthenes".
I think CIS 194 would be a nice addition to the book listing section. It's a great course with exercises, and at the same time it refers to chapters of both LYAH and RWH.
Lovely page, but that primes example is going to put off more people than it attracts, IMO. If you're not familiar with Haskell, or functional programming in general, the syntax and concepts shown there would appear alien and obtuse.
I like the look. A question: I have always used the "The Haskell Platform" download page and installer. The new download page looks like it is just Haskell, cabal, and default libraries.
I am running ghc version 7.6.3 and cabal version 1.16.0.2.
Haskell experts: should I do a fresh install? (I am on OS X and Ubuntu).
(Here we are writing list construction in infix notation, rather than the customary prefix notation). '[1,2,3]' is shorthand for 1 : (2 : (3 : [])) in Haskell.
Thanks, I appreciate the walkthrough. So it's essentially just copying an array (in this particular example). I'd still argue that the syntax is not natural - since you have to have specific knowledge of the operator and the function.
Syntactic naturality seems like it's mostly a function of familiarity. That said, the example has, for a lot of mathematical reasons, a great deal of semantic naturality.
It's far from immediately obvious, but `foldr (:) []` is the way to copy singly-linked lists. In particular, if you look at a linked list as a degenerate tree then what `foldr f z` does is replace all of the nodes with `f` and the single leaf with `z`. Since cons, i.e. (:), creates a node and nil, i.e. [], is that single leaf then `foldr (:) []` is a very natural way to do nothing at all.
So it's kind of the most boring interesting example imaginable.
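A small sketch of that "boring" identity:

```haskell
-- foldr f z replaces every (:) in the list with f and the final []
-- with z; with f = (:) and z = [], the list is rebuilt unchanged.
copyList :: [a] -> [a]
copyList = foldr (:) []
```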
Depends on the intended meaning -- if it is that Haskell is advanced among purely-functional programming languages, rather than both advanced and purely-functional among programming languages, then the former is far better.
Looks really good, but as others noted, the examples should really work `as is` within the REPL. Also I think Wikipedia has better example code. Everybody knows Fibonacci and can compare it.
The downloads page is a bit too deemphasized; the big, bright Download button in the old page was a good call to action. However, I am really glad to see that the downloads page no longer recommends the Haskell Platform for OS X or Linux.
I wish there were better free book recommendations on the Documentation page. LYAH's style is obnoxious, and RWH has gotten quite long in the tooth.
I agree about the download page being too subtle: I _really_ love this new redesign - the old site was impossible to figure out what I was supposed to do or what was going on or what the hell "cabal" was. This new site is clean and clear: but still missing a nice big call to action (besides the live coding part).
Once you've looked at the live coding, the next thing is "community" which is great for docs, but not so great for "getting up and running" - which was also hard to figure out on the old site. I'd prefer to be lead from the live coding to clear instructions how to get Haskell running on my system.
I like the new homepage. One little nit-pick, I actually find the quicksort example a better candidate (instead of the sieve one). I think it highlights the expressiveness of the language a lot, and everybody implemented quicksort in some other language at one point, so they can compare.
I used to hate Haskell at University, until the final couple of weeks of the module when I started to "get" it. After that, exams and dissertation (which was in Python) got on top of me and I never revisited it. Now, coming back to it I am starting to remember the fun I had :)
I dislike this. Where is the 'call for me to do something'?
Why are there blurred people on an escalator(?). Don't make me pick between concise and reliable.
I'm more than willing to distill your messaging if it would help in any way. If only as an alternative.
Excellent! This is a big improvement. There seem to be some bugs with the "try it now" terminal, but it was easy enough that I actually gave Haskell a try after many years. It's got me interested enough to actually install it and start hacking away!
I find the disappearance of the Haskell text and logo in the navigation bar to be really strange. It feels like I'm going to a completely different website every time I click anything on the menu.
I think it should always stay there, or something else should be done entirely.
Just a nitpick: Why don't you stay consistent and use Open Sans for all the main text and the logotype at the top. Having "Haskell" in Ubuntu when the rest of the site is not stylized like that looks a little silly.
The "Try It" section doesn't load without cookies/localStorage enabled. Having run-time state should be fine for tracking where someone is up to on the tutorial; persistent state shouldn't be necessary.
I don't quite like this. The old one had front page links to the haskell wiki. This design is really information undense. Also, links to Hoogle and packages would be fantastic in the front page
It's pretty! BUT! I get the feeling that I am viewing websites through a post box slot. This seems to be the trend with currently fashionable sites, with the wall to wall slot appearance.
Off-topic: I want to learn a functional programming language, and I was thinking to go with Scheme (because, you know, SICP...) Would there be any advantage for me to go with Haskell instead?
Scheme and Haskell are very different languages. Both are functional, but that's about where the similarities end. Scheme is impure, Haskell is pure; Scheme is strict, Haskell lazy; Scheme is dynamically typed, Haskell static; all functions in Scheme are variadic, all in Haskell are unary; etc. So it depends on what you're looking for, and what you're trying to use it to do. That said, here are some advantages of Haskell:
1) Speed. Haskell binaries are highly optimized and tend to be very fast.
2) Correctness. Haskell's type system is far more advanced than any other language with as much or more usage, and is very good at preventing runtime errors. (and can go a long way to preventing logic errors). Also, its purity prevents huge classes of bugs.
3) Interesting: Haskell introduces a lot of new concepts which can really open up your understanding of computer science. It's a lot of fun.
4) Forward-looking: many of the ideas introduced or popularized by Haskell, such as pattern-matching, type classes, no null pointers, etc, are manifesting themselves in the new languages these days (such as Rust and Swift). Haskell itself is also (slowly) making its way into industry. Learning Haskell, in some ways, exposes you to the "next generation" of languages and programming techniques. I'm not sure the same can be said of Scheme, which tends to be used pedagogically more than as a means to push the envelope.
Then again, I know Haskell a lot better than I know Scheme, so maybe I'm biased. But at least, it gives you something to think about.
By training I am not a computer scientist, but an applied mathematician, so it is a little difficult for me to make judgement calls on things like type system, lazy evaluation, unary/variadic (don't even know what that means). I am interested in AI (machine learning) and statistical inference. I was looking at Lisp/Scheme because I know that AI needs gave birth to Lisp. Plus you have newer things like Church[1] that extends Scheme to deal with probabilistic models.
Perhaps, but it wouldn't hurt to pollute your mind with SICP first. At least watch the Abelson/Sussman videos (an accelerated version for HP employees); the audio sucks on a couple of them (I mean, in a couple of the lecture videos the audio sucks with much greater force than it sucks in the rest of them, but it's still not quite as bad as Feynman's Robb lectures), but it's a small time sink for a lot of enlightenment.
I think Haskell deserves a concise, functional homepage. Ideally, without any side effects (like animation). Every URL should always return either the same content, or an appropriate monad.
I really, really want to get into Haskell and functional programming in general, but I have no idea what sort of project to build with it. Anyone have any good suggestions?
Haskell can be used for pretty much everything (even hard realtime, with a code-generating DSL called Atom), so just think about what you enjoy and maybe ask beforehand for pointers to libraries you might use for that task.
If there is anything you like that is also algorithmically challenging, all the better. It's very nice to express math in Haskell.
What would the REPL look like with progressive enhancement?
"Thanks submitting a Haskell line! You're being redirected to the output of the evaluation. If you're not being redirected automatically, please click _here_"