Why Haskell Is Worth Learning (atomicobject.com)
125 points by ColinWright on March 5, 2013 | hide | past | favorite | 108 comments


I have to admit that I'm really on the fence about Haskell. My major gripe with this post, I think, is point (3).

Haskell has (IMHO) gone down a similar road to Scala. Scala has literally tied itself into knots to make statically typesafe collections (amongst other things). I see parallels with Haskell (maintaining functional purity but dealing with the "outside world" through monads).

I'm not saying this is wrong. What I am saying is that the gap between making something 99% consistent and 100% consistent is often huge in terms of complexity. This is why I think the so-called mixed paradigm languages (Python, Ruby, even C#) have done so well since you can use these programming models while still writing easy-to-understand imperative code.

This is also why I'm so bullish on the future of Go. Someone wrote a book called Learning Go that tells you most of what you need to know and you can knock that out in an afternoon. Go has a good model for parallel programming and a simple syntax. It has made the choice of simplicity over completeness (eg no "generics").

The net effect is that Haskell will never be mainstream (IMHO) because it's too complex whereas Go probably will be because it's incredibly simple.

We've had complex functional languages (eg Lisp) for decades. Yet they haven't gone mainstream. You have to ask yourself why. Some pride themselves on using something so esoteric. Others attribute the lack of popularity to poor marketing or bad timing. The answer (again, IMHO) is that simplicity matters.

Now it should be noted that we're talking about perceived simplicity. You can write multithreaded code in Java, C or C++ but getting it right is incredibly hard yet it can appear simple. The goal of any language I think should be to narrow or eliminate the gap between perceived simplicity and actual simplicity. IMHO Go does a remarkably good job of this.

FWIW Python and Go also play nice with C.

It's true functional programming does change the way you think but I think writing idiomatic Python or Ruby will get you much the same benefit at much lower cost.


What Lisp are you talking about? If Lisp isn't simple, what is, then? Sure, if you want to become a skilled programmer in Lisp, it would take many years. But that is applicable to anything in life after all.

The original Lisp 1.5 Programmers Manual had 106 pages, including appendices and a glossary. Scheme specification is very small too.

It may be arguable that all the parentheses are unappealing, but not that the language is complex.


I don't think simplicity is as much of a recipe for language popularity as you claim. Scheme is a very simple language, far simpler than any of the languages you mentioned, yet it doesn't get much use in industry. Standard ML is a simple language, simple enough to be formally specified even. Most computer science students easily pick it up and learn it as part of a compilers course, so it's not that the language is simple in theory but hard to grasp. Lua is a wonderfully simple language, and it's found a good niche in games and configuration, yet it's nowhere near the popularity of its older, more complex sibling JavaScript.

On the other hand, C++ is probably the most complex language on the planet and it's hugely popular. PHP is an incredibly complex language—it surprises almost everyone who uses it—and it's extremely popular as well.

Language simplicity is just one of many factors influencing a language's success.


PHP, C and C++ are popular because they are imperative languages. For non-mathematicians, "imperativism" is far easier to grasp than functional programming, closures, first-class functions, monads, ...

It is easy to understand what an IF statement does, what a FOR loop is, and what a basic PROCEDURE or FUNCTION is.

It is not easy at all to think in terms of pure functions, immutability, etc., because you need to actually design your code before writing it.

I don't think Haskell is hard because of the syntax; it is hard because it is a functional language.


You say that, but it doesn't match my experience at all. I've taught high-schoolers basic Java and they had all sorts of problems with for-loops and procedures. In fact, it took me a while just to describe what a (mutable) variable was! To many of them, the idea of changing value was not intuitive at all. It didn't help that Java coopted well-known syntax from high-school algebra like x = 10 to mean something completely different.

At my university, everyone started with a Scheme course followed by a Java course. The Scheme course starts out functionally, but never mentions it. People didn't have too many problems there except with recursion, and most of them got that after a couple of lessons.

The Java course? Not so easy! People did have problems with if statements (why is it a statement and not an expression?). They also had problems with figuring out what was a reference in Java and what was passed by value--a lot of problems with that particular facet. In fact, the people without any programming experience before college had more problems with Java than with Scheme, despite the fact that the Java course came second.

Basically, you're projecting your years of experience with imperative programming onto what you imagine a beginner thinks like. I do not think this is particularly accurate--it certainly doesn't match what I've seen in practice.

For people with literally no programming experience, functional programming is not less natural at all. If anything, it makes more sense, because it fits with the minimal mathematical background everyone has.


I began programming with Caml. I found Turbo Pascal much easier to use. So I definitely began with functional programming.


None of the languages I listed feature purity, immutability, or monads. (Granted, Scheme and Standard ML do express a clear preference for immutability, but you can mutate all you want.)

And closures are hardly an uncommon feature at this point, I'd say. Even C++ and PHP have them.


Haskell is also very simple--not from an implementation standpoint but from a semantics standpoint. Having polymorphism with no sub-typing (and no casting) is conceptually simple and easy to work with. Parametric polymorphism (like Java's generics but simpler and less horrible) is actually an extremely simple concept. The difficulty comes from a) implementing it in a stupid way after the fact (cough Java) or b) having sub-typing. Neither is necessary!
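To make that concrete, here's a small hypothetical example (pairUp and its name are mine, not from the thread): one definition, type-checked once, works at every element type, with no casts anywhere.

```haskell
-- A parametrically polymorphic function. Because there is no
-- subtyping and no casting, all `pairUp` can do with the elements
-- is shuffle them around -- which is what makes the type so
-- informative: it works identically for every type `a`.
pairUp :: [a] -> [(a, a)]
pairUp (x : y : rest) = (x, y) : pairUp rest
pairUp _              = []
```

The same definition handles `pairUp [1,2,3,4]` and `pairUp "abcd"` with no generics boilerplate, and the compiler infers all the instantiations.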

In this day and age, semantics are far more important than implementation.

You can fit Haskell's evaluation rules and its typing rules on one page.

Haskell's syntax is also very simple and consistent. It has fewer constructs than most imperative languages--fewer constructs than anything short of Lisp. It just also happens to be much more flexible than other languages.

Moreover, much of Haskell's syntax is very transparent syntax sugar. You can easily desugar it in your head. It makes code nicer to read but does not add any real complexity because it trivially maps to a bunch of simple function calls.
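For instance, do-notation is pure sugar over (>>=). A sketch (using Maybe, since it needs nothing beyond the Prelude):

```haskell
-- These two definitions are the exact same program; the first is
-- simply what the second looks like after do-notation is desugared
-- in your head (or by the compiler).
sugared :: Maybe Int
sugared = do
  x <- Just 3
  y <- Just 4
  return (x + y)

desugared :: Maybe Int
desugared = Just 3 >>= \x -> (Just 4 >>= \y -> return (x + y))
```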

Most of Haskell is a very transparent layer over a typed lambda calculus. Lambda calculus is basically one of the simplest possible constructs. Ignoring the type system for a moment, it has literally three concepts: functions, variables and application. We then throw in some very straight-forward extensions like numbers, add a bit of syntax sugar and a type system.

The type system is also surprisingly simple. It has to be, for the inference to work! It's also very consistent in the way that is almost unique to mathematics. Consistency is pretty important.

This is where I shall bring up the "Simple Made Easy"[1] talk. It comes up a lot in these discussions, for a reason: most people mix the two up. I don't agree with all the points in the talk, but the core message is completely correct and very valuable.

[1]: http://www.infoq.com/presentations/Simple-Made-Easy

Simplicity is valuable. And Haskell, for all its being hard to learn, is simple.

IO is a great example here. Monads are difficult to learn, granted. But they are not complex. Rather, they are abstract. In fact, monads are extremely simple; the actual difficulty is twofold: it's not immediately obvious why they matter and they're too abstract to permit any analogies. Ultimately, a monad in Haskell is just any type with three simple functions that behave consistently--it's just an interface.
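Here is a hedged sketch of that "just an interface" claim, with hypothetical names (Perhaps, ret, bind) so as not to clash with the Prelude's Maybe:

```haskell
-- A monad really is just a type plus a couple of functions that
-- behave consistently. Here is the whole "interface", hand-rolled
-- for a made-up Maybe-like type.
data Perhaps a = Nope | Got a deriving (Show, Eq)

ret :: a -> Perhaps a                 -- wrap a plain value
ret = Got

bind :: Perhaps a -> (a -> Perhaps b) -> Perhaps b
bind Nope    _ = Nope                 -- failure short-circuits
bind (Got x) f = f x                  -- success feeds the next step

-- Chaining computations that can each fail:
safeDiv :: Int -> Int -> Perhaps Int
safeDiv _ 0 = Nope
safeDiv x y = Got (x `div` y)

example :: Perhaps Int
example = safeDiv 100 5 `bind` \q -> safeDiv q 2
```

Nothing in there is complex; the difficulty is only in seeing why this tiny interface is worth abstracting over.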

Go is not particularly simple; rather, it's easy. It's familiar. The syntax is more arbitrary, but it is C-like. The built-in constructs like loops are more complex and arbitrary (Haskell, after all, has no built-in iteration at all), but hey, it's C-like. The exposed features? Again, fairly arbitrary.

That's how I would sum up Go's design: arbitrary. And mostly C-like. Where C itself is pretty arbitrary. Especially from a semantics standpoint.

Essentially, Go has whatever the designers felt like adding. Just look at all the different ways you can write a for-loop! Or the fact that you have a loop at all. Haskell, on the other hand, has a deep and elegant underlying theory which ensures that different parts of the language are all consistent.
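To illustrate the "no built-in iteration" point (a sketch; sumSquares is my own name): what a C-style for-loop accumulates, Haskell expresses with ordinary library functions.

```haskell
-- There is no loop construct in the language; iteration is just
-- library functions over lists. The imperative
-- "for i = 0; i < n; i++ { sum += i*i }" becomes:
sumSquares :: Int -> Int
sumSquares n = sum (map (^ 2) [0 .. n - 1])
```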

Haskell is much less arbitrary. Most of the features naturally go together. Many are just generalizations or facets of the same concept. Even the complicated, advanced features like "type families" or "GADTs" are just fairly natural extensions of Haskell's basic concepts. It's very much akin to mathematical ideas, which have an elegance and consistency eluding most other languages.

Here's a particular example of how the features fit together: algebraic data types. Haskell essentially has two fundamental ways to create data types: you can combine fields into a record (like a struct) or you can have a choice (a tagged or disjoint union). The really neat bit? These aren't arbitrary--they're actually deeply related. In fact, they're duals of each other. Having both makes the most sense.
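A sketch of the two building blocks (the types and names here are hypothetical examples, not from the thread):

```haskell
-- Product type: a record combining fields ("this AND that").
data Point = Point { px :: Double, py :: Double } deriving (Show, Eq)

-- Sum type: a tagged choice between alternatives ("this OR that").
data Shape = Circle Double | Rect Double Double deriving (Show, Eq)

-- Pattern matching forces every alternative to be handled:
area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h
```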

It also gives you a much better way to signal errors. In Go, for whatever reason, errors are essentially built into the language as an implicit tuple. However, in practice, you either have a result or an error. If you have an error, the result is meaningless; if you have a result, you shouldn't have any error! So it makes much more sense to represent errors as a variant, a choice--a sum type. This lets Haskell avoid baking error handling into the language, making it simpler.
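For example, with the standard Either type you get *either* an error *or* a result, never a meaningless pair (parsePort is a made-up example of mine):

```haskell
-- Errors as a sum type: the caller is forced to handle both
-- alternatives, and an invalid result simply cannot exist.
parsePort :: String -> Either String Int
parsePort s = case reads s of
  [(n, "")] | n > 0 && n < 65536 -> Right n
  _                              -> Left ("invalid port: " ++ s)
```

Compare the Go convention, where nothing stops you from ignoring err and using the zero-valued result anyway.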

Haskell is as mixed-paradigm as the languages you listed. Those languages are imperative with some weak support for functional programming. Haskell is functional with some weak support for imperative programming. It's the same idea, inverted. Except Haskell can also support things like non-deterministic and logic programming. It's just that, for some reason, when people say "mixed-paradigm" what they really mean is "imperative with some functional support" and never "functional with some imperative support".

Sure, Haskell's syntax for mutable structures is awkward. But have you seen C#'s or Python's or even Go's syntax for functional programming? Compared to Haskell, it's just as awkward! And Haskell's "syntax" for mutable constructs is just a library; it can be improved. It just turns out that imperative features aren't useful enough for experienced Haskellers to warrant the improved syntax. (Also GHC sucks at optimizing sufficiently complex imperative code, I gather.)
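A sketch of the "mutation is just a library" point, using STRef from base (the function name is mine): a classic imperative accumulator loop, wrapped up into a pure function.

```haskell
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef, newSTRef, readSTRef)

-- Mutable variables come from a library (Data.STRef), not from the
-- language. runST guarantees the mutation can't leak out, so the
-- resulting function is observably pure.
sumTo :: Int -> Int
sumTo n = runST $ do
  acc <- newSTRef 0
  mapM_ (\i -> modifySTRef acc (+ i)) [1 .. n]
  readSTRef acc
```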

There's a nice example of what you can do on Lennart Augustsson's blog[2]. He essentially embedded a very C-like language into Haskell without using macros. So it's certainly possible, just not worth it.

[2]: http://augustss.blogspot.com/2007/08/programming-in-c-ummm-h...

So yes, perhaps Haskell will never be popular. But that's a social issue. It is not an issue of the language's qualities.

And it shouldn't stop you from using Haskell. At your startup. Hint, hint.


It is an issue of the language's qualities if it does not really make it easier to reason about code.

I find reason to question the simplicity of something which is widely acknowledged to take a lot of time to learn, to be mind-bending, and which seems to be impossible to explain simply - without deep theoretical background, academic citations or oversimplifications acknowledged as misleading.

I think it would be mature for the Haskell community to occasionally acknowledge a trade-off of the language. Haskell's flaws are not all "social issues." The virtues of survivors like C and LISP are not all "social issues".


Hrm, I have found the main benefit of Haskell is that it makes it far easier to reason about code. The separation of side effecting operations from non-side effecting is huge.
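That separation is visible in the types themselves. A minimal sketch (shout and logLine are hypothetical names of mine):

```haskell
import Data.Char (toUpper)

-- The type signature alone tells you which code can have effects:
shout :: String -> String      -- pure: no side effects possible here
shout s = map toUpper s ++ "!"

logLine :: String -> IO ()     -- IO in the type: effects allowed
logLine msg = putStrLn ("log: " ++ shout msg)
```

When reasoning about a bug, you can rule out everything with a pure type immediately; only the IO-typed code can have touched the outside world.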

Also I am very comfortable with monads and have never dug into the theoretical category theory side of it.

I bet you could be writing code in the IO monad within a day with some proper guidance. It really isn't hard at all.


Simple does not imply easy. As an extreme example, a unicycle is simpler than a bicycle--fewer components, simpler structure, no gearing--but also more difficult to learn.

Really, I'll just have to point you to the "Simple Made Easy" talk again. The core point being that there's a difference between something being "simple" and something being "easy", and we should generally strive for the former rather than the latter.

Having a deep theoretical foundation is also not a sign of complexity. Instead, like most of math, it's usually a sign of simplicity. After all, math always strives for elegance and simplicity.

What it means is that a lot of smart people have spent a lot of time thinking things through using a strict framework for reasoning that ensures everything is consistent. The theoretical framework lets us simplify by recasting different concepts using the same fundamental ideas. If we can capture things like state, errors and non-determinism using a single concept, we've made things simpler because now we have a common ground and relationship between seemingly disjoint ideas. This is exactly what Haskell (and the theory behind it) does.

This theoretical foundation, coupled with the relative simplicity and consistency of the language, actually make code much easier to reason about in Haskell than in other languages, except for some performance issues. Basically, as long as your main concern is in semantics--and, for 90% of your code, it is--Haskell makes life easier than any other language I know. You can manipulate the code purely algebraically, without worrying about what it does, and be content that the meaning remains the same.

Having well designed libraries with actual algebraic laws governing their behavior, a powerful type system and very transparent syntactic sugar is what makes the code particularly easy to reason about. A simple, elegant semantics also really helps. You can really see the influences of a good denotational semantics when using the language.

Now, reasoning about performance is sometimes an issue. It's certainly reasonably hard without additional tooling. Happily, there are some nice tools like criterion[1] to make life easier for you.

[1]: http://www.serpentine.com/blog/2009/09/29/criterion-a-new-be...

Also, the Haskell community does acknowledge trade-offs. They're just not the same trade-offs that people who don't know Haskell lambast. Which should not be a surprise--you can't expect somebody who hasn't really learned Haskell or its underlying ideas to have a thorough idea of what its real problems (or even its real advantages) are.


Watching the video now...

It seems beautiful, enlightening and wrong.

It might be described as a powerful statement of software idealism. Essentially: start simple and stay there; the problems, the mess, the mythical man-months, etc. all come because the developers refused the effort needed for simplicity and impatiently descended into the swamp of complexity.

I too, love starting simple and usually intend to stay there.

But the problem, I would suggest, is that complexity will build up, and simplicity--the simple methods you've learned, simplicity as such--can't fight this build-up. If being simple COULD put an end to complex situations, you wouldn't have to START simple; you could use simplicity to "drain the swamp of the complex". But every methodology more or less says that you have to get to the top of its mountain and stay there (except original OO, and we know how well that worked).

My contention is that this "mountain dwelling" is only possible at times, in some domains, in some organizations, etc. Humans can, at times, carve simplicity out of the swamp of complexity. But it isn't easy and it isn't a product of any fixed set of simple tools we human have come up with so-far.

Mr. Hickey's viewpoint might be useful for selling simplicity, and I would be willing to use it if I thought simplicity would be a good buy for my organization. But the reality is that trade-offs never go away. Sometimes people overestimate the value of short-term payoffs, but sometimes people overestimate the value of long-term payoffs. The one thing that I think I want to keep here is the clear, simple distinction between "ease" and "simplicity". It's useful even if it might not be entirely true.


That's not fair. Remember how long it took to learn how to program for the first time? Haskell is so different from imperative programming you should approach it like that.


>It is an issue of the language's qualities if it does not really make it easier to reason about code.

The primary point of haskell is making it easier to reason about your code.

>I find reason to question the simplicity of something which is widely acknowledged to take a lot of time to learn, to be mind-bending, and which seems to be impossible to explain simply

It takes a long time to learn any programming language. You create an invalid comparison when you compare learning language X++ after already learning X to learning language Y++ without having learned language Y. Haskell only takes longer to learn if you compare it to learning a language that is virtually identical semantically to a language you already know. And I don't know why you think it is impossible to explain haskell simply, there's a reason everyone points to learnyouahaskell.com when people ask how to learn haskell.

>The virtues of survivors like C and LISP are not all "social issues".

How is lisp a "survivor" exactly? Haskell is more widely used than any lisp is.


The TIOBE index has Lisp in the top 20 (at 13th) while Haskell is at 33rd, so Lisp is more popular than Haskell even though Lisp is over half a century old. Being in the top twenty after 50 years looks like the very definition of "survivor" to me.


Lisp isn't a language, it is a whole bunch of languages. Lumping half a dozen languages together obviously moves it up the list. Being old is working in its favour, not against it. Older languages have more written about them purely because of the time they've existed. Pick a specific lisp and try your comparison again.


> You can fit Haskell's evaluation rules and its typing rules on one page.

Evaluation rules yes, but typing rules? Once you add in features like records, GADTs, type classes, functional dependencies, type functions, equality constraints, associated types ... you end up with quite a complicated system. Maybe you can state it on less than a page if you use a small enough font, but the system is complex. In contrast, C semantics might be large, but they're not complex. Unlike with Haskell's type system, there are no difficult interactions among all the features.

That's one of the reasons people are investigating dependently typed languages. They can offer a simpler and more powerful type system.

In addition to this, the language isn't even the most difficult part. So much of the difficulty is in learning the libraries and concepts associated with the libraries (functors, applicative functors, monads, iteratees, zippers, arrows, etc.). This may be further along the "hard" axis than the "complex" axis, but it's definitely not simple either.


Actually, your examples serve to counter your argument. All of the things you mentioned are very simple, and are in fact implemented in terms of the core language semantics. They are also non-standard extensions, not part of haskell. You do not need to know them or use them at all.

Your final part is just plain nonsense. That is like claiming C is complex because you need to learn things like hash tables and linked lists and binary trees.


Define simple. You're appealing to the Turing tar pit argument. The fact that some core language is simple, doesn't mean that the language is simple in a practical situation. That applies to language constructs specified in terms of the core language and even more so to the libraries. We can define Common Lisp in terms of a small core language, heck we can even consider it a library of macros. That makes all the constructs in Common Lisp "just like" hash tables by your classification. Does that mean that Common Lisp is simple? Of course not. You have to consider what has to be learned in practice.

If you think that e.g. the interaction between GADTs and functional dependencies is simple, that's crazy. These things may not be part of Haskell98, but they are part of Haskell from a practical viewpoint, and many libraries make use of these extensions. You will have to learn it if you want to do serious work in Haskell. The same applies to the library concepts.


You are simply making shit up at this point. You absolutely do not, ever, under any circumstances, need to learn or use GADTs or functional dependencies. That is complete and total bullshit. Using a library that uses those features does not require you to learn them, that is the entire point of a library, to hide that from the user of the library. And yes, common lisp is a simple language.


Haha okay, I suppose if you think Common Lisp is simple, then Haskell is simple too. Most people however, consider Common Lisp the opposite of simple. As for the concepts that I mentioned, you explained it yourself very clearly in another comment of yours:

> You need to understand monads to do anything beyond trivial exercises. It is something that virtually every single person coming to haskell from another language is unfamiliar with. I don't see how a focus on such a fundamental aspect of the language is a bad thing. -- http://news.ycombinator.com/item?id=5326342


If you think common lisp is complex, you have no idea what the words complex and simple mean. I know you have to know monads. That does not make haskell complex. Just like needing to know hash tables doesn't make C complex.


Here's my challenge, echoing a comment further down: If you want to convert folks to Haskell, write something useful in it. Then people might actually be interested in learning more about it. That's the only way you'll get converts, not writing boring, condescending lectures. My programming language prof tried to use the entire course to indoctrinate students in Haskell. He failed. None of his ramblings about how "pure" Mondads or such and such was or any of his homework assignments ever convinced us that Haskell was a better way. I don't expect you to get much further. For me, the rub was that he never showed a real world application. I took that as proof that the whole language is asinine, and that the claim that it gets rid of the dreaded "state" was bs. Another thing that always irked me: pretending that mathematics has nothing to say about "state". That's pretty funny coming from Haskell fans, who like to fancy themselves mathematicians.


Just browse hackage. http://hackage.haskell.org/package/simpleirc is a good starter.


Okay, it sounds like your course managed to sour you on Haskell without teaching you anything. Your entire tirade feels like a straw man born from ignorance. You even managed to misspell "monad".

You're simply not in my audience at all--you have too much of a predisposition against Haskell. It's not worth trying to convince you, or anybody similarly biased, because there are so many other people willing to hear me out.

So yes, maybe I won't get any further than your professor. No big loss.

Anyhow, why do you think Haskell--the language with libraries dedicated to managing state--pretends that state doesn't exist? If anything, Haskell is the only language that takes any sort of mathematical approach for modelling (and thus managing) state at all!

There's a reason why some of the most progressive and mathematically sound ways of dealing with state--my favorite example is functional reactive programming--take root in Haskell. If all you want are mutable references and data structures, we have that too. Cleverly integrated with the type system, to boot. We even have some of the best concurrency features (which are naturally based on mutable state!) like STM. STM that's not only actually usable but actually easy.

As for software written in Haskell? There's already plenty. Pandoc is simply the best in its class--I don't think it has any real competition, even. XMonad is a great window manager. Darcs is a dvcs that existed before Git took off, and has a clever model. I use Hakyll for my website, as do some prolific HNers like gwern, and it's great. Gitit is a nice, lightweight wiki. Git-annex helps you manage files on top of Git. The backend for DeTeXify, which everyone using (La)TeX should be familiar with, is written in Haskell.

And these are just the things I could think of from the top of my head, mainly things I personally use.

All these are practical utilities that you might use. If you're willing to look further afield, there are all sorts of more specific tools like Agda and a host of DSLs for everything from SMT solvers to realtime embedded programming.

Then there are the rich and relatively impressive web frameworks like Snap, Yesod and Happstack. Yesod in particular is very fully featured and usable; it has some very cool sites built on top of it, including the recently released School of Haskell.

What about stuff I'm personally working on? If you're playing around with the GreenArrays chip, I currently have a simple simulator for the F18A instruction set as well as a simple system for synthesizing F18A code using a randomized hill-climbing algorithm. Unfortunately, both are currently limited to one core, but that should be easy to fix. I was also working on a DSL for generating F18A code, but that fell by the wayside recently.

So clearly people are writing tons of useful software in Haskell. And people are using it. But that obviously won't satisfy you. Which, as I said above, is fine.

But if you're actually somebody else--preferably either a startup founder or somebody with control over what technology to use--you should definitely give Haskell a whirl!


There's plenty of useful software written in haskell. Why is this particular nonsense so commonly repeated with haskell? Just because you don't bother looking at the software written in haskell, doesn't mean it doesn't exist.


hmmm - your measure of "complexity" is not the consensus one.

Scala has a lot of machinery in its type system, and Haskell has very little. LISP has even less.

Scala may have been inspired by Haskell in some ways, but the experience of writing it is very different.


you might prefer ocaml (i do) - it has a lot of the functional power of haskell, but is comfortably multi-paradigm, and in particular pushes neither purity nor laziness upon you. also, the ocaml community has been considerably revitalised in the last year or so, so now is a good time to get into it


I understand F# is in the ocaml family. Any reason why one would prefer ocaml over fsharp?


There are several reasons why one would prefer OCaml over F#:

- OCaml is not tied to .NET. Obviously the converse also applies: if you need .NET integration then F# might be a better choice.

- F# does not support some of the more advanced features of the OCaml type system, like polymorphic variants (open unions) and functors. These are very useful OCaml features, IMHO.

- OCaml has gained some very interesting features since F# branched off, which again are not present in F#: modules as first class values, GADTs, and better control over module signatures.

Overall, my recommendation is that unless you are really tied to .NET ecosystem, then OCaml is a more interesting language to learn.


A few counterpoints in favor of F#:

- Even if your project doesn't need .NET integration, it's nice to have all of the .NET libraries (built-in or otherwise) at your disposal. OCaml also has some nice libraries, but nowhere near what .NET (or Java, for Scala) has -- so it's likely you'll have to implement your own library for some task if it's outside of the mainstream.

- F# has also gained some interesting features since it branched away from OCaml -- computation expressions (syntactic sugar for monads) and type providers (automatically-generated, strongly-typed interfaces to databases, JSON APIs, etc.), for example.

- .NET has a mature and very fast garbage collector tuned for real world usage (e.g., people running C#/ASP.NET on large webservers). This hugely benefits F#'s performance.

- You can use Visual Studio or MonoDevelop to write and debug F# apps, which makes it much nicer to use in practice. (Oh, and you can get Visual Studio Express for Web if you want to use VS and F# for free.)

I don't have anything against OCaml, I just wanted to point out that F# certainly deserves consideration if you want to learn a new language (i.e., it's not a knock-off version of OCaml).


On the other hand, if you can use .NET, F# is a more practical language to learn. The F# standard libraries and .NET libraries and IDE are miles ahead of anything OCaml has, which in practice far outweighs anything else. If you're picking a language to learn new language concepts, I'd definitely pick OCaml over F# for its modules alone.


You can write simple Scala if you use case classes, type inference, wildcards, optional parens/periods, returning "this" to make little point-free method chains, etc. You also need to avoid parameterized types and many corners of the type system that go with them. Look in Subramaniam's Pragmatic Press book on Scala and you'll see lots of simple code that any Ruby/PHP dev can grok, for analytics, sysadmin, etc.

Now it's true that a lot of people don't choose to write simple Scala. Scala has a lot of features that other language designers know they need. Look how proud Hejlsberg was of co- and contravariance in C# 4.0 (I don't have a video waypoint; it's about 2/3 of the way through):

http://channel9.msdn.com/Blogs/matthijs/C-40-and-beyond-by-A...

(Also look at Learn You a Haskell for pretty straightforward Haskell code -- straightforward because he didn't have page-space to go into a lot of GHC extensions and advanced stuff.)


Also, this sentiment is popularized as worse-is-better: http://www.jwz.org/doc/worse-is-better.html


Comparing Lisp to C isn't fair. Both are arguably difficult to master, but one took off because chip manufacturers decided to optimize for it.


>I see parallels with Haskell

You are seeing things.

>I'm not saying this is wrong. What I am saying is that the gap between making something 99% consistent and 100% consistent is often huge in terms of complexity.

Haskell is a very simple, very consistent language. That is one of its biggest strengths.

>The answer is (again, IMHO) is that simplicity matters.

Except that lisp is not complex. Especially consider scheme, which is very simple and consistent.

>It's true functional programming does change the way you think but I think writing idiomatic Python or Ruby will get you much the same benefit at much lower cost.

Having gone from "I see the benefits of functional programming, but think a multi-paradigm language is the right way to do it" to outright revulsion when stuck using multi-paradigm languages, I think you might change your mind if you actually tried it. Ocaml and scala certainly sold me on haskell.


Oh, a bracket-style programmer (Java / C#) spitting his poison about Scala, Haskell and Lisp dialects all in one comment and somehow this gets voted up.

I'm calling appeal to authority now. No, not people with lots of rep on SO for answering Java and C# questions.

I'm calling appeal to pg authority.

This is ycombinator.com FFS!

Go read "Beating the average", "What made Lisp different", etc. There are many very insightful things in these essays.

After you've read them, you won't be upvoting these "I hate everything non-bracket style / your language is a toy because it's not the most successful one" comments. Actually, after reading them you'll be downvoting these poisonous words...


I love haskell but I wish there was less blogging about why everyone else should use it and more building things that will make people want to use it.

The community is extraordinarily friendly to beginners, as long as you only count how they interact with beginners. If you count having resources that allow beginners to figure something out without having to ask then they are extraordinarily unfriendly.

If I find a potentially interesting library on hackage I'm no longer surprised to find it has no documentation at all, certainly no usage documentation or explanation of what it actually does or what it should be used for. Just code and autogenerated API docs. I'm surprised if there is anything that would allow me to actually use it. I'm not often surprised this way.

That and the wild west, "not invented here", multiple incompatible implementations of new ideas used in different important libraries (iteratees at the moment).

Not trying to throw blame around of course, but people probably don't need your "I have a new metaphor for monads" blog post or an argument for why it's worth learning. People need more posts like the 24 Days of Hackage(http://ocharles.org.uk/blog/) series and the Real World Haskell book and they need some brave person to try and convince those amazing library developers to standardize on an imperfect solution a little sooner and for a little longer (only in public of course).


I think generally the advice to beginners is to ignore most of Hackage and focus on the Haskell platform—which is frankly kind of terrible advice. It's intended to avoid things like the enumerator/iteratee/conduit/pipes madness and focus on a stable library interface, but almost all of the fun is being developed on the bleeding edge of Hackage.

I'd agree completely that this is a deficiency with the Haskell community, but I also don't see it stopping until there's a larger critical mass of new users—a chicken and egg problem, certainly.


I don't think it's a chicken and egg problem quite. It's true that the types of people that do these kinds of things arrive when you have critical mass, but that's just statistics.

I think you can recruit people to do this kind of thing, or at least encourage people to do so, but it would require community management and such which doesn't really exist in the haskell community. If I felt cruel I would nominate dons to do it, since it's so easy to want to give more work to the guy who's already doing a ridiculous amount of good work.


Why does this crap always get posted, and always manage to float instead of being downvoted? You know what other programming language community blogs about their language and tools? ALL OF THEM. Why do people suddenly insist it is a problem that haskell users do the same things everyone else does?

If every time something about ruby showed up on HN someone piped up to say "stop talking about ruby and actually build things people want" they would be rightly downvoted to shit.

The rest of your post is the same sort of nonsense. Some open source code has no documentation?! What a shocker, I'm sure no other programming language has ever had any undocumented modules released in it before. The bitching about iteratees is particularly disingenuous. You don't need to use any of the competing solutions, other languages have no similar library at all, and the entire thing is very new. "Hurry up and standardize your cutting edge libraries that you just wrote and are still exploring" is both insulting and stupid.


> You know what other programming language community blogs about their language and tools? ALL OF THEM

I don't see how that's a response to my complaint about there not being many haskell blog posts about languages and tools that aren't monad tutorials or "why don't more people use haskell". My entire point was wanting blogs about language and tools, for fuck's sake.

> Some open source code has no documentation?! What a shocker,

Since I said that the levels of documentation on hackage are much lower than the equivalent for every other language I use I'm again not sure what you are responding to. So yeah, a lot of open source code has no documentation. No shit. And the communities/languages that have more documentation have more users. Hackage is awful in this respect relative to other language communities.

The rest is just pointing out that a better haskell platform level of release management would let more people use Haskell productively, it wasn't aimed anywhere near the writers of any of the iteratees libraries.

Skimming over a comment and responding to what you wanted me to say could be considered "insulting and stupid".


>My entire point was wanting blogs about language and tools for fucks sake.

So go read them instead of making nonsense posts?

>Since I said that the levels of documentation on hackage are much lower than the equivalent for every other language

Bullshit. Go look at the docs for random 3rd party modules in any language. Tons of modules are totally undocumented.

Again, nothing you said is in any way reasonable. Everything you said applies equally to any other language, and would be rightfully downvoted in that context. It is sad that HN lets your turds float.


Haskell is just playing a different game. Clojure/Scala/F# might also be playing that game or something somewhat like it, but until you start to realize just how relaxing pure, statically checked code is you're coding in a tar pit.

Even the IO system is better because you think of it as nothing more than a way to manipulate and combine a kind of pure data corresponding to sequences of "real world" actions.
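A minimal sketch of that way of thinking (the name `greetings` is mine): IO actions are ordinary values you can store in a list and inspect; nothing runs until the list is sequenced.

```haskell
-- A list of IO actions is just data: we can count or reorder
-- the actions without anything being printed.
greetings :: [IO ()]
greetings = map putStrLn ["hello", "world"]

main :: IO ()
main = sequence_ greetings  -- only now do the actions run, in order
</imports>
```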


I was told to learn me a Haskell for great good. Is that not enough for people anymore?


I started with LYAH and loved the first 6 chapters. Got bogged down on the 7th because it was more abstract. Now I'm reading both LYAH and Real World Haskell, and reading both makes a great combination. If I get bogged down in one, the other keeps my momentum going.


LYAH is a good introduction, and is probably enough for a lot of people. Real world haskell adds more practical real world examples, and some people would benefit from reading it also.


I was thinking about this the other day and I think it generalizes beyond any specific language. I'm pretty sure every time I have learned a new programming language I came away a better programmer. Different programming languages (and their standard libraries) suggest that you approach problems in different ways, and that has real cognitive benefits.

I was actually trying to think if there was ever a case where I had spent time learning a language (or library) that didn't pay some sort of dividend in terms of increasing my skill as a programmer.


The point about learning Haskell in particular is that it's a wildly different language. It's a single language that gives you exposure to a lot of new ideas: purity, laziness, modern type systems, functors, monads, monoids, etc.
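Laziness is the easiest of those to show in a couple of lines. A minimal sketch (the name `evens` is mine): an infinite list is perfectly fine as long as you only demand a finite prefix of it.

```haskell
-- Laziness in two lines: 'evens' is conceptually infinite,
-- but only the demanded prefix is ever computed.
evens :: [Int]
evens = filter even [1 ..]

main :: IO ()
main = print (take 5 evens)  -- [2,4,6,8,10]
```

In a strict language this definition would diverge; here, `take 5` drives exactly five steps of the filter.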


The massive difference, though, gives people like me -- who've never done anything like Haskell before and don't properly comprehend it -- some real trouble. It's similar to assembler. At first, you don't understand how the code works, how the logic is implemented, and everything just seems strange. Then you start looking at it as separate commands to the processor and suddenly you understand. Suddenly MOV r1,0x90 / start: MOV A,#0x04 / INC A / DJNZ r1,start makes sense to you.

Sorry, I started fantasizing about Assembly again, my actual question was: "If you could find a metaphor that describes how Haskell works with Data, what'd be that metaphor?"


All of these exist or can be implemented in other major functional languages.


There's not much value in learning programming languages that are similar. If you learn Ruby, you already know pretty much everything there is to know about Python. If you learn Java, you already know much about C#. The only real value you can get out of learning new mainstream languages is from learning the architecture of popular libraries. Like when I was inspired a few years ago to do database migrations after I played around with Rails.

So you end up learning from the libraries written in those languages, not the languages themselves. It isn't a bad outcome, especially since learning a new language that's similar to something you already know becomes second nature and doesn't take too long. It takes me something like 2 weeks to learn a new language.

Haskell is on a whole new level though. I don't know Haskell, but in the last couple of months I've been building stuff with Scala and lots of functional programming related idioms, both for personal stuff and for our startup and I've learned more than I learned in the last 5 years. There's lots of good stuff to learn in the Scala ecosystem, like how to build rock solid pieces of functionality with referential transparency, how to work with Futures/Promise, how to do concurrency with shared-transactional memory, how to do non-blocking and safe I/O using Iteratees, how functional data-structures are designed and so on.

Of course, Scala is tainted by its hybrid and JVM-related nature. It's much too easy to cheat when cheating is easier than the referentially transparent alternative, and even if your interface is pure, that interface is probably wrapping something which isn't pure. In practice it works great for building stuff, but unfortunately I'll have to take the plunge and learn Haskell to be able to take my knowledge to the next level.

One example where Scala falls short is in its implementation of lazy data structures. You see, even though Scala does have really good lazy data structures, it cheats here and there because it makes sense to do so ... like for instance a Stream is a lazy list and it does behave lazily, except when it doesn't. sort() on a lazy stream is actually implemented eagerly, which means algorithms using sort() will not have the complexity characteristics of lazy algorithms. There's nothing stopping one from implementing a lazy sort() in Scala, but it's probably not going to have any benefits so people don't do that. That's actually the reason why I want to learn Haskell, because the best implementations for lazy data-structures and algorithms are written in Haskell.
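For contrast, a minimal sketch of what lazy sorting buys you in Haskell (the name `smallest3` is mine). With GHC's lazy merge sort, taking a small prefix of a sorted list only forces enough merging to produce those elements, rather than sorting the whole list up front:

```haskell
import Data.List (sort)

-- take k . sort only does the work needed to produce the
-- first k elements, thanks to the laziness of GHC's sort.
smallest3 :: [Int] -> [Int]
smallest3 = take 3 . sort

main :: IO ()
main = print (smallest3 [9, 1, 8, 2, 7, 3])  -- [1,2,3]
```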

Many idioms that people use now in Scala or in other languages, like iteratees or type classes, were used in Haskell years earlier.


I agree that there is pedagogical value in forcing yourself to write programs using only immutable data structures, but I don't think it's a restraint we should impose on ourselves for real world day-to-day programming. You aren't "cheating" if you've evaluated several ways of writing a program and you really think mutable local state is the best way to do it.

Also lazy data structures are built into many mainstream programming languages. Python generators are the first example that comes to mind. I think you are probably excited about Haskell because it incorporates lazy evaluation which is a separate concept.


Not only is that not cheating, it's actually completely reasonable in Haskell too! You can use ST to have as much mutable state as you like; the awesome thing is that the state is guaranteed not to leak--from the outside, the function is just as deterministic as one that doesn't use mutable references and data structures.
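A minimal sketch of ST in action (the function name `sumST` is mine): the loop below mutates a real reference, yet the function's type is pure, and `runST` guarantees the mutation cannot escape.

```haskell
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- Internally imperative, externally pure: callers cannot
-- observe the mutable accumulator.
sumST :: [Int] -> Int
sumST xs = runST $ do
  ref <- newSTRef 0
  mapM_ (\x -> modifySTRef' ref (+ x)) xs
  readSTRef ref

main :: IO ()
main = print (sumST [1 .. 10])  -- 55
```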

The big idea about Haskell isn't to avoid all state. Not at all. The big idea is to control state. Haskell expects you to use state if it makes the logic clearer or the code much faster.

What you do not do in Haskell is use state for program organization. You avoid global state as much as possible. This means that communication between disparate parts of your program are explicit--you don't have implicit dependencies everywhere. This is definitely a good thing.

I think one of the problems is that people without much experience in Haskell interpret "purely functional" too strictly. It just means you do not have state by default; you can still use it where it makes sense.

The good thing about not having it by default is that it forces your program to be simpler and more decoupled and it lets the compiler and libraries make many more assumptions about your code. All sorts of awesome features like STM and data parallel Haskell depend on having this sort of control over side-effects. But that's all it is--control. You can still have state and side effects; however, unlike most other languages, you can also not have them.
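STM is the clearest payoff of that control over side effects. A minimal sketch, assuming the `stm` package (the function name `transfer` is mine): the transaction either commits as a whole or retries, with no locks in user code.

```haskell
import Control.Concurrent.STM

-- An atomic transfer between two accounts: because STM actions
-- are guaranteed side-effect-free, the runtime can safely retry
-- the whole block on conflict.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  modifyTVar' from (subtract amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  x <- readTVarIO a
  y <- readTVarIO b
  print (x, y)  -- (70,30)
```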


On immutable data structures, a language like Scala makes it really practical to use only immutable data structures in daily programming. My own code is full of immutable data-structures, except in instances in which I have to process stuff returned by Java libraries and even in those cases I wrap them in Scala's immutable interfaces (note, I'm talking about Scala and not Haskell only because it's the language I know).

The only problem is that many times you need mutable references to immutable data-structures, with no easy way out, and this is where you can still have problems with side-effects and multi-threading. But dealing with such references is easier, because (a) you tend to do it less and less often and (b) you can use STM, or just plain atomic references or @volatiles, as in many instances you only need to ensure the visibility of updates and you really need locks only on writes. Also, I tend to encapsulate and carefully hide the pieces of code that deal with mutability.

A good standard library of immutable data-structures is very practical for day to day use. I wasn't talking about immutable data-structures though, as the data-structures themselves are only a small part of the puzzle. Much more difficult is to deal with inter-modules communications, processing streams of data, I/O and stuff like that. And that's also where things break and where I feel the urge to look for better techniques.

On lazy data structures, Python is an awful example.


> I'm pretty sure every time I have learned a new programming language I came away a better programmer.

You might want to think a little bit about the fact that you are predisposed to believing that your efforts are not wasted.

If you spend the time to learn a new programming language, you're then predisposed to believing that it wasn't a waste of time, and are thus susceptible to confirmation bias, which causes you to ignore all evidence to the contrary.


I investigated Haskell for a while and as long as I didn't try to add any libs it was fun. However once I started using cabal to try and manage dependencies all hell broke loose. Obviously it takes a while to get it right (looking at you SBT and Ruby gems) but Haskell has been around forever.


I'm writing 3 different web apps, plus some command line tools in haskell. I have been working on these projects for a year now, and I've never seen any cabal problems that aren't typical problems that occur with any package manager (trying to install conflicting modules).

    $ ghc-pkg list | wc -l
         182


I felt that way up until last week! I somehow felt I was special, that cabal-hellfires had burnt a ring around me. Then next thing I knew I had a 4 page cabal conflict problem and had to just toast .ghc and .cabal and start over.

It's a real problem.


you could look into hsenv and cabal-dev.

https://github.com/cakesolutions/the-pragmatic-haskeller

http://bob.ippoli.to/archives/2013/01/11/getting-started-wit...

Also, problems in the haskell ecosystem don't stay problems for long, this one is the longer end of the tail

http://alpmestan.com/posts/2012-11-02-cabal-hackage-what-you...


I do use .hsenv a lot, but for simple playing around with libraries (where I'm also most likely to hit major version overlaps) I liked to keep installing packages into the user level database.


use Cabal-Dev!

Also, the next major release of cabal should have a lot of the cabal-dev machinery baked in. There's also active brainstorming by the GHC dev folks on how to make things better on the compiler side too.


Your response makes it sound like you just had a simple problem like you get with any language, and followed bad advice on solving it based on the assumption that "cabal is just bad and you have to do this". The big give away is that you didn't need to delete your .cabal directory, advice to do so is coming from someone who doesn't know the basics of how cabal works. It is just storing downloaded sources, all deleting it accomplishes is making you re-download stuff for no reason. The installed packages are in your .ghc directory. Cabal is no more broken than cpan.


You're right that I'm not very fluent on how Cabal works. It worked well enough that I didn't really need to until recently. I'm not claiming either that CPAN is better than Cabal. Could versioning be done better? Probably. I don't even claim to know enough to have a competent suggestion as to how, though.

Doesn't mean things don't break, though.


>Doesn't mean things don't break, though.

Right. The issue is that cabal has unfairly developed a reputation as being broken. People always say stuff like "every other language has this sorted out, why is haskell so bad at it?". When the reality is that the problems you get with cabal are the same ones you get with cpan or pip or anything else.


They're slightly exacerbated in that Cabal/GHC/Haskell type checks and thus has a higher likelihood of discovering mismatches.


I don't want to turn this into Stack Exchange, and this may be Homebrew's fault on OS X, but I just tried installing 'haskell-platform' and got:

  Error:
  Building the QuickCheck-2.5.1.1 package failed
  make: *** [build.stamp] Error 2
OK, I'll install the package from haskell.org instead, which instructs me: 'To upgrade, run: cabal install cabal-install':

  ...

  cabal: Error: some packages failed to install:
  cabal-install-1.16.0.2 failed during the building phase. The exception was:
  ExitFailure 1
Alright, maybe I don't really need the update, let's try yesod:

  cabal: Error: some packages failed to install:
  ReadArgs-1.2.1 depends on system-filepath-0.4.7 which failed to install.
  aeson-0.6.1.0 depends on hashable-1.2.0.5 which failed to install.
  asn1-data-0.7.1 failed during the building phase. The exception was:
  ExitFailure 1
  attoparsec-0.10.4.0 failed during the building phase. The exception was:
  ExitFailure 1
  attoparsec-conduit-1.0.0 depends on transformers-base-0.4.1 which failed to
  install.
  authenticate-1.3.2.6 depends on zlib-bindings-0.1.1.3 which failed to install.
  base64-conduit-1.0.0 depends on transformers-base-0.4.1 which failed to
  install.
  basic-prelude-0.3.4.0 depends on transformers-base-0.4.1 which failed to
  install.
  blaze-builder-0.3.1.0 failed during the building phase. The exception was:
  ExitFailure 1
  blaze-builder-conduit-1.0.0 depends on transformers-base-0.4.1 which failed to
  install.
  blaze-html-0.6.0.0 depends on blaze-builder-0.3.1.0 which failed to install.
  blaze-markup-0.5.1.4 depends on blaze-builder-0.3.1.0 which failed to install.
  case-insensitive-1.0 depends on hashable-1.2.0.5 which failed to install.
  certificate-1.3.5 depends on crypto-api-0.11 which failed to install.
  classy-prelude-0.5.3 depends on transformers-base-0.4.1 which failed to
  install.
  clientsession-0.8.1 depends on crypto-api-0.11 which failed to install.
  conduit-1.0.2 depends on transformers-base-0.4.1 which failed to install.
  cookie-0.4.0.1 depends on blaze-builder-0.3.1.0 which failed to install.
  cprng-aes-0.3.4 depends on crypto-api-0.11 which failed to install.
  crypto-api-0.11 failed during the building phase. The exception was:
  ExitFailure 1
  crypto-conduit-0.5.0 depends on transformers-base-0.4.1 which failed to
  install.
  crypto-pubkey-0.1.2 depends on crypto-api-0.11 which failed to install.
  cryptohash-0.8.3 depends on crypto-api-0.11 which failed to install.
  css-text-0.1.1 depends on attoparsec-0.10.4.0 which failed to install.
  email-validate-1.0.0 depends on attoparsec-0.10.4.0 which failed to install.
  failure-0.2.0.1 failed during the building phase. The exception was:
  ExitFailure 1
  fast-logger-0.3.1 depends on blaze-builder-0.3.1.0 which failed to install.
  filesystem-conduit-1.0.0 depends on transformers-base-0.4.1 which failed to
  install.
  fsnotify-0.0.6 depends on system-filepath-0.4.7 which failed to install.
  hamlet-1.1.6.3 depends on shakespeare-1.0.3.1 which failed to install.
  hashable-1.2.0.5 failed during the building phase. The exception was:
  ExitFailure 1
  hfsevents-0.1.3 failed during the building phase. The exception was:
  ExitFailure 1
  hjsmin-0.1.4.1 depends on blaze-builder-0.3.1.0 which failed to install.
  hspec-1.4.4 failed during the building phase. The exception was:
  ExitFailure 1
  html-conduit-1.1.0 depends on xml-types-0.3.3 which failed to install.
  http-conduit-1.9.0 depends on zlib-bindings-0.1.1.3 which failed to install.
  http-date-0.0.4 depends on attoparsec-0.10.4.0 which failed to install.
  http-reverse-proxy-0.1.1.3 depends on zlib-bindings-0.1.1.3 which failed to
  install.
  http-types-0.8.0 depends on hashable-1.2.0.5 which failed to install.
  language-javascript-0.5.7 depends on blaze-builder-0.3.1.0 which failed to
  install.
  lifted-base-0.2.0.2 depends on transformers-base-0.4.1 which failed to
  install.
  mime-mail-0.4.1.2 depends on blaze-builder-0.3.1.0 which failed to install.
  mime-types-0.1.0.3 failed during the building phase. The exception was:
  ExitFailure 1
  monad-control-0.3.1.4 depends on transformers-base-0.4.1 which failed to
  install.
  monad-logger-0.3.0.1 depends on transformers-base-0.4.1 which failed to
  install.
  network-conduit-1.0.0 depends on transformers-base-0.4.1 which failed to
  install.
  optparse-applicative-0.5.2.1 failed during the building phase. The exception
  was:
  ExitFailure 1
  path-pieces-0.1.2 failed during the building phase. The exception was:
  ExitFailure 1
  pem-0.1.2 depends on attoparsec-0.10.4.0 which failed to install.
  persistent-1.1.5.1 depends on transformers-base-0.4.1 which failed to install.
  persistent-template-1.1.2.4 depends on transformers-base-0.4.1 which failed to
  install.
  pool-conduit-0.1.1 depends on transformers-base-0.4.1 which failed to install.
  project-template-0.1.3 depends on transformers-base-0.4.1 which failed to
  install.
  publicsuffixlist-0.0.3 failed during the building phase. The exception was:
  ExitFailure 1
  pureMD5-2.1.2.1 depends on crypto-api-0.11 which failed to install.
  pwstore-fast-2.3 depends on crypto-api-0.11 which failed to install.
  resource-pool-0.2.1.1 depends on transformers-base-0.4.1 which failed to
  install.
  resourcet-0.4.5 depends on transformers-base-0.4.1 which failed to install.
  shakespeare-1.0.3.1 failed during the building phase. The exception was:
  ExitFailure 1
  shakespeare-css-1.0.3 depends on shakespeare-1.0.3.1 which failed to install.
  shakespeare-i18n-1.0.0.2 depends on shakespeare-1.0.3.1 which failed to
  install.
  shakespeare-js-1.1.2.1 depends on shakespeare-1.0.3.1 which failed to install.
  shakespeare-text-1.0.0.5 depends on shakespeare-1.0.3.1 which failed to
  install.
  simple-sendfile-0.2.11 failed during the building phase. The exception was:
  ExitFailure 1
  skein-0.1.0.12 depends on crypto-api-0.11 which failed to install.
  socks-0.5.0 failed during the building phase. The exception was:
  ExitFailure 1
  system-fileio-0.3.11 depends on system-filepath-0.4.7 which failed to install.
  system-filepath-0.4.7 failed during the building phase. The exception was:
  ExitFailure 1
  tagsoup-0.12.8 failed during the building phase. The exception was:
  ExitFailure 1
  tagstream-conduit-0.5.4 depends on transformers-base-0.4.1 which failed to
  install.
  tls-1.1.2 depends on crypto-api-0.11 which failed to install.
  tls-extra-0.6.1 depends on crypto-api-0.11 which failed to install.
  transformers-base-0.4.1 failed during the building phase. The exception was:
  ExitFailure 1
  unordered-containers-0.2.3.0 depends on hashable-1.2.0.5 which failed to
  install.
  vault-0.2.0.4 depends on hashable-1.2.0.5 which failed to install.
  wai-1.4.0 depends on transformers-base-0.4.1 which failed to install.
  wai-app-static-1.3.1.2 depends on transformers-base-0.4.1 which failed to
  install.
  wai-extra-1.3.2.4 depends on zlib-bindings-0.1.1.3 which failed to install.
  wai-logger-0.3.0 depends on transformers-base-0.4.1 which failed to install.
  wai-test-1.3.0.4 depends on transformers-base-0.4.1 which failed to install.
  warp-1.3.7.4 depends on transformers-base-0.4.1 which failed to install.
  xml-conduit-1.1.0.3 depends on xml-types-0.3.3 which failed to install.
  xml-types-0.3.3 failed during the building phase. The exception was:
  ExitFailure 1
  xss-sanitize-0.3.3 depends on tagsoup-0.12.8 which failed to install.
  yaml-0.8.2.3 depends on transformers-base-0.4.1 which failed to install.
  yesod-1.1.9.2 depends on zlib-bindings-0.1.1.3 which failed to install.
  yesod-auth-1.1.5.3 depends on zlib-bindings-0.1.1.3 which failed to install.
  yesod-core-1.1.8.2 depends on zlib-bindings-0.1.1.3 which failed to install.
  yesod-default-1.1.3.2 depends on zlib-bindings-0.1.1.3 which failed to
  install.
  yesod-form-1.2.1.3 depends on zlib-bindings-0.1.1.3 which failed to install.
  yesod-json-1.1.2.1 depends on zlib-bindings-0.1.1.3 which failed to install.
  yesod-persistent-1.1.0.1 depends on zlib-bindings-0.1.1.3 which failed to
  install.
  yesod-platform-1.1.8 depends on zlib-bindings-0.1.1.3 which failed to install.
  yesod-routes-1.1.2 depends on path-pieces-0.1.2 which failed to install.
  yesod-static-1.1.2.2 depends on zlib-bindings-0.1.1.3 which failed to install.
  yesod-test-0.3.5 depends on xml-types-0.3.3 which failed to install.
  zlib-bindings-0.1.1.3 failed during the building phase. The exception was:
  ExitFailure 1
  zlib-conduit-1.0.0 depends on zlib-bindings-0.1.1.3 which failed to install.
If I'm yak-shaving in my first hour of trying a language out it's not a good sign.


You have to scroll back to see the actual error. That could be anything from missing C libraries to a permission problem.


I'm sure it's fixable, it's just not the best first impression for those new to the ecosystem :) I love trying out new languages (the Pragmatic Bookshelf's Seven Languages in Seven Weeks is great) but for production work I want to know that down the line, if I need lib x, it won't be hard to find and, even more importantly, it will just work (or the Google machine will lead me in the right direction).

I actually stayed away from Scala for a while for this reason. SBT would blow up left and right. It's more mature now and with Play you can build performant web apps very quickly.


I do not think ghc/haskell-platform targets homebrew, as most programs/libraries do not right?

So some amount of leg work is bound to come up if you install many packages through homebrew.


My point is not "it is fixable". My point is you didn't look at the error and just decided "shit's broken". Your problem is with 99.999999% certainty a simple problem that is unrelated to cabal. The fact that it is possible to see an error does not indicate that there is some uncertainty about whether or not you will be able to install lib x.


I'm at the point where I have zero interest in learning new languages to "expand my mind" unless they have very obvious application to problems I am facing right now. For example, Matlab sucks, and scientific computing sucks in most other languages, so learning Julia is probably worth my time. What is Haskell's killer domain? Meta-programming C code seems more than a little esoteric for it to be worth the massive investment of time and opportunity cost of curling up with a Haskell book and a REPL in my free time. I spent a lot of time learning Clojure, which is an amazing language, but here I am years later and I still reach into my toolbox of imperative languages (C, Ruby, Matlab, Java) to solve any "real" problems quickly, efficiently, and in the most straightforward (and understandable by mere mortals) way.

If I want to expand my mind (towards the end of building software) I will fire up Coursera and study math or algorithms, things which are, imho, more widely applicable, timeless, and have a "bigger bang for the buck" for self-study than learning another programming language.


Algebras? I learned some basic Haskell: the type system and algebraic data types.

When I went to study group theory, I found it came easily. Now I'm working my way towards Lie algebras. The cool thing about Haskell is that for any particular algebra, there's often a good tutorial or literate Haskell implementation. Picking the right algebra for the problem domain can save you a lot of computation, and once you understand how it works you're always free to drop back to an imperative language to do the computation. Haskell makes it easy to play around and discover deep relationships, but it does require being in a mind-expanding mood to get to a comfortable place.
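A small illustration of how directly Haskell encodes an algebra: a monoid is just a set with an associative operation and an identity, and the Monoid class states exactly that. Which monoid you wrap a number in decides what "combining" means:

```haskell
import Data.Monoid (Product (..), Sum (..))

-- Same fold, different algebras: the newtype picks the
-- associative operation and its identity (0 for +, 1 for *).
main :: IO ()
main = do
  print (getSum (foldMap Sum [1, 2, 3, 4 :: Int]))          -- 10
  print (getProduct (foldMap Product [1, 2, 3, 4 :: Int]))  -- 24
```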


That complex function will turn out to be not all that complex at all; it can be written as just three simple functions composed together! -- Why does this require learning Haskell, rather than ML or Scheme or Erlang?


Those are all great languages. Haskell by no means has a monopoly on mind expanding features. But I think that the enforced purity of Haskell along with laziness and the powerful type system is a particularly brain twisting mix.


I believe this too. Presuming that functional programming is a good thing, Haskell is a great way to learn it. It will force you to operate in a functional way all the time, thus speeding up your learning process.


The skill of breaking down large functions into smaller simple functions is a common skill of programming and should be practiced in all languages (C, Java, Python, etc). I'm not really sure from the article what I am supposed to gain by learning Haskell.

I'm not debating the opinion that Haskell is great language to learn, but the benefits of learning the language were not clear to me after reading the article. Does anyone care to elaborate?


"The skill of breaking down large functions into smaller simple functions is a common skill of programming and should be practiced in all languages (C, Java, Python, etc)"

Haskell will show you that when you thought you broke your code down into small functions in those languages, you were wrong in ways you currently can't even see and they're still shot through with duplication and mixing of concerns.

This is really the core of the "it changes how you program"; the environments you mention are so full of conflation of IO and mutation and state that you don't even see it, because it's just the way it is. Haskell will ungently pry those apart for you, if you stick around long enough to actually become proficient. You can take this back into other programming languages where this separation is not enforced, but I'm not convinced you can learn it where it is not enforced. The feedback cycle is so direct in Haskell ("Type Error: ...") and so diffuse in other languages (two months later "Aw crap, I should have isolated IO from non-IO better...").

(Oh you think that last parenthetical is a joke? Ha! I wish.)
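The direct feedback cycle described above can be sketched in a few lines. Everything here (the function names, the log message) is invented for illustration, not taken from any real codebase:

```haskell
-- A pure function: its type guarantees no IO happens inside.
double :: Int -> Int
double x = x * 2

-- Attempting to sneak IO into it is rejected at compile time:
--
--   double' :: Int -> Int
--   double' x = do
--     putStrLn "logging"   -- type error: this is IO, not Int -> Int
--     return (x * 2)
--
-- IO must appear in the type, which makes the separation visible:
doubleLogged :: Int -> IO Int
doubleLogged x = do
  putStrLn ("doubling " ++ show x)
  return (x * 2)

main :: IO ()
main = do
  r <- doubleLogged 21
  print (double r)
```

The point is that mixing IO into pure code is a compile error ("Type Error: ...") rather than a design regret two months later.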


Thank you (and jochu and pilgrim689) for elaborating on this. I hope to take the plunge at some point and learn Haskell! It seems like an interesting perspective on programming.


Learning Haskell taught me that mutation and state are good things, present both in the fundamental nature of computers and in the real world. Haskell's attempts to eliminate state entirely result in a profound increase in complexity.

For me, learning Haskell was like visiting a third-world country: enlightening, very different, but ultimately I leave feeling very grateful that I'm not a permanent resident. All in all, I would recommend the experience.


So obviously you were not very enlightened because it makes no such attempts.


No state in Haskell? Not in my Haskell. State is usually made explicit, yes. There is a State monad for a reason...


It doesn't sound like you actually did learn Haskell. Haskell does not in any way attempt to eliminate state, and the idea that it does is absurd. Why would the language ship with a module called "State" if it were trying to eliminate state?


Substitute "treat like a pariah" for "eliminate" and you're good to go!


[deleted]


(define (compose f g) (lambda (x) (f (g x))))

What if f is n-ary (g returns a list, and apply spreads it)?

(define (compose f g) (lambda (x) (apply f (g x))))

If g is n-ary:

(define (compose f g) (lambda args (f (apply g args))))

I gave them all the same name because naming things is hard.
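For comparison, here's a sketch of the same idea in Haskell, where composition is the built-in (.) operator; the function names are made up for illustration:

```haskell
-- Haskell's composition operator: (f . g) x == f (g x)
addOne :: Int -> Int
addOne = (+ 1)

timesTwo :: Int -> Int
timesTwo = (* 2)

addThenDouble :: Int -> Int
addThenDouble = timesTwo . addOne

-- One of the arity cases above: uncurry adapts a two-argument
-- function so it can consume the pair the previous stage produced.
sumThenDouble :: (Int, Int) -> Int
sumThenDouble = timesTwo . uncurry (+)

main :: IO ()
main = do
  print (addThenDouble 5)      -- (5 + 1) * 2
  print (sumThenDouble (3, 4)) -- (3 + 4) * 2
```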


I'm no expert, but I think Haskell is the most practical of the non-practical languages.


I am no Haskell expert, so I might be wrong on this, but is that code right? I would imagine the fourth line should read

    isIfOfInterest (CIf cond _ _ _) = not (null (listify isFooIdent cond))
not

    isIfOfInterest (CIf cond _ _ _) = not (null (listify fooIdent cond))


You are correct. The nice thing is that this mistake would have been caught compile-time.


The bad thing is that people still write blog posts containing uncompiled code.


You mean, "identifier not found"... ?


Exactly. Ever tried running PHP, Ruby or Python code with an undefined variable somewhere? It works flawlessly... for a minute... an hour... or a month... and then it suddenly hits you out of the blue.


Relevant points on Haskell, from another discussion, and sort of echoes my own thoughts:

http://news.ycombinator.com/item?id=5314701

http://news.ycombinator.com/item?id=5314507


At university, the very first language we are taught is Haskell. Lots of people had never programmed before in anything, and everyone managed fine. Nothing scary at all, no need for an understanding of category theory or anything.

Monads are (fairly) simple things that just have a really bad reputation, partly the fault of the Haskell docs, which make them seem complex.

Firstly:

Functors are things we can map over in a logical way; that's it. Monads are just a fancy way of saying "side effects might happen here" (obviously not in all cases; things like the Maybe monad involve no side effects, but it's fair enough for most uses).
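A small sketch of both claims, using only the Prelude; the helper names (halve, quarter) are invented for the example:

```haskell
-- fmap applies a function inside a structure, preserving its shape.
doubled :: Maybe Int
doubled = fmap (* 2) (Just 21)

-- The Maybe monad sequences computations that can fail.
-- No side effects are involved anywhere.
halve :: Int -> Maybe Int
halve n
  | even n    = Just (n `div` 2)
  | otherwise = Nothing

quarter :: Int -> Maybe Int
quarter n = halve n >>= halve  -- fails if either step fails

main :: IO ()
main = do
  print doubled       -- Just 42
  print (quarter 12)  -- 12 -> 6 -> 3
  print (quarter 6)   -- 6 -> 3, then halve 3 fails
```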


I know monads are supposed to be simple, but I've been playing with Haskell on and off for years now and I still can't figure out how do-notation desugars into >>= operations.


It's not intuitive, but it's straightforward once you understand the semantics. There are basically two cases. Case 1:

    x <- monadicFunctionA
    monadicFunctionB
is rewritten as:

    monadicFunctionA >>= \x -> monadicFunctionB
The important thing here is that x is just the variable name in a lambda expression that you don't see. Case 2:

    monadicFunctionA
    monadicFunctionB
is rewritten as:

    monadicFunctionA >> monadicFunctionB
where a >> b is shorthand for a >>= \_ -> b (the bound value is simply discarded). Given these rules, you can approximate do-notation like so:

    monadicFunctionA >>=
    \x -> monadicFunctionB >>
    monadicFunctionC ...


    do
      a
      rest
desugars to

    a >>
    do
      rest
and

    do
      x <- a
      rest
desugars to

    a >>= \x ->
    do
      rest


Monads are fairly easily expressed in mathematics, but trying to explain them conversationally appears to be an exercise in futility. There are dozens or hundreds of these explanations on the Internet, and I haven't seen one that hasn't been declared wrong by the ones-who-know. (edit: and I'm not one-who-knows, but I'm pretty sure "side effects might happen here" falls in this category). Classes, objects, functions, and so forth don't have these issues. There are nitpicks and minor variances between how different languages conceive of these, but all seem to be fairly easy to explain and understand.


I don't know, my dinner table explanation of monads has always been that they are DSLs. Even the Gentle Introduction offers this viewpoint (it develops a DSL in Haskell, as a monad, for writing resourceful computations, that is, computations which consume some user-definable resource at each step and automatically suspend upon consumption of all available resources).

After all, one of the philosophies of software development is that you're essentially building up your application as a layer of languages, each of which uses the constructs of the language beneath it as its own primitives, until at the highest level you have what is nothing more or less than a language built from the ground up to express the actions of your application in the simplest terms.

You can write a lot of Haskell software without ever delving much more into monads than what I just wrote. It's really that easy. Monad transformers also tend to scare people, but that is silly. A monad transformer just takes the DSL that you've already made with your current monad and it adds in new language features and functionality for you automatically. That's all.
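One way to make the "monads are DSLs" view concrete is a tiny hand-rolled logging monad. This sketch uses only the Prelude, and all the names (Logged, say, program) are invented:

```haskell
-- A tiny "logging DSL": each step yields a value plus log lines.
newtype Logged a = Logged (a, [String])

instance Functor Logged where
  fmap f (Logged (a, w)) = Logged (f a, w)

instance Applicative Logged where
  pure a = Logged (a, [])
  Logged (f, w1) <*> Logged (a, w2) = Logged (f a, w1 ++ w2)

instance Monad Logged where
  Logged (a, w) >>= k =
    let Logged (b, w') = k a
    in Logged (b, w ++ w')

-- The single "language feature" of this DSL: emit a log line.
say :: String -> Logged ()
say msg = Logged ((), [msg])

-- Programs in the DSL read like a small imperative language:
program :: Logged Int
program = do
  say "starting"
  let x = 6 * 7
  say ("computed " ++ show x)
  return x

main :: IO ()
main = do
  let Logged (result, logLines) = program
  print result
  mapM_ putStrLn logLines
```

The do-block is just a program written in the little language that the Monad instance defines; a transformer would layer another feature (state, failure, ...) on top of it.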


Classes and objects don't have this issue because there is no underlying formal definition for them. They're also less general. Most descriptions of basic OOP ideas are on the same level as "side effects might happen here"; there's just nobody to correct them and nothing to correct them against.

Don't get me started about pointers. They're at least as difficult to grasp for most beginners as monads ever were. Hell, people have days of trouble just with references as in Java!

Functions do have this issue, and most of what different languages call "functions" aren't functions in the mathematical sense. It's just that unlike with monads, functions have entered the common mind, and the people talking about them informally far outnumber those interested in correctness. And many people have plenty of problems with things like higher-order functions.


I'm curious what school you went to; there can't be many universities that teach Haskell?


University of Texas at Austin teaches Haskell to 2nd year students in a required course: CS 337 - Theory in Programming Practice.

Programming Languages is also taught in Haskell, but it's an optional course and the language varies by professor.


Imperial College London. I believe Oxford also teaches Haskell to first year CS students.


Glasgow University teaches Haskell to 2nd year students.


University of Bristol teaches Haskell in the first year (along with C, Java and a Prolog variant called Frill) if you're doing CS or CSE (Computer Systems Engineering, a four-year mix of CS and EE).

I wouldn't be surprised if Oxford and Cambridge both teach it as well since last I looked (well over a decade ago now) their CS courses had a heavy theoretical slant and Haskell and the Lambda calculus go together well!

I'd imagine that the likelihood of being taught Haskell at university correlates well with the university treating CS as an offshoot of its maths department.


I will learn Haskell if you people will stop spamming me about it.


I dunno, even on here it gets less press than Python, Ruby or even Go. It's just that since most people already know imperative programming well, the Python/Ruby/Go articles that get upvoted tend to be a bit more advanced.

Will you stop using Python? Or would you not learn Python?


I don't read any page about Ruby. Why do you read everything about Haskell?


Does FP really help you reason about programs in a better way? I have a hard time believing that, since whatever logical errors I make in programs can easily manifest themselves in FP programs as well.

Having tried a little Haskell myself, the same bugs that I have in imperative code also appear in functional code, albeit in a different form.



