Recently, a question by Chris Done on Reddit spawned yet another debate on whether Haskell’s laziness is actually a good thing.
With this post I’m not going to state my take on the matter; instead I’ll speculate on what Haskell could look like were it a strict language, and how it would approach the standard problems.
Before we go on, please note that all the following code is written under the assumption that Haskell is strict.
Let’s take a look at the bool function, which is essentially a function-level equivalent of the “if-then-else” construct:
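A plausible definition looks like this (the argument order is my assumption; the standard Data.Bool.bool takes the condition last):

```haskell
-- Select between two already-evaluated alternatives based on a condition.
bool :: Bool -> a -> a -> a
bool True  onTrue _       = onTrue
bool False _      onFalse = onFalse
```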
Of course, the first question that comes to mind concerns the evaluation of the alternative branches: we don’t want to waste resources evaluating the branch that isn’t taken.
But look at this:
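Something along these lines, where each branch is wrapped in a function on unit (the names bool' and safeDiv are illustrative, mine):

```haskell
-- Each branch is a function on unit, so only the chosen one is ever run.
bool' :: Bool -> (() -> a) -> (() -> a) -> a
bool' True  onTrue _       = onTrue ()
bool' False _      onFalse = onFalse ()

-- Even under strict evaluation, the division below is never attempted
-- when the divisor is zero:
safeDiv :: Int -> Int -> Int
safeDiv n d = bool' (d == 0) (\() -> 0) (\() -> n `div` d)
```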
In other words, all we need to do is defer the evaluation of each branch by wrapping it in a function on unit.
The type () -> a already looks like a concept of its own, so we might as well give it a name:
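Presumably a plain type synonym:

```haskell
-- A deferred computation: a value of type a that will only be
-- produced once the function is applied to ().
type Deferred a = () -> a
```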
Next comes the thought: “It suspiciously looks like something that could be a Monad”. Yes it does, and yes it is! I would have “newtyped” the thing and declared the instances, but there’s actually no need, since such instances already exist for the type ((->) r), of which Deferred is just a special case.
Expressed in terms of Deferred, the intent of our condition function becomes clearer:
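A sketch of the retyped function (Deferred is repeated here so the snippet stands alone):

```haskell
type Deferred a = () -> a

-- The signature now says directly that each branch is a deferred value:
bool' :: Bool -> Deferred a -> Deferred a -> a
bool' True  onTrue _       = onTrue ()
bool' False _      onFalse = onFalse ()
```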
It feels like we’re getting closer to laziness, but we’re still lacking the preservation of already computed results. We’re in a pure language, after all, so a specific function of type () -> a is guaranteed to always produce the same result; wouldn’t it be enough to compute it once and then return the stored result on every later demand? That’s exactly the problem that Haskell’s Thunk is meant to solve.
The thunk is the basic mechanism that drives Haskell’s laziness; if you don’t know about it yet, the internet is filled with excellent tutorials. Here is how we could implement it explicitly as a library if Haskell were strict:
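One plausible implementation, assuming a strict Haskell would still have IORef and unsafePerformIO (the names thunk and force are mine):

```haskell
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- A thunk is a mutable cell holding either the deferred computation
-- or its already-computed result.
newtype Thunk a = Thunk (IORef (Either (() -> a) a))

-- Wrap a deferred computation.  NOINLINE keeps the compiler from
-- duplicating the cell and thereby defeating the sharing.
thunk :: (() -> a) -> Thunk a
thunk f = Thunk (unsafePerformIO (newIORef (Left f)))
{-# NOINLINE thunk #-}

-- Run the computation on first demand and cache the result; return
-- the cached result on every subsequent demand.  The hidden mutation
-- is unobservable, because the stored function is pure.
force :: Thunk a -> a
force (Thunk ref) = unsafePerformIO $ do
  state <- readIORef ref
  case state of
    Right result -> return result
    Left compute -> do
      let result = compute ()
      writeIORef ref (Right result)
      return result
```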
This datatype evidently forms a
Functor, but I’ll let your imagination figure the instances out.
Now, having thunks at hand, we can easily implement any lazy data structure. For instance, here is the lazy stream, which in Haskell we call a “list”:
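A sketch of how it could look (the Thunk definitions from above are repeated so the snippet stands alone; naturalsFrom and takeList are illustrative names of mine):

```haskell
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

newtype Thunk a = Thunk (IORef (Either (() -> a) a))

thunk :: (() -> a) -> Thunk a
thunk f = Thunk (unsafePerformIO (newIORef (Left f)))
{-# NOINLINE thunk #-}

force :: Thunk a -> a
force (Thunk ref) = unsafePerformIO $ do
  state <- readIORef ref
  case state of
    Right result -> return result
    Left compute -> do
      let result = compute ()
      writeIORef ref (Right result)
      return result

-- A lazy stream: the head is strict, the tail is deferred.
data List a = Nil | Cons a (Thunk (List a))

-- Even under strict evaluation this infinite stream terminates,
-- because the recursive call hides behind a thunk:
naturalsFrom :: Int -> List Int
naturalsFrom n = Cons n (thunk (\() -> naturalsFrom (n + 1)))

-- Demand a finite prefix, forcing only as much as needed:
takeList :: Int -> List a -> [a]
takeList n _ | n <= 0 = []
takeList _ Nil        = []
takeList n (Cons x rest) = x : takeList (n - 1) (force rest)
```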
Looking at it, we can extract the general pattern, turning it into a List monad transformer:
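Presumably by making the deferral mechanism a type parameter instead of hard-wiring it to Thunk; a sketch:

```haskell
import Data.Functor.Identity

-- The general shape: m decides how the tail is wrapped.
-- With m = Thunk we recover the lazy stream above.
data ListT m a = Nil | Cons a (m (ListT m a))

-- A small demonstration with m = Identity (no deferral at all):
toList :: ListT Identity a -> [a]
toList Nil                      = []
toList (Cons x (Identity rest)) = x : toList rest
```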
After refactoring, this becomes simply:
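One plausible refactoring collapses the two constructors into Maybe, the same shape the list-t package uses (cons, nil and toList are helper names of mine):

```haskell
import Data.Functor.Identity

-- A single wrapped step: Nothing ends the list, Just carries the
-- head together with the remainder.
newtype ListT m a = ListT (m (Maybe (a, ListT m a)))

nil :: Applicative m => ListT m a
nil = ListT (pure Nothing)

cons :: Applicative m => a -> ListT m a -> ListT m a
cons x rest = ListT (pure (Just (x, rest)))

toList :: ListT Identity a -> [a]
toList (ListT (Identity step)) =
  case step of
    Nothing        -> []
    Just (x, rest) -> x : toList rest
```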
Since Thunk forms a monad, we get a lazy pure stream for free:
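The specialization could look like this (Thunk and the refactored ListT are repeated so the snippet stands alone):

```haskell
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

newtype Thunk a = Thunk (IORef (Either (() -> a) a))

thunk :: (() -> a) -> Thunk a
thunk f = Thunk (unsafePerformIO (newIORef (Left f)))
{-# NOINLINE thunk #-}

force :: Thunk a -> a
force (Thunk ref) = unsafePerformIO $ do
  state <- readIORef ref
  case state of
    Right result -> return result
    Left compute -> do
      let result = compute ()
      writeIORef ref (Right result)
      return result

newtype ListT m a = ListT (m (Maybe (a, ListT m a)))

-- The lazy pure stream is just ListT specialized to Thunk:
type List a = ListT Thunk a

naturalsFrom :: Int -> List Int
naturalsFrom n = ListT (thunk (\() -> Just (n, naturalsFrom (n + 1))))

takeList :: Int -> List a -> [a]
takeList n _ | n <= 0 = []
takeList n (ListT t) =
  case force t of
    Nothing        -> []
    Just (x, rest) -> x : takeList (n - 1) rest
```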
As well as the strict linked list:
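Presumably by specializing to Identity instead, which defers nothing (fromList and toList are helper names of mine):

```haskell
import Data.Functor.Identity

newtype ListT m a = ListT (m (Maybe (a, ListT m a)))

-- Identity wraps the tail in nothing at all, so every cell is
-- computed eagerly: a plain strict linked list.
type StrictList a = ListT Identity a

fromList :: [a] -> StrictList a
fromList []       = ListT (Identity Nothing)
fromList (x : xs) = ListT (Identity (Just (x, fromList xs)))

toList :: StrictList a -> [a]
toList (ListT (Identity Nothing))          = []
toList (ListT (Identity (Just (x, rest)))) = x : toList rest
```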
Thoughts on compiler extensions
There’s also an extra thing to consider: a compiler could implement automatic function memoization, or special-case functions of type
() -> a, turning them into implicit thunks, such as the ones Haskell already has. There would then be no need for the explicit
Thunk type facade. Whether that would be a good thing, though, is a subject for another debate.
Looking at the above, I see code with clear semantics about when and whether things get evaluated: it’s all there in the type signatures. I also see the language’s magical constructs implemented as libraries. Say what you will, but I would love to see that in an actual language.