If you're coming from an imperative programming language background, then when you program a solution to a problem, you might first try solving it the way you would have in an imperative language, because there that was the only practical way. In Haskell, though, thinking directly in terms of the problem statement is often practical, and it's usually the clearest way of expressing the solution.
I have some interesting examples here. Note that, in these examples, the code I show is not necessarily the best way to solve the particular problem. These are just vehicles for demonstrating some interesting techniques and ways to think about the solution. The best solution depends on your time and memory constraints, what compiler or interpreter you're using, and what sort of tuning you do.
In case you're extracting all the code in this file and running it, you'll want this:
> import Data.Char
> import Data.Bits
Let's say you're traversing a tree and building a histogram of the contents of some field in the leaves of the tree. With an imperative background you might assume you need to start with an empty histogram, then traverse the tree and add to the bins of the histogram as you go. If you used an inefficient immutable data structure, then this could be too slow. If you used an efficient immutable data structure, it might be fast enough. If it isn't, then you could use a mutable data structure in a monad.
But all of the solutions above are in terms of passing some data structure around while traversing the tree, collecting information for the histogram. I don't think that's as near to the problem statement as you could get. Consider the following solution:
> data Tree = Branch Tree Tree | Leaf Char
>
> countLetters tree = map (countLetter tree) ['a'..'z']
>
> countLetter tree c = case tree of
>   Branch tr1 tr2 ->
>     (countLetter tr1 c) + (countLetter tr2 c)
>   Leaf c1 ->
>     if c1 == c then 1 else 0
What I like about this solution is that the definition of countLetters is a histogram (map) of the number of occurrences in the tree (countLetter tree) for each letter (['a'..'z']). To me that feels about as close as you can get to the original problem. As for efficiency, it avoids the mutable/immutable data structure problem by just defining the sum of one particular letter (countLetter), which requires no copying.
With an imperative background you might have trouble thinking of a solution like this because it's so inefficient in most (all?) imperative languages. You're asking for 26 traversals of the tree, and in most languages that's what you'll get. But a pure, functional language like Haskell is more likely to do aggressive optimization, primarily because it doesn't have side-effects to get in the way. So you get the advantage of describing the problem in a straightforward way, and the compiler can refactor it to generate more efficient machine code. Note that I say "can" rather than "will". It depends on your compiler. It may or may not be clever enough to avoid multiple traversals of the tree. See the note on compiler cleverness.
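If your compiler turns out not to be that clever, you can still get a single traversal by writing it yourself. Here's a minimal sketch (the names countLettersOnce and bump are mine, not from the text above, and the Tree type is restated so the example stands alone): it threads a 26-bin list of counts through one walk of the tree.

```haskell
data Tree = Branch Tree Tree | Leaf Char

-- One traversal: thread a 26-bin list of counts through the tree.
countLettersOnce :: Tree -> [Int]
countLettersOnce tree = go tree (replicate 26 0)
  where
    go (Leaf c) bins
      | c >= 'a' && c <= 'z' = bump (fromEnum c - fromEnum 'a') bins
      | otherwise            = bins          -- ignore non-lowercase leaves
    go (Branch t1 t2) bins = go t2 (go t1 bins)
    bump i bins = [ if j == i then n + 1 else n | (j, n) <- zip [0 ..] bins ]
```

Whether this beats the 26-traversal version depends on your tree and on what the compiler does with each; the point is only that a single-pass version is available without reaching for monads.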
The fibonacci series starts with a pair of 1's, then each of the rest is the sum of the previous two. Say your goal is to return the nth fibonacci number. With an imperative mindset, you'll probably think in terms of starting with those two 1's, and work your way up to the nth number. Perhaps like this:
> fibonacci1 0 = 1
> fibonacci1 1 = 1
> fibonacci1 n = fibonacci1' (n-2) 1 1
>
> fibonacci1' 0 a b = a+b
> fibonacci1' n a b = fibonacci1' (n-1) b (a+b)
The solution above works well, but it has extra complication due to describing how to solve the problem, instead of just what the problem is. If you want to stay closer to the problem statement, then you might use this:
> fibonacci2 0 = 1
> fibonacci2 1 = 1
> fibonacci2 n = fibonacci2 (n-1) + fibonacci2 (n-2)
This is great! It's almost pure problem statement -- definition of the first two numbers in the series, and the recursive definition of each of the rest. The only problem with this is that it has an exponential explosion in it and bogs down your computer before n gets very big. If you ask for, say, fibonacci 9, it will calculate fibonacci 7 and fibonacci 8 and add them. But fibonacci 7 will require calculating fibonacci 5 and fibonacci 6, and fibonacci 8 will require fibonacci 6 and fibonacci 7. Look what's happening:
    f9
    f7 + f8
    (f5 + f6) + (f6 + f7)
    ((f3 + f4) + (f4 + f5)) + ((f4 + f5) + (f5 + f6))
    ...
To make this approach practical, you need memoization, which is remembering values that you've already calculated. A clever compiler could automatically do that for you, but you don't want that unless the compiler is very clever, because it has to know when memoization is going to help and when it's going to slow things down.
So you want to specify the memoization yourself. As with the histogram problem, you could try storing the calculated fibonacci numbers in an inefficient immutable data structure, which is probably too slow. Or you could use an efficient immutable data structure, which might be fast enough. Or you could use a monad and mutable data structure, which would be as fast as you're going to get, if you do it right.
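For concreteness, here's a minimal sketch of that last option: a bottom-up fill of a mutable table inside the ST monad, frozen into an immutable array at the end. The name fibonacciST is mine, not something from the text.

```haskell
import Control.Monad (forM_)
import Data.Array (Array, (!))
import Data.Array.ST (runSTArray, newArray, readArray, writeArray)

-- Fill a mutable table bottom-up inside ST, then freeze it.
fibonacciST :: Int -> Integer
fibonacciST n = table ! n
  where
    table :: Array Int Integer
    table = runSTArray $ do
      arr <- newArray (0, max 1 n) 1   -- slots 0 and 1 start out as 1
      forM_ [2 .. n] $ \i -> do
        a <- readArray arr (i - 2)
        b <- readArray arr (i - 1)
        writeArray arr i (a + b)
      return arr
```

runSTArray keeps the mutation sealed inside the ST computation, so fibonacciST is still a pure function from the caller's point of view.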
You could also use a trick for the memoization, like this:
> fibonacci3 n = series3 !! n
>   where
>     series3 = 1 : 1 : map f [2..]
>     f n = fibonacci3 (n-2) + fibonacci3 (n-1)
This defines the fibonacci series, and defines (fibonacci n) as the nth element of the series. To define the series, it defines the first two elements of the series, then the rest in terms of the recursive fibonacci definition. Note that fibonacci3 is defined in terms of series3, which is defined in terms of f, which is defined in terms of fibonacci3. You're probably used to a function defined in terms of itself, but you may not be used to a data structure defined in terms of itself. The thing that makes it work is lazy evaluation. Haskell won't try to compute the whole series all at once. It will compute each element as it needs it. And each element after the first two is defined in terms of the previous two. It's essentially the same as fibonacci2, but you get memoization by recursing through a data structure.
Here's perhaps the sexiest solution:
> fibonacci4 n = s !! n
>   where s = 1 : 1 : zipWith (+) s (tail s)
In terms of description, fibonacci4 is the same as fibonacci3, except that the part defining each element as the sum of the previous two is done in fibonacci4 by adding two series with zipWith, instead of mapping the addition of individual elements as in fibonacci3. Fibonacci4 is odd: it looks simpler than fibonacci3 because there's less text, but you could consider it more complicated, since it carries out the recursive definition indirectly, through a zipWith of two series.
Suppose you want to convert a specified number of the least-significant bits of a number to a string. With the imperative mindset you might try this:
> intToBinaryString1 0 _ = ""
> intToBinaryString1 bits n = rest ++ [char]
>   where
>     char = intToDigit $ n .&. 1
>     rest = intToBinaryString1 (bits - 1) (n `shiftR` 1)
It's a standard recursive definition, generating one character at a time, and stopping when you've generated all of them. If adding characters onto the right with ++ bothers you then you can do it a more efficient way and reverse the whole thing at the end:
> intToBinaryString2 bits n = reverse $ intToBinaryString2' bits n
>
> intToBinaryString2' 0 _ = ""
> intToBinaryString2' bits n = char : rest
>   where
>     char = intToDigit $ n .&. 1
>     rest = intToBinaryString2' (bits - 1) (n `shiftR` 1)
If either of the above had used the value of n as the terminating condition, then it would have gotten much messier because it needs to terminate on 0 or -1, and then it would have to add or subtract digits to get the right number of bits. That is, unless it did the following:
> intToBinaryString3 bits n = reverse $ take bits $ intToBinaryString3' n
>
> intToBinaryString3' n = char : rest
>   where
>     char = intToDigit $ n .&. 1
>     rest = intToBinaryString3' (n `shiftR` 1)
This one uses the trick of generating an infinite string, and then taking the number of digits needed. It doesn't need to worry about any termination condition at all.
All of the above are ok, but they really don't stick to the original problem statement. How about this:
> intToBinaryString4 bits n = map digit [bits-1,bits-2..0]
>   where digit = intToDigit . (.&. 1) . (shiftR n)
This is a map that converts the right number of bits directly to Char's, in the correct order.
Again, these were not necessarily the best ways to solve the problems, depending on your design constraints and compiler or interpreter, but they're a good place to start, close to the original problem description, after which you can optimize only if necessary.
Immutable data structures that blindly copy everything they're not updating might be too inefficient for your application. But they don't have to do all that copying. There are various tricks to avoid or reduce it. I won't go into them here; I'll just point you at Data.Array.Diff and Data.Map in the Hierarchical Libraries, and EdisonAPI and EdisonCore on Hackage.
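To see the flavor of the trick, note that a Data.Map insert returns a new map that shares almost all of its internal nodes with the old one. A tiny sketch (the names m1 and m2 are just for illustration):

```haskell
import qualified Data.Map as Map

-- Persistent update: m2 shares almost all of its nodes with m1,
-- so no wholesale copy is made.
m1 :: Map.Map Char Int
m1 = Map.fromList [ (c, 0) | c <- ['a' .. 'z'] ]

m2 :: Map.Map Char Int
m2 = Map.insert 'b' 10 m1   -- m1 itself is untouched
```

Both maps remain usable afterward, which is what makes accumulate-as-you-go code with immutable structures practical.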
Using the libraries above might save you from having to learn how it's done, but if your curiosity is piqued, check out Chris Okasaki's book, Purely Functional Data Structures, which you can find referenced here.
Mutable data structures have to be partitioned from the rest of your code in a monad. I won't go into any of that here. It's adequately documented elsewhere. I'll just say you should be familiar with these four, at the very least:
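As one illustration of that style, here's a hedged sketch of the mutable-histogram idea from the tree example, done with an unboxed mutable array inside runSTUArray. The Tree type is restated so the block stands alone, and it assumes the leaves hold only 'a'..'z'.

```haskell
import Data.Array.ST (runSTUArray, newArray, readArray, writeArray)
import Data.Array.Unboxed (UArray, (!))

data Tree = Branch Tree Tree | Leaf Char

-- Build the histogram imperatively; the mutation is sealed inside ST.
histogramST :: Tree -> UArray Char Int
histogramST tree = runSTUArray $ do
  bins <- newArray ('a', 'z') 0        -- assumes leaves are 'a'..'z'
  let go (Leaf c) = do
        n <- readArray bins c
        writeArray bins c (n + 1)
      go (Branch t1 t2) = go t1 >> go t2
  go tree
  return bins
```

For example, histogramST t ! 'a' gives the count of 'a' leaves in t, and the caller never sees any mutation.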
Some of the tricks above might be practical only if the compiler is clever enough to optimize the code enough. I want to draw your attention to two paragraphs from Phil Wadler's paper, "Monads for Functional Programming":
A number of researchers have proposed analyses that determine whether a given functional program uses an array in a single threaded manner, with the intent of incorporating such an analysis into an optimising compiler. Most of these analyses are based on abstract interpretation. Although there has been some success in this area, the analyses tend to be so expensive as to be intractable [2, 7].
Even if such analyses were practicable, their use may be unwise. Optimising update can affect a program's time and space usage by an order of magnitude or more. The programmer must be assured that such an optimisation will occur in order to know that the program will run adequately fast and within the available space. It may be better for the programmer to indicate explicitly that an array should be single threaded, rather than leave it to the vagaries of an optimising compiler.
He's concerned about whether the compiler should detect whether it's necessary to make a copy of a data structure before modifying it. If the structure is used anywhere else, then of course you have to make a copy. But it can be difficult for the compiler to tell whether a data structure will be used again. So if you're depending on the compiler to detect this, and you make some changes such that it no longer can, your program can suddenly run far too slowly.
We can generalize that to any compiler cleverness. If you're doing something that depends on the compiler being particularly clever about optimization, you'd better be careful. Are you going to be making any changes? (Of course you are!) What if one of your changes pushes your code outside the range in which the compiler can optimize sufficiently? You're in trouble, is what. It's something to keep in mind.
In case you're extracting all the code in this file and running it, you'll want this:
> main = do
>   let tree = aTree
>   print $ countLetters tree
>   print $ map fibonacci1 [0..5]
>   print $ map fibonacci2 [0..5]
>   print $ map fibonacci3 [0..5]
>   print $ map fibonacci4 [0..5]
>   let list = [0, 1, 42, -1, -42]
>   print $ map (intToBinaryString1 9) list
>   print $ map (intToBinaryString2 9) list
>   print $ map (intToBinaryString3 9) list
>   print $ map (intToBinaryString4 9) list
>
> aTree =
>   (Branch
>     (Branch
>       (Leaf 'a')
>       (Branch
>         (Leaf 'a')
>         (Leaf 'b')))
>     (Branch
>       (Leaf 'c')
>       (Leaf 'd')))
Comments to doug[at]DougAndJean[dot]com