Friday, February 8, 2008

Ten Years of Purely Functional Data Structures

In 1998, I published a book called Purely Functional Data Structures. Ten years later, the book is still selling well. (Every time I get a royalty check, my wife the skeptic says “People are still buying that?!”) The ten-year anniversary seems like a good time to reflect back on my experience with this book.

Origins

My first introduction to functional programming was at Carnegie Mellon University, where I worked for Peter Lee and Phil Koopman on an implementation of a subset of Haskell (called “Eddie”), written in Standard ML. So right from the very beginning, I was working with a mix of strict and lazy evaluation.

I've always been fascinated by data structures, and I soon realized that something was strange about data structures in these two languages. On the one hand, when they worked, data structures were more pleasant to work with than in any other language I had used before. On the other hand, sometimes they just didn't work.

The issue was immutability. In pure functional programming, all data structures are immutable, meaning that they cannot be changed once created. Of course, data structures frequently need to be changed, so what happens is that you create a new copy of the data structure that incorporates the change, without actually modifying the old copy. This can often be done efficiently by having the new structure share large chunks of the old one. For example, stacks are trivial to implement this way. Tree-like structures such as binary search trees and many kinds of priority queues are almost as easy.
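
To make the sharing concrete, here is a minimal stack in Haskell (an illustration for this post, not code from the book). The old version survives every operation untouched:

```haskell
-- A persistent stack: pushing allocates one new cell whose tail is the
-- old stack, so old and new versions share all remaining cells.
data Stack a = Empty | Cons a (Stack a)

push :: a -> Stack a -> Stack a
push x s = Cons x s            -- O(1); s itself is never modified

pop :: Stack a -> Maybe (a, Stack a)
pop Empty      = Nothing
pop (Cons x s) = Just (x, s)   -- O(1); returns the shared tail

-- Both versions remain usable: v1 still holds 2,1 after v2 = 3,2,1 is built.
v1, v2 :: Stack Int
v1 = push 2 (push 1 Empty)
v2 = push 3 v1
```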

However, some common data structures, such as FIFO queues and arrays, can actually be quite difficult to implement this way. Some people take this as a sign that functional languages are impractical. However, it's not the fault of functional programming per se, but rather of immutability. Immutable data structures are just as awkward to implement in conventional languages such as Java. (Of course, immutable data structures also offer huge advantages, which is why, for example, strings are immutable in Java.)

In 1993, I ran across an interesting paper by Tyng-Ruey Chuang and Benjamin Goldberg, showing how to implement efficient purely functional queues and double-ended queues. Their result was quite nice, but rather complicated, so I started looking for a simpler approach. Eventually, I came up with a much simpler approach based on lazy evaluation, leading to my first paper in this area. (When I say simpler here, I mean simpler to implement—the new structure was actually harder to analyze than Chuang and Goldberg's.)
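
The flavor of that lazy approach can be conveyed by a two-list queue, sketched here in Haskell (a simplification for this post, not the exact structure from the paper). Elements are added to a rear list and removed from a front list; when the rear grows longer than the front, it is reversed onto the front, and because Haskell lists are lazy, that reversal is suspended rather than performed immediately:

```haskell
-- A two-list functional queue. The invariant length rear <= length front
-- is restored by check; the reverse is O(n), but amortized over the
-- operations that built up the rear, each operation costs O(1).
data Queue a = Queue Int [a] Int [a]   -- front length, front, rear length, rear

empty :: Queue a
empty = Queue 0 [] 0 []

check :: Queue a -> Queue a
check q@(Queue lf f lr r)
  | lr <= lf  = q
  | otherwise = Queue (lf + lr) (f ++ reverse r) 0 []

snoc :: Queue a -> a -> Queue a
snoc (Queue lf f lr r) x = check (Queue lf f (lr + 1) (x : r))

uncons :: Queue a -> Maybe (a, Queue a)
uncons (Queue _ [] _ _)      = Nothing
uncons (Queue lf (x:f) lr r) = Just (x, check (Queue (lf - 1) f lr r))
```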

Almost immediately after my paper on queues, I worked on a paper about functional arrays, here called random-access lists because they combined the extensibility of lists with the random access of arrays. (Because the lead time on journal papers is longer than the lead time for conferences, this paper actually appeared before the paper on queues, even though it was written later.) The approach I used, which turned out to be very similar to one used earlier by Gene Myers, was based on properties of an obscure number system called skew binary numbers. This marked the beginning of a second theme of my work, that of designing implementations of data structures in analogy to number systems. For example, inserting an element into a data structure is similar to incrementing a number, and merging two data structures is similar to adding two numbers. I called this approach numerical representations. The idea was to choose or create a number system with the properties you want (such as constant-time addition), and then build the data structure on top of that. The advantage of this approach is that you can often do the hard work in a simplified setting, where you don't have to worry about carrying around the actual data in your data structure.
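
As a small taste of a numerical representation, here is a skew-binary random-access list sketched in Haskell (simplified for this post; details may differ from the paper and the book). The sizes of the trees mirror the digits of a skew binary number, so cons behaves like a skew increment:

```haskell
-- A list of complete binary trees whose sizes (1, 3, 7, ...) follow the
-- skew binary number system: cons is O(1), lookup is O(log n).
data Tree a = Leaf a | Node a (Tree a) (Tree a)
type RList a = [(Int, Tree a)]   -- (size, tree) pairs

cons :: a -> RList a -> RList a
cons x ((w1, t1) : (w2, t2) : ts)
  | w1 == w2 = (1 + w1 + w2, Node x t1 t2) : ts   -- like incrementing
cons x ts = (1, Leaf x) : ts

lookupR :: Int -> RList a -> a
lookupR _ [] = error "index out of range"
lookupR i ((w, t) : ts)
  | i < w     = lookupTree w i t
  | otherwise = lookupR (i - w) ts

lookupTree :: Int -> Int -> Tree a -> a
lookupTree _ 0 (Leaf x)     = x
lookupTree _ 0 (Node x _ _) = x
lookupTree w i (Node _ t1 t2)
  | i <= w `div` 2 = lookupTree (w `div` 2) (i - 1) t1
  | otherwise      = lookupTree (w `div` 2) (i - 1 - w `div` 2) t2
lookupTree _ _ (Leaf _)     = error "index out of range"
```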

Amusingly, this paper marked the second round of a circle dance between Rob Hoogerwoord, Tyng-Ruey Chuang, and me. First, Hoogerwoord published a paper on functional queues, then Chuang, then me. And again with functional arrays, first Hoogerwoord published a paper, then Chuang, then me.

Around this time, I was looking for a thesis topic (that period that John Reynolds describes as “making even the best student look bad”). Fortunately, my advisor Peter Lee had patience with me during this search. I considered a bunch of topics related to functional programming, but none of them seemed right. I was enjoying the work I was doing with functional data structures, but there didn't really seem to be enough meat there for a dissertation. That changed one day during a walk in the snow.

I had become fascinated by the idea of concatenation (appending two lists). This is trivial to do in conventional languages using, say, doubly-linked lists. But it's much harder to do efficiently with immutable lists. I thought that I had come up with a new approach, so I made an appointment to talk to Danny Sleator about it. He had been involved in some of the early work in this topic (along with Neil Sarnak and Bob Tarjan), and his office was just a few floors below mine. The day before the meeting, I realized that my approach was completely broken. Afraid of looking like an idiot, I went into a frenzy, playing with different variations, and a few hours later I came up with an approach again based on lazy evaluation. I was sure that this approach worked, but I had no idea how to prove it. (Again, this was a case where the implementation was simple, but the analysis was a bear.) My wife had our car somewhere else that day, so I ended up walking home. The walk usually took about 45 minutes, but it was snowing pretty hard, so it took about twice that long. The whole way home I thought about nothing but how to analyze my data structure. I knew it had something to do with amortization, but the usual techniques of amortization weren't working. About halfway home, I came up with the idea of using debits instead of credits, and by the time I got home the entire framework of how to analyze data structures involving lazy evaluation had crystallized in my head.

With this framework in mind, I now believed that there was enough meat there for a dissertation, and I sailed through my thesis proposal and, about a year later, my thesis defense.

The Book

After my defense, Peter Lee suggested that I try to publish my dissertation as a book. He contacted two publishers he had worked with before, and one of them, Lauren Cowles of Cambridge University Press, bit.

The process of turning my dissertation into a book was pure joy. I thought that the basic organization of my dissertation was pretty solid, so mostly I was able to focus on adding and adjusting things to make it work better as a book. For example, I no longer had the constraint from my dissertation of having to focus on original work, so I was free to add data structures that had been developed by other people.

The main additions were expanded introductory material (such as my simplification of red-black trees, which was developed a few weeks after my thesis defense in a series of emails with Richard Bird), exercises, and an appendix including all the source code in Haskell (the main text used source code in Standard ML).

After several months of hacking, I was able to ship off a giant Postscript file to Lauren Cowles, and that was that.

Well, not quite. Lauren asked me what I wanted for cover art. My wife was an avid quilter, and I liked the idea of having a suitably geeky quilt pattern on the cover. I adapted a quilt pattern called Flying Geese, and sent a sketch to Lauren, who got a graphic artist to draw up the final version. I was quite happy with it. Later, I found out that Flying Geese was actually the name of a different pattern, and to this day, I don't know the correct name of the pattern. It looks a little different on the book than in quilts because quilters usually use only two levels of the recursion, but I used three.

Reactions

Initial reviews were positive and the hardcover edition sold out pretty quickly, so Cambridge University Press released a paperback edition the following year. I don't know of any courses that used it as the main textbook (it would have to be a pretty specialized course to do that), but several mainstream courses used it as a supplementary textbook.

Sales were steady but trending downward, until 2004, when a book review by Andrew Cooke appeared on Slashdot. After that, sales doubled and have stayed at that level for several years. Of course, I can't be sure this increase was because of Slashdot, but it seems likely. Thanks, Andrew!

The most interesting place the book has shown up was in the data of a TopCoder programming contest problem involving sorting books on a bookshelf. It was part of a list of CS textbooks. (And, no, I wasn't the author of the problem, although I have written some problems for TopCoder.)

The funniest moment about the book was when I worked at Columbia University. A potential grad student from an unrelated field in CS was visiting from Germany, and I was talking to him about the department. When I told him about my research, he said, “My roommate has been reading a book about something like that. I don't remember the name, but do you know which book I mean?” I reached over and pulled my book off the shelf and said, “Do you mean this one?” His mouth fell open.

The most flattering review of the book appeared a few years ago in a blog by José Antonio Ortega Ruiz, who said

“The exposition is extremely elegant and synthetic, and i’ve spent many, many hours immersed in its 220 pages (i can’t think of any other programming book with a better signal to noise ratio, really).”
Thanks, jao!

Complaints

Of course, you can't please everybody. The most common complaints I've heard are

  • No hash tables. This was mentioned in the first review of the book on Amazon, which called the lack of hash tables a “major omission, likely because they don't fit well into a functional environment.” More than a year later, a second reviewer mentioned that hash tables are in fact in the book, but buried in an exercise. Both reviewers are correct. I only gave a very short description of hash tables in an exercise, because there just wasn't that much to say. A hash table is basically the composition of two finite maps, one approximate and one exact. The choice of what structures to use for those finite maps is up to you. Typically, an array is used for the approximate map, but, as the first reviewer noted, arrays do not fit well into a functional environment.
  • No delete for red-black trees. I present my simplified form of red-black trees very early in the book as an introduction to functional data structures, but I only describe insertions, not deletions. There were two main reasons for that omission. First, deletions are much messier than insertions, and the introduction did not seem like the right place for a lot of messy details. Second, and more importantly, probably 95+% of uses of functional red-black trees don't need deletions anyway. Because the trees are immutable, when you want to delete something, you can simply revert back to a previous version of the tree from before the unwanted item was inserted. This is almost always both the easiest and the right thing to do. The exception is when you inserted something else that you want to keep after the item to be deleted, but this is extremely rare.
  • Why Standard ML? In the last 10 years, Haskell has become steadily more popular, while Standard ML has largely been replaced by OCaml. A frequent complaint has been people wishing that Haskell was in the main text, with Standard ML (or OCaml) in the appendix, instead of the reverse. Even at the time, I debated with myself about the choice of Standard ML, particularly because in some chapters I had to fake extensions to the language, such as lazy evaluation and polymorphic recursion. I went with Standard ML for two main reasons. First, I wanted to be able to distinguish between lazy evaluation and strict evaluation, which was much more difficult in Haskell. Second, I really wanted to be able to give module signatures for all the common structures (stacks, queues, etc.) that I was going to be revisiting over and over. You can sort of do this in Haskell, but not cleanly.
  • Too esoteric/impractical. Guilty as charged! I did try to point out a few data structures along the way that were especially practical, but that was not my main focus. If it were, I certainly would not have included umpteen different implementations of queues! Instead, I was trying to focus on design techniques, so that readers could use those techniques to design their own data structures. Many of the example data structures were included as illustrations of those design techniques, not because the examples were particularly practical or valuable by themselves.
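
For the curious, the composition-of-two-maps view of hash tables mentioned in the first complaint can be sketched in a few lines of Haskell (my own illustration, using Data.Map for the approximate map instead of an array, and a caller-supplied hash function):

```haskell
import qualified Data.Map as Map

-- Outer (approximate) map: hash value -> bucket.
-- Inner (exact) map: an association list from keys to values.
type HashTable k v = Map.Map Int [(k, v)]

insertH :: Eq k => (k -> Int) -> k -> v -> HashTable k v -> HashTable k v
insertH hash k v =
  Map.insertWith (\new old -> new ++ filter ((/= k) . fst) old)
                 (hash k) [(k, v)]

lookupH :: Eq k => (k -> Int) -> k -> HashTable k v -> Maybe v
lookupH hash k t = Map.lookup (hash k) t >>= lookup k
```

Swapping in different structures for either map gives the usual design space; the hard part in a functional setting remains the approximate map, for exactly the reason the first reviewer gave.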

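And for anyone wondering what the insertion half of those simplified red-black trees looks like, here is a sketch in Haskell (reconstructed for this post; the book presents it in Standard ML, and details may differ). All the rebalancing lives in the four cases of balance:

```haskell
data Color = R | B
data RB a = E | T Color (RB a) a (RB a)

-- Rebalance a black node whose subtree contains a red-red violation.
balance :: Color -> RB a -> a -> RB a -> RB a
balance B (T R (T R a x b) y c) z d = T R (T B a x b) y (T B c z d)
balance B (T R a x (T R b y c)) z d = T R (T B a x b) y (T B c z d)
balance B a x (T R (T R b y c) z d) = T R (T B a x b) y (T B c z d)
balance B a x (T R b y (T R c z d)) = T R (T B a x b) y (T B c z d)
balance c a x b = T c a x b

insert :: Ord a => a -> RB a -> RB a
insert x t = blacken (ins t)
  where
    ins E = T R E x E
    ins s@(T c a y b)
      | x < y     = balance c (ins a) y b
      | x > y     = balance c a y (ins b)
      | otherwise = s
    blacken (T _ a y b) = T B a y b
    blacken E           = E   -- unreachable: ins never returns E

toList :: RB a -> [a]
toList E           = []
toList (T _ a x b) = toList a ++ [x] ++ toList b
```

Deletion is where the mess comes in, which is exactly why it stayed in the exercises.
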
The Future?

People sometimes ask me if I'm planning a second edition. The answer is maybe, but not anytime soon. I'm still pretty happy with what I said and how I said it, so I don't feel a burning need to go back and correct things. If I wrote a second edition, it would be to modernize the presentation and to add some new data structures and techniques. For example, I might include the finger trees of Ralf Hinze and Ross Paterson.

In terms of the presentation, however, the biggest question would be what language to use. Standard ML probably doesn't make sense any more, but I'm still not sure Haskell is ready to take over, for the same reasons mentioned above. I suppose I could switch to OCaml, but I'm not sure that's enough of a change to warrant a new edition. I have faith that the vast majority of my readers are perfectly capable of making the translation from Standard ML to OCaml themselves.

35 comments:

Unknown said...

This book is a gem.

Thank you for spending the time putting the material together.

sharvil said...

A few months ago, I started learning OCaml and got into a brief online discussion with a friend over immutable data structures. A few days later, I unexpectedly found a copy of your book on my doorstep courtesy of my friend.

I opened the present with delight and immediately dropped everything else I was doing to dive into this fantastic view of data structures. I have come across few books that have been such a delight to read. When I thanked my friend for such a delightful gift, he mentioned that he, too, owns a copy and was just as excited reading it as I am.

Thank you for such a concise, straightforward, and honest look at immutable data structures.

jefu said...

Your book has been on my bookshelf for about as long as it has been in print and I've read it, reread it and consulted it off and on.

A wonderful (and challenging) read that has taught me a lot. A book I recommend to my best students.

Thanks.

calvin said...

Haskell has recently become a lot more popular -- even within the last year -- and is poised to take a huge leap in popularity with the publishing of Real-World Haskell by O'Reilly.

A newer edition of your book in Haskell, with some additional data structures, would have a much larger audience than you currently have -- meaning lots more royalties, as well as having a much greater impact.

Please, please, pretty please!

Unknown said...

Your book is used as a textbook for the Functional Algorithms and Data Structures course at the Institute of Computer Science of the University of Wrocław, Poland. We're quite big on functional programming over there.

Sam Griffin said...

Huh, I just bought your book recently. I only really briefly skimmed through most of it, but it seems like the amortized costs of laziness could be quite interesting. Is this a field that could be mined for much more material?

Chris Okasaki said...

Mietek, thanks for the info. I'm curious, is this an undergraduate course or a graduate course?

sam, it depends on what you mean by "mining", whether you mean finding more that's been written about it or doing more original research in the area. If the former, then not a lot. There have been a number of papers that have used the technique to analyze a particular implementation. There's also been some recent work on partially automating such analyses.

If the latter, then sure, there's lots more that could be done! Practical time and space analyses that account for lazy evaluation are rather "thin on the ground" (to steal a phrase from my English friends).

Doug said...

First, thanks for the book, it's been a real education for me.

Second "... to this day, I don't know the correct name of the pattern".

If it's the pattern shown on this Amazon page, I think it's an approximation to a Sierpinski Triangle

Chris Okasaki said...

Douglas, it's definitely a close relative of the Sierpinski Triangle. I just wish I could find the name for the quilt pattern. Quilters always find much more colorful names than mathematicians. It's probably something like Swallows Returning to Capistrano or Ginkgo Leaves Falling or Staggered Flight of B-2 Bombers. Ok, maybe not the last one.

Geoff Knauth said...

I got here via LtU. I'm one of those who bought the book after the Andrew Cooke review. I had no idea the book was so old (grin)--the ideas were all pretty new to me. Thank you for making a difficult subject enjoyable to read.

Chris Okasaki said...

If you've ever wondered what this sort of thing would look like in a more mainstream language, Eric Lippert has just written part 11 in a huge series of posts about immutability in C#.

kHodel said...

I did some Google-digging, and it looks like the inspiration for the book cover image is a variation of a quilt pattern called "Birds in the Air" (though it may go by other names as well). Here are a few links

birdsintheair2.pdf, from how-to-quilt.com
flockofgeese.pdf, from how-to-quilt.com
cover of a birds in the air quilt book
the afore-mentioned book on amazon
picture of a birds in the air quilt on a bed

Chris Okasaki said...

kHodel, thank you! The second link lists Flying Geese as an alternate name for the pattern, so maybe that's why I was originally confused. (Flying Geese is usually the name for a more linear pattern of triangles.)

David said...

I've liked Purely Functional Data Structures for so long and for so much that I made it the subject of my first (and so far only) book review: Persistent Data Structures - now (possibly) practical.

I have just started on a Python implementation of the data structures. It would be a good language for a 2nd edition - popular, easy to read/understand (even if you don't know the language that well), and has the functional programming features needed (closures, polymorphism, GC).

s9 said...

Some of the more interesting algorithms (to me, at least) in Dr. Okasaki's book have fleshed out OCaml implementations in the Cf library of my hobby project on sourceforge.net. You can download here. There's also an impure, but still functional, persistent real-time, catenable deque that runs pretty fast. It was inspired by Dr. Okasaki's work on the more complicated purely functional variant.

yc said...

First let me say thank you for the book - it opened my eyes on what's possible.

Second - I imagine legions of developers like me will thank you for a fuller treatment of hash table, either in an updated edition or blogs.

The reason is that the hash table has been almost indispensable in many developers' day jobs since the internet revolution. This swiss-army tool of data structures has shaped many developers' paradigm, and it's simply difficult to see how things should and could be done otherwise.

While not every one of them is interested in enhancing their skills, a small portion of this group is interested in learning the FP way, and it's simply hard to go from one paradigm to the other directly without a good bridge.

A fuller functional hash table treatment, or a set of design principles for converting an imperative solution into a purely functional one, would be *the* bridge for many of us who rely on hash tables every single day.

So, while academically there may not be much more to say about hash tables, practically there is a lot to learn for those of us on the other side of the canyon looking over ;)

Alex Slesarenko said...

I agree with caracarn11 that your book can help a lot on the way from the imperative to the FP world.

Being an imperative C# programmer, I have always been interested in FP, so recently I started to play with F#. I found it a very natural way into the functional world for a .NET programmer.

Luckily, F# is about to become the next mainstream language.

I guess F# is the language of choice for a second edition, since all the examples could also be cross-compiled with OCaml.

George Colpitts said...

I'd love to see a second version. One of the languages should still be Haskell, maybe the other could be Scala?

I'd also like to see a section on graph data structures including a discussion of hybrid functional / imperative implementations.

Ning Ke said...

Is there an implementation of the lazy notations used in the book? (the $ and lazy/force keyword)

jsalvati said...

Do another version!

Ytzvan Mastino said...

If you'll do another version, I hope you use Scala instead :D

Anonymous said...

Hi,
it is a really wonderful book. And it is really nice to read why you did not include the delete method for red-black trees. Are you aware that the standard library of SML/NJ had a bug in its delete operation that resulted in completely unbalanced red-black trees (more a linear list :-))? This bug can be triggered even with very small test cases; see, e.g., page 16 of "A.D. Brucker and B. Wolff, On Theorem Prover-based Testing. Formal Aspects of Computing. doi:10.1007/s00165-012-0222-y" for an example test case that triggers the bug.

Chris Okasaki said...

@test4242: Interesting. I had not heard that about the SML/NJ library.

Anonymous said...

Chris, isn't it much more of a leap to pretend to have lazy evaluation extensions in ML or OCaml than it would be to use the real world facilities Haskell provides for specifying lazy vs. strict? Have you thought about contacting Simon Peyton Jones about some of your concerns to see what his thoughts are? I agree with some of the other commenters here that a Haskell edition would garner a much larger audience, again citing the success of Real-World Haskell.

Unknown said...

Why not LISP? A canonical language survives decades.

STARRY said...

I have been reading this book recently. It is really an insightful book. Thank you very much, Dr. Okasaki. Are the solutions to the exercises in the book available for download?

Chris Okasaki said...

STARRY: No. However, many people have posted solutions to some of the problems.

Unknown said...

I am trying to get through your book, re-implementing the examples and doing the exercises in Scala. As I'm doing that, I've encountered a typo which is most probably already known to you — but if it's not (and since the book is seemingly printed on demand these days), it won't hurt to send it through:

On page 133, the source of lookupTree contains the following clause:

if i < w div 2 then lookupTree(w div 2, i - 1, t_1)
else lookupTree(w div 2, i - 1 - w div 2, t_2)

It’s easy to see that since the index in the tree is supposed to be from 0 to w - 1 (inclusive), the initial comparison must be i <= w div 2, lest the logic falls apart.

The same typo repeats in Figure 9.7 on page 134

Hope it helps — and thank you for the great book!

Chris Okasaki said...

Alexey Filippov: Thanks. That does appear to be < on page 133, but the version on page 134 shows the <=.

said...

What's new in purely functional data structures since Okasaki?
http://cstheory.stackexchange.com/q/1539/

Hyunsok Oh said...

Hi,

I found an erratum on page 145 of your book. The function update (in the match clause for ZERO) was written:

fun update (i, e, ZERO ps) =
let val (x, y) = lookup (i div 2, ps)
val p = if i mod 2 = 0 then (e, y) else (x, e)
in ZERO (update (i - 1, p, ps)) end <------------- here

The i - 1 should be i div 2, which I confirmed by writing the code in Scala, and also by checking fupdate on the next page.

Regards,

Frank Hyunsok Oh

Jeff Kayser said...

Hi, Chris.

First of all, Thank you for writing the book! It is important. I want to work through it, just to learn the concepts.

I downloaded and installed sml/nj on my Fedora VM and tried to enter the signature for a stack (Figure 2.1). I didn't know what to substitute for the alpha character, so I just substituted the word "alpha":

$ sml
fgrep: warning: fgrep is obsolescent; using grep -F
Standard ML of New Jersey (64-bit) v110.99.4 [built: Sat Sep 09 12:13:22 2023]
- signature STACK =
= sig
= type alpha Stack
= val empty : alpha Stack
stdIn:3.6-4.4 Error: syntax error: deleting IDA IDA VAL
-

Some questions:
1) How do I resolve that error?
2) There are some non-ASCII characters in the book. Example alpha, arrows, etc. What should I substitute for them when typing the code into the compiler?
3) Should I be using a different SML compiler to work through this book?
4) Should I work through the book using OCaml instead?

Thanks for your help.

Chris Okasaki said...

Jeff,

The type variables in SML are often written as Greek letters in typeset documents, but in ASCII code they would be written as 'a (instead of alpha), 'b (instead of beta), etc. Arrows are written as -> (in types) or => (in lambdas).

Jeff Kayser said...

Hi, Chris.

Thank you! That helps. Sorry for the newbie question. Thank you for your answer.