A Detour
Last night I started reading the Okasaki book. As I said before, all the examples are in ML, a programming language I've never bothered to learn. Naturally I would have preferred Scheme, but since Scheme itself isn't purely functional, it was probably better to go with a language that is. Purely functional code can, of course, be written in Scheme. But Scheme technically does give you the ability to clobber variable values with
set!
, so there are ways to make it behave like an imperative language. I made it 11 pages into Okasaki's book when the first exercise showed up, and I wanted to hack it. So naturally I was obligated to close the book, download, install, and learn ML!
So far it's been lots of fun. I'm using this online textbook to get a feel for the language. I'm currently on the chapter about Curried functions.
I have to say that I prefer Scheme's parenthesis-based syntax to the definition-based syntax that ML and Haskell use. But I can see how people get addicted to this syntax. I think it is ultimately more appropriate to "functional" programming as it forces you to think of everything in terms of definitions, which, after all, is the point. Functional programming is meant to be the programming equivalent of plugging things into equations. Programming math: you write out your problem, and the program substitutes things in stepwise until it reaches simplest form - and that's the output. Scheme almost works like that ... well, actually I should say that Scheme does work like that, but that the parentheses make it a bit less obvious. It's very clear in ML and Haskell - where programs really are written like equations.
For example - here's the first program I wrote in ML last night - Okasaki's exercise 2.1:
fun suffixes nil = [[]]
| suffixes(x::xs) = (x::xs)::suffixes(xs);
Which takes a list and returns all the tails in decreasing order of length,
suffixes[1,2,3,4] = [[1,2,3,4],[2,3,4],[3,4],[4],[]]
The mathematical style of the definition is well-emphasized by ML syntax.
The first line:
fun suffixes nil = [[]]
tells ML that we're defining a function called
suffixes
, and that the suffixes
of nil
is an empty list inside a list (ML would also understand nil::nil
here - the ::
operator makes the item on the left-hand side (LHS) the first element in the list on the RHS). So it's a simple equation. It just says - if you get a call that looks like suffixes nil
, then just replace that with nil::nil
and you're done. The next line:
| suffixes(x::xs) = (x::xs)::suffixes(xs);
is really just a continuation of the first line, written here on another line to make it easier on human eyes. The bar at the beginning separates alternative clauses - read it as a logical or: "or you might get a call that looks like this:". And in this case it's suffixes applied to a nonempty list. (We know this is a list because all lists are formed in this way - one element, x here, is made the first element in the list xs. xs could be the empty list [], or it could be a preexisting list.) All this line says is that if you get a call to
suffixes
with a list as the argument, replace it with a line that makes the list you got the first element in a list - and that list is obtained by resolving suffixes(xs)
- or finding the meaning of suffixes applied to the list you got minus the first item. And of course this proceeds stepwise. It first gets suffixes applied to a list - and so it takes that list and makes it the first item in a list that ... it doesn't have yet, so it resolves suffixes applied to a list again - this time the same list, only minus the first element, etc. - and eventually there is nothing left for xs, so it returns [[]] and performs all the :: operations that have been waiting, getting the answer noted above.
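To make the stepwise substitution concrete, here is (roughly) the sequence of rewrites for the example above - each line replaces one call to suffixes with the right-hand side of its defining equation:

```
suffixes [1,2,3,4]
= [1,2,3,4] :: suffixes [2,3,4]
= [1,2,3,4] :: [2,3,4] :: suffixes [3,4]
= [1,2,3,4] :: [2,3,4] :: [3,4] :: suffixes [4]
= [1,2,3,4] :: [2,3,4] :: [3,4] :: [4] :: suffixes []
= [1,2,3,4] :: [2,3,4] :: [3,4] :: [4] :: [[]]
= [[1,2,3,4],[2,3,4],[3,4],[4],[]]
```

(:: associates to the right in ML, so the chain of conses parses the way you'd hope.)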
In Scheme the same program would have been:
(define suffixes
(lambda (x)
(if (null? x)
'(())
(cons x (suffixes (cdr x))))))
Readers who don't know Scheme will need to know that
cons
is the same as ML ::
and that cdr
is the same as taking everything in a list except the first item. So it's (a) very easy to see that Scheme functions similarly to ML - they are both functional languages. But (b) with the Scheme syntax (which I personally prefer, but I'm willing to allow that that's just a function of familiarity) it looks more like a program. With ML syntax, it's closer to a simple math equation. And so I think there is a good argument that ML syntax is more appropriate to the functional programming style, which is, after all, supposed to be like doing math equations.
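For readers following along at a REPL, evaluating the Scheme version defined above gives the same answer as the ML one:

```
(suffixes '(1 2 3 4))  ;; => ((1 2 3 4) (2 3 4) (3 4) (4) ())
```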
One thing that's a clear advantage of ML over Scheme is this pattern-matching approach. ML is able to notice things about the form of the argument that's passed. So, for example, it can notice that it got a null argument, or that it got an argument that is the result of list-making - that is, an item consed onto another item. In Scheme this isn't as straightforward. We get an argument, and then we have to apply a series of tests to see what it is. First we have to explicitly check to see whether it's the empty list (
(null? x)
), and if it's not we just assume that it's a list. The effect is the same, but ML is more convenient. Pattern-matching can, of course, easily be added to Scheme (and in fact Friedman provides us with it in the form of a short series of macros called pmatch.ss, which we load into our homework assignments) using the
define-syntax
macros. So I guess the designers didn't see the need to include it; they try to keep Scheme minimalist. But I would personally argue that this is something so fundamental to functional programming that adding it wouldn't really break the philosophy at all. It would give Scheme an even more "functional" feel than it has now. I seem to remember some talk about adding pattern-matching to Scheme 6 (which will be out in about half a year), but I'm not completely sure and am too lazy to read through the R6RS proposal to find out for myself.

Aside from syntax differences, one other big difference between Scheme and ML is that ML simply doesn't allow you to clobber variable values. Scheme allows it but does its best to keep you from knowing about it. It's sort of like the dirty secret. You're strongly discouraged from using it - but Scheme does provide
set!
, which can change the value of a variable. In ML, the way I understand it so far, anyway, this simply isn't allowed. Once you define a variable, it stays that way for the duration of the program. This is one thing I don't like about it. Don't get me wrong, I understand the arguments for immutability, but I also like things that leave me a choice. Just because immutability is generally a good idea doesn't mean it always is. And the truth is, although programming with mutation is semantically dangerous, it is often much, much more efficient.
For example, I wrote a genetic algorithm in C++ this summer - and it was very fast precisely because it constantly clobbered the values of variables. It created one array for everything at program start, and then never copied anything for the rest of the execution. I had to be very careful about handling this, of course, but the efficiency comes from the fact that the program only consumes the amount of memory needed for this one array. If I had done this in functional style - having to recreate the array every time something changed - the program might still be running. Either that or it would have quickly run out of memory. Sometimes you need to clobber values for efficiency, and a good programming language should allow you to do so. I could have written my GA in Scheme, but it's not clear to me that it would have worked in ML (though I'm told that ML has the best compiler - it's described as "aggressively optimizing" - so it may be that the compiler performs this optimization for you, in which case writing it in ML would have saved me a LOT of thinking time spent checking that everything worked out).

Now, of course there is a mechanical way to transform purely functional programs into programs that clobber variables. And the argument that Friedman makes for functional programming is that since this mechanical method is available, we can just have the machine do it. Humans should stick to their functional programming because that is efficient for humans; the machine can do the optimizing. It's really hard to argue with that. And indeed, maybe the ML compiler does all of this for you so that you don't have to. When I have time (the grad school refrain!) I should really write the GA in ML and see how fast it runs. Certainly the code would be easier to read, and I could probably finish it in an afternoon rather than the 2.5 days it took me to write it in C++.
But until I know for sure that the ML compiler always performs these kinds of transformations, it seems to me that it's better to leave the user at least the ability to clobber variable values if he wants to. In the same way, I wholly approve of leaving GOTO in C/C++. It's still there, but discouraged enough that most programmers never touch it. Of course, liberal use of goto statements makes a program unreadable and prone to errors. So they should be avoided. But it's one thing to give people good advice and quite another to enforce the good advice. C/C++ is right to leave the option open, I think, however unfashionable this is in programming language design these days. And I would make a similar argument for Scheme here. Immutability may be a good idea, but that's really a decision for the programmer and not the language.
To be fair, though, I've only been learning ML since last night. Maybe there is a sneaky, hidden way to clobber variables in ML too.
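As it happens, there is - and it's less sneaky than advertised. Standard ML provides reference cells: ref creates a mutable cell, ! reads it, and := clobbers it. A minimal sketch (these are standard Basis Library features, though I haven't used them in anger yet):

```
val counter = ref 0                  (* a mutable cell holding 0 *)
val _ = counter := !counter + 1      (* clobber it: it now holds 1 *)
val current = !counter               (* dereference: current is 1 *)
```

There is also a mutable Array structure in the Basis Library, which is exactly the sort of thing the genetic-algorithm trick above would need.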
There isn't really a point to this post except to say that ML is cool, and I'm having fun playing with it. I still prefer Scheme, but I'm also aware of ways in which ML is a bit more intuitive.
One thing that learning ML helped me do was define a function in Scheme that makes my Church numbers automatically. I was having trouble doing that earlier - but reading something in the ML book made it click. Here's the Scheme code that works:
(define make-church-number
(lambda (n)
(lambda (z)
(lambda (s)
(((make-cn n) z) s)))))
(define make-cn
(lambda (n)
(if (zero? n)
(lambda (x) (lambda (f) x))
(lambda (x) (lambda (f) (f (((make-cn (- n 1)) x) f)))))))
Probably there is a more efficient way to do this, so I may be embarrassing myself by publishing it, but this also works. So now I can easily test my addition and multiplication functions on arbitrarily large numbers. Though I won't - because I happen to know that it would take a long time for the program to complete for anything much over 100.
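One handy sanity check is a function that converts a Church number back into an ordinary integer by plugging in 0 for z and add1 for s. (unchurch is just my name for this little helper.) Assuming the definitions above are loaded:

```
(define unchurch
  (lambda (cn)
    ((cn 0) (lambda (x) (+ x 1)))))

(unchurch (make-church-number 5))  ;; => 5
```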