Scheme Death Knell?
As a subject of minor annoyance, Lambda the Ultimate has a bit suggesting that someone translate Hofstadter's Scientific American columns - the ones that introduced Scheme to a general audience - into Haskell. Yet more evidence that Scheme is losing its "cool" factor.
The thing is, I don't think there's any good reason for it. There is nothing wrong with Scheme. I think the only actual reason people switch to Haskell and so on is just to jump on a new train. Scheme's been plugging along since 1975, and people therefore think it's time for something new.
Why Haskell? Here are the commonly-cited reasons:
- Haskell is Purely Functional - apparently, Haskell is a purely functional programming language. Scheme has some impurities floating around here and there. Well, wonderful. But I don't see the big deal. I mean, can it be a bad thing to allow some (limited) flexibility in the paradigm? I really do think this is going to be the hallmark of the next 5-10 years of PL design - this insistence on pure versions of various paradigms. And I also think it's silly. It's one thing for a language to support a particular paradigm and encourage its use, but to choose one paradigm exclusively is to deliberately exclude classes of users. But OK, y'all have fun making your loop objects in Ruby, or doing monadic I/O in Haskell. Don't let me spoil the fun!
- Monads - not that anyone actually knows what these are, but Monads are the shit, let me tell you. So this is the functional programmer's OOP. Monads are supposed to be some kind of huge foundational paradigm shift, but all they really are is a way to introduce sequencing and side-effects into "purely functional" programming languages. Which of course just illustrates the point above: there is no sense in having a purely functional programming language! Now, actually, I'm being too harsh. Monads may not be strictly necessary, but the fact that they had to be discovered to make Haskell work properly has led to more than one insight (demonstrating that seemingly unrelated computations in fact have related forms) in computation theory. So Monads are useful. It's just that it's a bit embarrassing to watch the Haskell people go all glowy over something that started out - let's be honest - as a workaround. They're scaffolding for people who like to pretend that programs are proofs - that programming is just math. But programming is more than that and we all know it.
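For the curious, here is a minimal sketch of the "sequencing" the monad machinery buys you, using the Maybe monad from the standard library (the function addBoth is a made-up example, not anything from the Haskell literature): each lookup may fail, and the monad threads the possible failure through the chain.

```haskell
import qualified Data.Map as M

-- Each lookup may fail; the Maybe monad sequences the two steps
-- and short-circuits to Nothing if either one fails.
addBoth :: M.Map String Int -> Maybe Int
addBoth m = do
  a <- M.lookup "x" m
  b <- M.lookup "y" m
  return (a + b)

main :: IO ()
main = do
  print (addBoth (M.fromList [("x", 1), ("y", 2)]))  -- Just 3
  print (addBoth (M.fromList [("x", 1)]))            -- Nothing
```

The do-block is the "sequencing" in question: the plumbing that checks each intermediate result lives in the monad instance, not in the code you write.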
- Call-by-need - I can never put my finger on just why, but I think this is largely hype too. For those not in the know - "call-by-need" is one broad category of argument-passing semantics a programming language can exhibit. The main ones are call-by-value, call-by-reference, call-by-name and call-by-need. The overwhelming majority of PLs use call-by-value. This means two things - broadly: (1) when you pass a value to a function parameter, the function will not change the value passed. It may perform operations on it using private variables, but it will not actually change the caller's value unless you ask it to. This is distinct from call-by-reference, where variable names all refer, throughout a program, to the associated memory register. In call-by-reference, when you name a variable inside a function x, it's the same x as any variable by that name outside the function. Parameters passed to functions are the same as the variables associated with them. Run-time efficient, but computationally confusing. (2) Arguments are evaluated before being passed to the function. This is the distinction with call-by-name. In call-by-name, arguments are passed as they are to functions. So - to use one of Friedman's examples - the difference can be seen in the return value of this function:
fun x = x + x
Looks like a function that doubles its argument, right? And under call-by-value it is - because x gets evaluated before it's passed. But with call-by-name it isn't necessarily. If we said:
fun (random 10)
we might not get double anything - we might get an odd number. And that's because in call-by-name what gets passed is (random 10) itself. So for each x in the body it is reevaluated. This seems silly, but it's actually very useful in some cases because it prevents code from being unnecessarily evaluated. In addition to sometimes being an efficiency gain (though not always), this has the huge advantage of minimizing the chances of going into infinite loops.
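A concrete sketch of that last point (the function first is an illustrative stand-in, not from the original post): under lazy evaluation, an argument that is never used is never evaluated, so even a divergent expression is harmless.

```haskell
-- The second argument is never used, so its thunk is never forced.
-- length [1 ..] would loop forever under call-by-value.
first :: a -> b -> a
first x _ = x

main :: IO ()
main = print (first 42 (length [1 ..]))
```

Under Scheme's call-by-value, the equivalent call would evaluate (length [1 ..]) first and never return.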
Well, call-by-need is supposed to be the best of both worlds. It has the lazy evaluation of call-by-name, but once an argument has been evaluated it also remembers that value so that the semantics come out right - i.e. our example function is a doubling function for all arguments again.
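One way to see the sharing in action is GHC's Debug.Trace utility, which prints a message when a thunk is actually forced (the function double here mirrors the doubling example above):

```haskell
import Debug.Trace (trace)

double :: Int -> Int
double x = x + x

main :: IO ()
main = print (double (trace "argument evaluated" 21))
-- The trace message appears once: the argument's value is shared
-- between the two uses of x. Under pure call-by-name it would be
-- evaluated (and traced) twice.
```

This is exactly the "remembers the value" behavior described above: the argument thunk is evaluated at most once, so the doubling function doubles again.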
I dunno - I can't prove it but I think there's something cheesy about this. It's another smug Haskell pretend-solution - something just tells me so. On the documented side - it is true that evaluation order becomes a problem here. I won't give a motivating example because I think this is probably obvious from the example above. Now, evaluation order isn't always such a big deal in functional languages (especially, one presumes, in purely functional languages...), but it could still be annoying I think. But until I can come up with something better than "evaluation order problems," I think I'm going to have to admit that this feature of Haskell is indeed Very Cool. After all, evaluation order problems are less severe than the greater potential for infinite loops that Scheme's call-by-value paradigm has.
So anyway - lots of the stuff that's supposedly cool about Haskell is...well, cool, but maybe not as cool as we're often led to believe. To all of the above I just want to say: I admit that call-by-need is probably cool. Monads are cool too, but maybe not to the extent people wet their pants over them (and anyway they're supported in Scheme too). The purely functional stuff is crap and is actually an argument against Haskell, in my opinion. But then, I underestimate the extent to which some people are born with the functional programming gene.
In any case, I think Haskell is a fad, and one that has already peaked, at that. It's here to stay, don't get me wrong, but it definitely isn't the Next Big Thing - neither in the functional programming subculture nor in the real world of industry programming.
Which is why I get a little annoyed that so many Haskell hackers think Scheme should just step aside. This town's definitely big enough for both of us, thanks partner.
10 Comments:
I was once fluent in Scheme and ML; I'm currently learning Haskell. Why isn't Scheme my choice?
Static typing: I like a language that helps catch my silly mistakes.
Module system: Can't program without one. Can't believe Scheme doesn't have one.
Support for standalone utilities: With reasonable effort, I can build a Haskell executable that looks pretty much like any other Linux executable.
Powerful libraries and utilities: Spend my time on new stuff instead of re-inventing wheels.
I don't hate Scheme by any means. But in 2006 I prefer ML or Haskell in almost every case.
You addressed the reasons why some people might prefer Scheme to Haskell, but failed to address the specific use-case that the article referenced by LtU points to: which language is a better tool for introducing people to programming.
The author of that article explains several areas where Haskell seems to offer advantages over Scheme as an introductory tool (e.g. syntax, strong typing, pattern matching.) The disadvantages are well-known and no one is going to aim to cover monads in any great depth in an introductory course, but it is quite possible that "the ultimate teaching language" was not actually invented when Steele and Sussman came down from the mountain bearing a spool of 9-track magtape...
I'm a Scheme programmer by choice. In fact, I never use Haskell. Still, I can't help but point out a few flaws in your thinking.
Yes, Haskell is purely functional. There is a reason for this purity -- the people who developed Haskell were interested in it. Both Scheme and Haskell are the product of academia. A purely functional language has some advantages over an impure language: stronger invariants. Presumably, this means that the language can have better compilers (outside of whole-program compilers) and easier analysis by humans. But I'm not going to argue about how useful any of this is. After all, I use Scheme because most of my code is just throwaway code used to express an algorithm or idea. I don't need the formality.
With that being said, I have used monads in Scheme. In fact, continuation-passing style (used to implement backtracking, for example) is monadic. Monads are an inherently useful way to structure sequential code that has some state or effects threaded through it. It's easy to get distracted by category theory, just ignore that stuff unless you are interested in denotational semantics ;)
Call-by-need makes sense in a functional language like Haskell. Since all `impure' computations must be built using monads, there's no reason for call-by-value. And call-by-need (plus pattern matching, of course) lets recursive definitions be written in a style that closely matches their mathematical definition. Call-by-need is really just a technique for implementing call-by-name in a purely functional language; arguments to functions are memoized so that they are evaluated only once. Because the language is referentially transparent, this has the same semantics as call-by-name. In a referentially transparent language, (RANDOM 10) would yield the same value without regard for context. So fun x = x + x would ALWAYS double the value, since the argument expression always yields the same value.
But, all of that aside, I think I understand your underlying concern about Haskell. It doesn't matter whether somebody labels programming as an art, a science, or an engineering discipline. In the end, if it is a commercial project then you are building something with a budget, requirements (hopefully), and a deadline. Is Haskell the best choice for your project? Who knows; it all depends on the task at hand and the technical abilities of the team. After all, good work can be done in Haskell. And good work can be done in Scheme.
So somebody wants to translate an article to a different language. Why so defensive? Can it be a bad thing to give people a choice in learning programming languages?
You are certainly, deeply misguided in your assessment of semantics. First, the body of work on cbv, cbn and friends is enormous, I mean, ENORMOUS, and dates back as far as the '60s. It is extremely formal, many times elegant and often very enlightening. That topic is still extensively researched these days and new, important results pop up all the time (see for example Wadler's recent work: http://homepages.inf.ed.ac.uk/wadler/topics/call-by-need.html).
There is *absolutely* nothing cheesy about call-by-name, or any other formal semantics of this sort. Call-by-name and call-by-need have their proper, important place in PL semantics. Unless you want to overthrow decades' worth of deep research which evidently you didn't even skim.
So, it looks like you need to get up to speed on the basic facts and even terminology of PL semantics. Any introductory text on the lambda calculus will probably do. Also, don't forget to read the uber-influential paper by Hughes, "Why Functional Programming Matters" (http://www.math.chalmers.se/~rjmh/Papers/whyfp.html). I am sure you will "catch" why call-by-name is actually useful and important.
Fingers crossed on your making these things clear to yourself. Don't preach until you do :)
It sounds like you haven't really given Haskell a good chance. Try it for a while and you might see the benefits of nonstrict semantics, referential transparency, monads, static typing, etc.
Static typing is a really important one. When I write programs in scheme or lisp, I find myself really annoyed with the bugs that end up in my programs which I feel should have been caught by the compiler.
I'd rather find my bugs early than end up looking at my output and saying "Well, that's wrong, I wonder why?" and having to trace through things in order to find something that would have been a type error.
Monads are very cool, and I would think that anyone who loves scheme would really love monads as well. Basically, monads are just a really good way to structure combinator libraries for embedded domain specific languages. As a result of the abstraction, you can write code which will run in every monad (usually control-structure-like things), or a wide class of monads, and accomplish related, but different tasks. I was actually thinking about writing a scheme which handled IO and other side effects monadically and had a similar do-syntax to Haskell's.
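The "code which will run in every monad" point can be made concrete with a tiny sketch (pairUp is an invented example name, not standard library code): one definition, instantiated at two different monads with two different meanings.

```haskell
-- The same definition works in any monad: Maybe gives
-- failure-propagation, lists give all combinations.
pairUp :: Monad m => m a -> m b -> m (a, b)
pairUp ma mb = do
  a <- ma
  b <- mb
  return (a, b)

main :: IO ()
main = do
  print (pairUp (Just 1) (Just 'x'))  -- Just (1,'x')
  print (pairUp [1, 2] "ab")          -- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]
```

The abstraction is doing real work here: nothing in pairUp mentions failure or nondeterminism, yet it exhibits both depending on the monad it runs in.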
Laziness/non-strict semantics is good for composability, which is something that you should know if you watched the SICP lectures. :) It's not just simple stream-processing that it's good for either (though that does capture the general idea rather well). It helps you decompose problems in ways that you just can't do in a strict language.
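The stream-processing flavor of that composability can be shown in a line (a small illustrative pipeline, not from the original comment): three independently written stages run over an infinite list, and laziness stops the whole thing as soon as enough results exist.

```haskell
-- map, filter and take are composed over an infinite list;
-- laziness means only as much of [1 ..] is generated as needed.
firstFive :: [Int]
firstFive = take 5 (filter even (map (* 3) [1 ..]))

main :: IO ()
main = print firstFive  -- [6,12,18,24,30]
```

In a strict language the inner map over [1 ..] would never terminate; here the consumer drives the producers.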
Referential transparency is something that even most Scheme programmers value. Certainly when I learned Scheme in university, we were required to write our programs in a maximally referentially-transparent way. It helps with debugging, it helps with program correctness, and it helps with just plain thinking about the code - it's just a good idea.
One of the things that really stood out for me while watching the SICP lectures recently was the fact that the discipline used in constructing datastructures is basically exactly the same one imposed by ML and Haskell's algebraic datatypes.
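As a rough illustration of that correspondence (the Shape type is a made-up example): an algebraic datatype plus pattern matching does the job of SICP's tagged cons-cell structures, with the compiler checking the tags.

```haskell
-- A tagged sum type; pattern matching replaces manual
-- type-tag dispatch on cons cells.
data Shape = Circle Double | Rect Double Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

main :: IO ()
main = print (map area [Circle 1, Rect 2 3])
```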
Anyway, I think you'd get more of the idea if you'd actually write some code in Haskell. I don't think anyone's saying that scheme is bad, it's just that Haskell is pretty damn cool, and you should really check it out. :)
Don't listen to these guys. Especially the ones saying, "Basically, monads are just XYZ." No such XYZ exists, I promise.
You are right: there is a problem with not evaluating arguments until the value is needed. Well, two problems. First, because the order in which things are evaluated is not the order things appear in source code, it's hard to imagine a debugger that could possibly be as sweet as debuggers in procedural languages.
More importantly, the unevaluated arguments take up memory. Thus a function like
f x = x + 1
is a ticking time bomb in Haskell. It's only a matter of time until someone calls that sucker in a loop and consumes all your memory with unevaluated thunks that say "((((...) + 1) + 1) + 1)".
I am dead serious. When you run your Haskell program and it goes to 1GB before grinding to a halt, the first thing you need to check is make sure you're not trying to do anything really crazy, like basic arithmetic.
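The standard example of this failure mode, and its standard fix, is summing a list with a left fold (a textbook illustration, not code from the comment): foldl accumulates exactly the kind of nested (+) thunks described above, while the strict foldl' forces the accumulator at each step.

```haskell
import Data.List (foldl')

-- foldl builds a million nested (+) thunks before anything is added;
-- foldl' evaluates the accumulator at each step, in constant space.
lazySum, strictSum :: Integer
lazySum = foldl (+) 0 [1 .. 1000000]
strictSum = foldl' (+) 0 [1 .. 1000000]

main :: IO ()
main = print strictSum  -- 500000500000
```

The two produce the same answer; the difference is whether the unevaluated "((((...) + 1) + 1) + 1)" chain ever gets built.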
I want to emphasize that I like math, and I like Haskell, and I wish it were all dandelions and body shots. But no one has said anything about this problem that made me feel that using Haskell for a serious project could possibly be classified as sane.
One more thing. Haskell has in fact been used for a serious project. Darcs.
Darcs is a version control system written entirely in Haskell. It's frankly brilliant. It was apparently founded by 16 aircraft engineers, or something. I thought it was so cool I switched from Perforce to darcs after using it only a few times.
The next day I did a merge and darcs hung. I swear, this was, like, the second thing I tried to do. Turns out this is such a typical problem there's a whole mailing list, darcs-conflicts. Certain kinds of conflict cause darcs to go into this explosive combinatorial search. This problem has existed for months, I dunno, maybe years. I bring up this anecdote because the bug really feels like a different kind of bug. A really annoying kind. The kind you would get if you thought of programming as math.
A crucial benefit of being pure: it enables automatic deforestation in the compiler.
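Deforestation refers to compiling away intermediate data structures in pipelines like the following (a generic illustration; whether fusion fires depends on the compiler and optimization flags):

```haskell
-- With list fusion, the intermediate list produced by map need
-- never be allocated; the pipeline can compile to a single loop.
-- Purity makes the rewrite safe: (* 2) has no effects to reorder.
total :: Int
total = sum (map (* 2) [1 .. 100])

main :: IO ()
main = print total  -- 10100
```

In an impure language the compiler cannot in general prove such a transformation preserves behavior.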
@Robert/Anyone:
"...Any introductory text on lambda calculus will probably do..."
What are people using these days to fill this need? The only good intro I have is an excellent but out-of-print one by Greg Michaelson (and there's an updated one in the works, I hear).
Would appreciate other recommendations...