Sunday, September 24, 2006

Disciples of All Nations

One recurring theme on this blog is the need for Schemers to get out and preach a bit more about the virtues of their language.

We got another good example of this not happening on Thursday. Friedman followed up on Gilead's lecture on scope and took it a bit further - over my head in fact. He explained something about how dynamic scoping rules applied to a rather obscure version of the factorial function we were given in an assignment - reproduced here:


(define fact-5
  (((lambda (!)
      (lambda (n)
        ((! !) n)))
    (lambda (!)
      (lambda (n)
        (if (zero? n)
            1
            (* n ((! !) (sub1 n)))))))
   5))


And that actually does return 120 if you put it in your Scheme interpreter, which to me is nothing short of a miracle. But that's because I don't yet fully understand the Y combinator, on which this is based.
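The piece of the trick you can stare at directly, even without the theory, is the self-application. Here's my own annotated sketch of it (my names, not from the assignment):

(define make-fact
  (lambda (!)                          ; ! gets bound to make-fact itself
    (lambda (n)
      (if (zero? n)
          1
          (* n ((! !) (sub1 n)))))))   ; so (! !) rebuilds the factorial

(define fact (make-fact make-fact))    ; pass the function to itself
(fact 5)                               ; => 120

Since ! is bound to make-fact, the call (! !) evaluates (make-fact make-fact) all over again - which is exactly fact - so the "recursive" call happens without any function ever referring to itself by name.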

So what's the Y combinator? Well, you could look it up at the Wikipedia article linked above - but that requires a bit of head-work. A better tutorial (in Scheme!) is here - which shows you how we arrive at it. The overall point of it, though, is to allow for recursive functions in the Untyped Lambda Calculus. The reason this is cool (and magic, actually) is that the Lambda Calculus - since it doesn't have any named functions - doesn't have an obvious way to do recursion. The Y Combinator is the necessary magic. It's an expression in the Lambda Calculus that makes a non-recursive function behave as though it were recursive. So naturally it's absolutely essential to proving that the Lambda Calculus is Turing-complete - that is, that it can do all the computations that a Turing Machine can do. More to the point, it also shows that we don't really need dynamic scope to get recursion.
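To make that concrete, here's the applicative-order version (sometimes called the Z combinator) written out in Scheme - my own sketch, not quoted from the tutorial:

(define Y
  (lambda (f)
    ((lambda (x) (f (lambda (n) ((x x) n))))
     (lambda (x) (f (lambda (n) ((x x) n)))))))

(define fact
  (Y (lambda (self)
       (lambda (n)
         (if (zero? n)
             1
             (* n (self (sub1 n))))))))

(fact 5) ; => 120

Note the eta-expansion - wrapping (x x) up as (lambda (n) ((x x) n)) - which is what keeps a strict language like Scheme from looping forever before the function is ever called.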

I'll let anyone who's interested read the tutorial. It's one of those concepts that starts to make sense slowly. You can't really get it just by mechanically working everything out on your own - it's best to find someone or something to explain it to you conceptually, and then just let it sink in. The first time you go through it you'll probably feel like you "mostly" got it, but might not have confidence in being able to apply it. As you think about it over the course of a couple of days, it starts to make more concrete sense. Eventually you get to the point I'm at now, where you have an intuitive grasp of why it's absolutely essential but still wouldn't have a whole lot of confidence deriving anything with it in a formal proof.

Well, Friedman seemed to think that scope had something to do with this - which, of course, it does. Because what we're trying to do is define recursion without dynamic scope, and that would seem to be impossible (since we're not allowed to clobber variables, etc.). I'm not going to go into any detail about this until I've fully got my head around it (which should be soon, but not today) - and it isn't really the point of this post anyway.

The point is that Friedman made an offhand comment when he came in the room about Object Oriented Programming, which he hates. He first asked the class if anyone thought dynamic scope was a good idea. And no one raised their hands, so he was satisfied that Gilead had done a good job spreading the Gospel. Then he said something like "Good, because dynamic scope is bad. It's almost as bad as Object Oriented Programming." Which he naturally did to get a rise - since that's an obvious stab at Java - the "language of the future." A lot of people chuckled nervously. And what else could they do? On the one hand, Friedman has just told them that Java (which most of them probably program in daily) is garbage, and they think they know for a fact that it's not. Java is what made programming clear and easy to them, after all - what enabled them to quickly get applications running without spending hours debugging, like they used to have to do in C/C++. So this statement would seem to contradict everyday reality. But on the other hand, he's Friedman, and he's smarter than them, and they're simply too intimidated to challenge it. So you just give a nervous laugh instead and hope the barking dog goes away and finds something else to play with.

I meant to say something about this, but I didn't remember it until just now. I'm reading this paper on the Y Combinator (what else?), and it has this line:


fun Y f x = f (Y f) x

This definition is frequently given to students on undergraduate programming courses, and some even deal with the lambda-calculus version. There does not, however, appear to be any evidence that this is widely known to computing professionals.
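For what it's worth, that ML one-liner comes out in Scheme as something like the following - my transcription, and note that it quietly cheats by letting define do the recursion, which the pure lambda-calculus version can't:

(define (Y f)
  (lambda (x) ((f (Y f)) x)))   ; relies on define's own recursion

Plug in the same non-recursive factorial as above and (fact 5) still returns 120.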


And here, again, we have a nice demonstration of The Problem.

Friedman hates Object-Oriented Programming because he thinks it's an undisciplined way to violate lexical scoping rules. You store variables in an object, pass the object to a function, and suddenly scope goes out the window because, of course, the variables in the object really belong in the global namespace and not in this function. But of course, global variables are visible from within functions, right? Yes, true, but the problem here is that they can be modified through the object - and there's nothing to stop several different function calls from being passed that same object at the same point in the code. So you do indeed quickly lose track of which variables can be modified from where.
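To illustrate what he means - this is my own toy example, not one of Friedman's - picture an "object" that two unrelated call sites share:

(define account (vector 100))          ; an "object" with one field: the balance

(define (withdraw! obj amt)            ; mutates state through the object
  (vector-set! obj 0 (- (vector-ref obj 0) amt)))

(define (monthly-fee! obj)             ; a second, unrelated call site
  (vector-set! obj 0 (- (vector-ref obj 0) 5)))

(withdraw! account 30)
(monthly-fee! account)
(vector-ref account 0)                 ; => 65 - but which call changed what?

Lexically, neither function can see the other's variables - yet each can silently rewrite state the other depends on, which is exactly the losing-track problem.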

The counterargument, of course, is that OOP buys you something by keeping all these variables separate. That is, it's a complicated naming scheme. It captures some generalities about what you're doing in the code by allowing you to call similar variables by similar names without succumbing to your human tendency to mix and match them by accident. And of course Java builds all kinds of things on top of the code you write yourself to cut down on the potential for you to screw up and clobber something you didn't mean to. So in a practical sense, I don't buy Friedman's argument. Yes, he's right that OOP allows you to violate scope rules willy-nilly, but OOP (well, Java, anyway) also has a lot of stuff added in to keep you from hurting yourself. And whether Friedman likes it or not, the naming convention that it introduces (it doesn't encourage you to think of this as a naming convention - it wants you to think that something magic is going on where "associated data and operations are stored together," but that's bollocks and all it actually is is an enforced naming convention) is quite useful - for people. So in a practical sense, Friedman's wrong.
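One way to see the naming-convention point - my own illustration, nothing Friedman showed us - is that "associated data and operations stored together" falls out of plain closures, no OO machinery required:

(define (make-point x y)               ; "constructor"
  (lambda (msg)                        ; "method dispatch" is just name lookup
    (cond ((eq? msg 'x) x)
          ((eq? msg 'y) y)
          (else (error "unknown message" msg)))))

(define p (make-point 3 4))
(p 'x)                                 ; => 3
(p 'y)                                 ; => 4

If a closure over two variables gets you the same effect, then the object part really is just a disciplined way of naming and grouping things - useful, but not magic.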

But in principle he's right - and the quoted line illustrates the problem nicely. The problem is that the Y Combinator is something that students just learn for the test and promptly forget once they're done with it. They go off to industry and Java does all their work for them, and this is bad for everyone because (a) they get paid less than they'd potentially be worth with proper training and (b) software gets developed more slowly - because no one understands how to derive new concepts from old concepts; they just know how to throw old concepts at new problems, and this is because they have no abstract-level understanding of what they're doing when they're programming. (And in reality, not only does software get developed more slowly, but the software that does get developed in Java is also crappy and slow.)

I read an interesting post that predicts that industry will eventually have to switch to purely functional programming languages. Of course it's meant to be partly tongue-in-cheek (check the footnote at the bottom if it doesn't seem so) - but there's a grain of seriousness to it too. Functional programming languages are far more efficient at the things that matter - transforming programs on a human-understandable level, programming at the level of stable concepts, deriving (as opposed to discovering) more efficient ways to do things, etc. True, they're not as implementation-efficient (C/C++ is still the king here - aptly described as portable assembly language), but they more than make up for that in being human-efficient and programming-efficient (and anyway they're usually easy to compile well, so the machine can make up for most of the lost efficiency in the end).

I do believe there is a niche for this stuff, and believe, in fact, that it is entirely possible that functional programming could take over. It doesn't seem very likely, though, and that's because, as Friedman repeatedly says, the message isn't getting out.

Well, that's the part I don't understand. Why isn't the message getting out? What's necessary to get it out? How do functional programmers get a foothold in industry?

I think one thing that will have to change is that the Object Oriented fad will need to die out. Friedman's right that you don't really need it. A properly-trained programmer who knows about things like the Y Combinator also knows how to deal with scope barriers in principled ways without having to resort to cheesy glorified naming conventions pretending to be programming evolution. But it seems to be taking industry a long time to figure that out.

I think in a lot of ways OOP is the Optimality Theory of programming. It organizes programs on a level that's actually probably higher than programming. Sort of bypasses everything that actually matters and states obvious truths as though they were profound insights. Organizes but doesn't explain, etc. But that's a rant for another day.

It's interesting to me that lots of people seem to think we're overdue for a language change. C/C++ and Java - the giants of the 90s (and 80s, in C's case) - seem to have overstayed their welcomes. People are well ready to get rid of them. And so we're at a point where a functional language - like Haskell or Scheme - could start to take over. I'm not holding my breath (I largely buy the case for Ruby being the Next Big Thing, though somehow I think something will trip it up at the last minute; Ruby will definitely be the new Perl, though, since Perl is (thankfully) on its way out), but it would be really nice if it happened.

It's just that - to beat the dead horse once again - I really don't think it's going to happen unless Friedman et al. change their teaching style. They're going to have to start demonstrating that Scheme is useful beyond the classroom - and so far they haven't provided much evidence of that. Rather than having us reinvent the wheel every week (by, e.g., re-deriving addition, subtraction and multiplication for Church Numerals, or trying to derive the Y Combinator on our own (!!!)), a better idea might be to just outright show us the cool stuff and spend more time having us apply it to, say, transforming Scheme programs into executable C code. That would go a long way toward demonstrating that Scheme is a skill you can take to work with you, not just something cool to play with as a grad student.

(Linked from the above post about Ruby - here's a very sensible look at what features the programming language of the future will have. It doesn't translate to automatic success for Haskell (or Scheme!!!), but it does show that they're at least contenders, something no one would have believed even 3 years ago.)
