Saturday, September 30, 2006

Principles Come First

In Noah's post about Ludwig von Mises' birthday, he links to an interesting biography of Mises on the Mises Institute's website.

I know a little, but not much, about the details of Ludwig von Mises' life. I have never read a full biography of him, though several are available. What has always mattered to me are his arguments themselves.

This is an attitude that I apply to all thinkers, and I can remember exactly when and why I adopted it. It had to do with a newsgroup I subscribed to (in the early days of the mass internet in the 90s) called Digital Liberty. I don't know what became of that group, but it's possible that it morphed into something like this. The founding idea for the group was to develop a government-independent e-currency, to replace the US dollar and the US government's control over it. Unfortunately, most of the discussion on the board centered around whether or not Ayn Rand was worth reading. The founder of the group was a self-described "lifelong admirer," but a lot of others were concerned that she was a cult figure. I myself had concerns on that account.

I had just been introduced to Rand the year before by a girlfriend (I read Atlas Shrugged in virtually one sitting; it remains one of my favorite novels) and had just finished reading a book on Objectivist philosophy prior to joining Digital Liberty. At that point in my life I was something of a literary snob, and although I privately enjoyed science fiction very much (and I still consider Atlas Shrugged to be science fiction), I was going through a phase where I felt I had to be "above" it. Rand's novels left a lot to be desired on several fronts - but mainly in terms of characters. It is telling that John Galt hardly makes an appearance in the book of which he is supposed to be the hero - the book, indeed, that begins with the line "Who is John Galt?" That is because he simply isn't human. There is nothing for him to do but stand there and spout philosophy, and that's not the kind of person who can bear the weight of a novel.

Atlas Shrugged is one of my favorite novels, but it cannot be appreciated in the way that canonical English literature is appreciated. What I found, however, is that when I said this to other Ayn Rand admirers, they seemed to want to hold me at arm's length. It became rapidly apparent to me, as it becomes to everyone who dabbles in Rand, that she was a cult figure. There is no room for criticism or discussion, which is unfortunate, because I think there is a lot to be said about the woman, her life and her works.

So I was initially interested in this discussion. But then the founder of the group weighed in with an opinion that was eminently sensible. He said that his opinion regarding Rand was that "an idea is not responsible for the people who hold it." That's an apt phrase, and I've repeated it verbatim in arguments ever since. An idea is not responsible for the people who hold it. Indeed. He went on to say that Rand was a megalomaniac and a hypocrite, but that these facts were irrelevant to whether or not her ideas were sound.

Of course, I'd heard this opinion before, but this was the first time I'd heard it stated so clearly, and I really took notice. It seems to me that every rational person must adopt this principle. It is, after all, the reason the ad hominem is a fallacy - ideas should be judged on their own merits and not on the characteristics of their proponents.

In this spirit, I take issue with the following paragraph from the Mises biography linked above:

Mises came to a decision, which he pursued for the rest of his career in Austria, not to reveal such corruption on the part of his enemies, and to confine himself to rebutting fallacious doctrine without revealing their sources. But in taking this noble and self-abnegating position, by acting as if his opponents were all worthy men and objective scholars, it might be argued that Mises was legitimating them and granting them far higher stature in the public debate than they deserved. Perhaps, if the public had been informed of the corruption that almost always accompanies government intervention, the activities of the statists and inflationists might have been desanctified, and Mises's heroic and lifelong struggle against statism might have been more successful. In short, perhaps a one-two punch was needed: refuting the economic fallacies of Mises's statist enemies, and also showing the public their self-interested stake in government privilege.

I understand the thinking here, but I cannot agree with it. Quite the contrary, I think Mises was absolutely right to take the course he did, and although it may not have benefitted him much in his lifetime, I think it is one of the things that has assured the endurance of his works beyond his death. What the article here advocates is essentially the wanton use of ad hominems. That might have been somewhat effective at eliminating corruption from the bank at the time (or it might have landed Mises in jail or gotten him murdered, who knows?), but I don't think it would have made Human Action the classic we know it became.

Mises had a bigger vision. He knew that the enemy he was fighting was not these corrupt bean-counters specifically. That kind of thing happens in any economy. As long as there are laws, there will be criminals, and without laws, there would still be criminal behavior. Mises understood, as the author of this biography apparently does not, that effective argument proceeds from principles. We advocate Capitalism because it is the only moral economic and political system we know. It is true that it is also highly effective, but that is a result of its moral rectitude and not an independent justification for it. If a fascist system could be shown to be even more effective than market capitalism at reducing corruption, it would still be wrong to adopt the fascist system.

And so Mises chose to attack his enemies' ideas - because this is the level that really matters. This is the lasting level, the foundational level. You do not build complex things like national economies and societies on a mass of recommended policy or personal criticism. Such a thing quickly turns into an unmanageable mess. What you do is start from principle and build your society on a firm foundation. We cannot anticipate what contingencies will arise, but we know that if our principles are straight we have at least a fighting chance of knowing what to do. If Mises had been remembered as the man who cleaned up the Austrian banking system, it's doubtful that people would take his ideas as seriously as they do. So I, for one, am glad that Mises chose to fight the battle where it truly matters. For whether or not these people were motivated to write what they did out of desire for personal gain, the ideas they wrote were still (superficially) persuasive to a generation of readers. Had Mises simply exposed them, perhaps their individual papers would not have been read, but others would have come along and constructed the same deceptive tripe. By attacking the ideas, Mises generated arguments that apply not to that particular situation alone, but to all possible such situations.

The more I read about Mises, the more I like him. I cannot, unfortunately, say the same for the staff of the website who style themselves as his heirs. Some of them, I think, need to read a bit more carefully.

[Update - The author of the article in question was none other than Murray Rothbard, as it turns out, and not "the staff of the website that style themselves as [Mises'] heirs." No matter - I stand by my judgment; I simply redirect it to Rothbard. Mises was right not to expose the corruption; Rothbard is wrong to suggest he should have.]

Gay Lobby Bares its Ass

I only realized late last night that the premiere of season two of the new Doctor Who series was yesterday. Preferring to sleep, I assumed they would rerun it, but it turns out I just missed it. Ah well. Despite the obviously improved writing and special effects, I have mixed feelings about the new show anyway. On the plus side, I thought Christopher Eccleston was ace as Doctor Nine, possibly the best of all ten (though I'm also a fan of numbers Two and Four) - but of course he's gone now. In the minus column - the politically correct social commentary was more than a little bit irritating.

In a way, it's that that I wanted to talk about. Following Doctor Who-related links last night, I discovered that the actor who played Captain Jack - an "openly" (they never directly say so, but the hints are so strong they're clearly not hiding it) bisexual character - is himself gay. At the bottom of his Wiki page there's a link to an interview he did with AfterElton, in which he says the following:

JB: ... The whole thing about gay marriage. I particularly don't like to associate a civil partnership, which is what I call it, with a marriage. Because--and I have arguments with people in the gay community about this a lot--why do we want a word that is synonymous with a religious ideal [belonging to] a group of people that hates us? Why do we want to be part of that? Why do we want to have that word attached to us, why can't we create our own word?

And you know what else has happened, the conservatives have turned it into a political battle, so every time they want something done, they just say “Oh, gay people will be married, and marriage is a sacred thing”. Well then--excuse my French-- let's f*** it, let's just get rid of the word. Let us use ‘partnerships', it's still the same thing. The thing that we need the benefit from is--

AE: The legal rights.
JB: The legal side of it. And also the fact that, for people who don't want to accept homosexuality, it shoves it down their throat, to coin a phrase--sorry, I'm on my soapbox here-- it forces them to accept us, and to respect us.

Which sort of puts me in a bind, given things I've said about gay marriage. On the one hand, I agree with him about the need to first go through "civil unions" before outright calling it marriage. Not only that, but we agree on the reasoning: (1) there are cultural issues that haven't been settled and (2) calling it "marriage" gives conservatives an excuse to make a bigger issue out of it than it really needs to be, thus delaying the process anyway.

On the other hand, he then comes right out and says something that I've long suspected - nay known - to be part of the gay lobby's agenda, and that's that the campaign for gay marriage is really just a proxy for forcing the public to accept them. And to that I have a giant objection.

In fact, the real reason I'm against recognizing gay marriage is because I do not think the government should be in the business of recognizing anyone's marriages - gays, straights, polygamists, what have you. Marriage ought to be a private affair - between the consenting adults involved, their church if they have one, and their lawyers (for drawing up wills, medical releases, etc.). There's no need for the government to sanction this cultural institution because it can and will survive on its own, and maintaining cultural institutions is anyway outside of the government's job description.

One of the reasons I have this opinion is because I'm also a strong believer in what I sometimes think of as the "Give unto Caesar" principle of government - which is that all government should really do is set up a framework in which society can function. It exists to defend us from external threat, enforce the rule of law at home, and THAT IS ALL. Other institutions that people think they need - like religions or corporations or whatever else - they are free to form. These things should not be supported by the government. And the main reason they should not is because people should be left maximally free to work out their own affairs. Naturally life in society will involve prohibiting certain kinds of behavior (e.g. theft, murder) in the interest of protecting citizens' rights. But aside from these well-defined cases, people should be free to pursue their own goals in the ways they see fit.

Whatever I personally think about bigotry, I am not free to require that people abandon it. I can try to persuade them to (and I do, though since I focus largely on eliminating the bigotry I myself experience from minority groups, it isn't always perceived that way), but I cannot force it. And that's why I think this motive for pushing gay marriage on the public is an abomination. Our system of laws is not meant to enforce notions of which lifestyle choices are proper and which are not (but see footnote at end). Demanding government recognition of gay marriage is precisely that: asking the government to declare, on everyone's behalf, its approval of your lifestyle, whether or not the public in general agrees. This is a disgusting and intolerant tactic, and I think it should be resisted.

So while on the one hand I want to salute Barrowman for his support of civil unions over marriage for gays, on the other I think at least some of his motives here are a bit bigoted. As further illustration, he says this earlier in the interview:

AE: [laughing] It's OK. I was just asking, in your own experience, whether you had known men that were bisexual.
JB: Oh, right, yeah. I don't like to label. I think of myself as a man. I am a man, who, if you have to put me in a category, I am a man who likes men, I would be a gay man.

I do believe that there are people who can like men and women, because, although I wouldn't choose to sleep with a woman, I still find women attractive. And when I say ‘choose to sleep with a woman', it's not a choice that I have made [to be gay], it's just not in me to sleep with women. I wasn't created this way to sleep with women.

And with men who like both men and women, that's fine, but there's a lot of confusion goes on in that instance, because men sometimes do use bisexuality as an excuse not to admit to their families, their friends, and publicly that they really are gay.

In other words, despite the obligatory leading disclaimer, he doesn't really believe in bisexuals. Which is fine with me personally because I'm skeptical myself. But the point is surely that anyone who advocates civil unions as a way of forcing an unwilling public to accept his lifestyle should not be making judgements about the lifestyles of others. There is, after all, no shortage of people who believe that there is also no such thing as true homosexuality, that it is just a mental illness, or a spiritual corruption, or even just profound confusion attributable to weakness of character. Barrowman presumably rejects these views based on his own experience. So who is he to pontificate about the motivations of bisexuals?

I should think that someone like him would rather regard this as an empirical question. If homosexuality ever becomes completely acceptable, then we'll see whether the percentage of the population that describes itself as "bisexual" drops. But until then, all we can really do is speculate.

And this leads to my final bone to pick with Barrowman - and that's this repeated meme of an opinion about America.

AE: So, I suppose when I look at what's going on in the US, and I just despair in terms of some of the attitudes and the laws there, you hope it won't take [forever to change]. Because sometimes I think: oh God, it's going to be like fifty years before they have gay marriage across the States...
JB: Well I think, I think you're right. It's the fear tactics that they have in America. That's what's causing it to be so backwards. Everything is based on fear: what you don't understand, you fear. Keep people fearful of things, and they will listen to you, they will follow.

And here:

AE: Yeah. It's totally bizarre. So, in terms of Doctor Who, it really was the case that there was no pressure from the BBC, and no restrictions from them. John Barrowman kissing Chris Eccleston was fine with them?
JB: Well that was there... that was in the script.

The BBC, remember, are a public company, they are not a privately owned company. In Britain, we pay a license fee in order to have television. And that fee goes to the BBC. Therefore, the BBC must produce programming for the wide majority of the public. So they have to include everybody. I mean, we have gay programming all the time on the BBC. And it's watched by a mainstream audience.

The BBC, like all the other channels in the UK, is supportive of gay and lesbian programming. [Gay and lesbian characters] get introduced in programmes...and also, to be honest with you, if [the BBC] weren't [supportive]--it's the law now. It's part of the European law. If our military fire someone out of the military because they're gay, they get sued, and the gay people win.

And here:

AE: Brokeback Mountain.
JB: Brokeback Mountain, classic example of it. That's why I say that I'd like to think that we are progressing, but to me, the US seems to be going backwards in that aspect as opposed to forwards.

For this last bit, I should make clear that they're talking about Brokeback Mountain's couching all this in terms of bisexuality, not about the fact that it didn't win at the Oscars.

But on the whole, I get really fed up with this crap. America is no less tolerant of gays than Europe. The only difference between America and Europe is that we're actually allowing the nation as a whole to discuss the issue rather than simply imposing the opinion the gay lobby wants. That's a crucially important point - because "acceptance" is ultimately something that people have to decide on, not their governments. You can legalize gay marriage all you want, but if the general population resents it, well, then they resent it. Surely what's on the ground is more important (to the discussion of acceptance, I mean. Obviously the property rights secured to deserving individuals by imposed gay marriage are a desirable outcome).

The thing that's really galling about this, though, is that America isn't actually falling behind on this anyway. Gay marriage is, as far as I know, legal in only five countries: The Netherlands, Belgium, Spain, Canada, and the US (in Massachusetts). Civil unions have existed in Denmark, the Netherlands, Sweden and Norway since the (very) late 1980s. So all of the countries on that list except the Netherlands are actually "behind" in some sense on it. In Canada, same-sex marriage was finally put to a vote in Parliament last year, but it was originally imposed by court decree (in a case in New Brunswick). The text of the bill legalizing it makes reference to this decision, in fact. The only way to overrule the court would have been to use the notwithstanding clause, which is something of a taboo (and has, in fact, never been used). So there's a real sense in which same-sex marriage in Canada passed only under the gun. As for Spain, it only passed after great effort, as it was originally rejected by the Senate and had to be put to a re-vote.

It should be noted that civil unions appeared in the United States long before they showed up in the UK (Barrowman is a dual UK/US citizen) or Canada or Spain. It isn't just Vermont, by the way. Seven other states recognize civil partnerships, and that's not counting Massachusetts, where gay marriage is fully legal.

The Canadian public opposes gay marriage by roughly the same percentage that the US public does. And the list goes on and on. Despite this image of the US as a hugely intolerant nation on this issue, it's actually legally near the forefront. True, there is a lot of publicly vocal opposition in the US, but I don't think that's an indication of a cultural defect. Quite the contrary - it merely means that the US public is having an actual dialogue on the issue rather than being bullied into tacit acceptance by their media or their governments.

The bit about Brokeback Mountain is particularly annoying, though, because the character that Barrowman plays in Doctor Who, which he seems to think is some kind of evidence for great tolerance in the UK, fits precisely the description he gives of Brokeback Mountain. Captain Jack's sexuality is never directly mentioned. There's one scene where he kisses the Doctor, but it's played off as a joke from a happy-go-lucky character and is anyway just a quick peck on the mouth. There's certainly no sexual attraction implied. And as for heterosexual relationships, Captain Jack actively pursues some women (including Rose Tyler, one of the leads) over the course of his appearances on the series. Meanwhile, four years before this ambiguous (and anyway bisexual) character appeared on Doctor Who, Buffy the Vampire Slayer introduced openly gay characters with no comment and managed to do better in the ratings. There's simply no case to be made here that US television and/or pop culture is any less gay-friendly than that of the UK.

As I've said, I do not personally have a problem with gay marriage as I believe that this is an issue for individuals and not governments. I fully support the US government removing all legalities pertaining to traditional marriages from the books and leaving everything up to individual contracts.

And that is why I get so impatient with people like Barrowman (whose opinions I take to represent the majority of opinion in the gay community). This is affirmative action all over again - we're just replacing one form of injustice with another. Thanks, but I prefer to actually learn from the mistakes of the past and try to get it right this time. Socking each other in the eye endlessly is not my kind of politics.

[Footnote - Before anyone drags out the tired objection - pedophilia is not covered by this because children are not old enough to make lifestyle decisions for themselves - i.e. cannot give consent. I realize that boundaries are arbitrary here, but most people agree on some time between the ages of 16 and 19, and whatever the agreed-on number in your locality is, it should be vigorously enforced. But it should be enforced as a protection of the child's rights and not as a prohibition of the lifestyle. To those who think there is no distinction - there is. Computer-generated pornographic images do not violate the rights of actual children and should, therefore, be legal. It is only the actual harming of children that the law has an interest in prohibiting. And yes, I draw a distinction between "prohibiting" a thing and "taking measures to prevent it."]

Friday, September 29, 2006

Alles gute zum Geburtstag

Noah tells me via email that today is Ludwig von Mises' birthday. And in fact, the link and title are both shamelessly stolen from the same email. (Let it never be said I don't cite my sources!)

The link goes into great detail about Mises' contributions, so I will let it do most of my speaking for me. For my part, I'll add merely that the world (the entire world) owes this man a great debt for keeping Capitalism on the table in political discussions. The debate between Capitalism and Socialism is not academic. It is, quite literally, the struggle between success and failure - between a human system of wealth, rights, and individual dignity, where there is plenty for all, and a neo-tribalist system of misery, where equality is purchased at the cost of ever-growing poverty.

Socialism, Mises demonstrated, in his greatest original contribution to economic thought, not only abolishes the incentive of profit and loss and the freedom of competition along with private ownership of the means of production, but makes economic calculation, economic coordination, and economic planning impossible, and therefore results in chaos. For socialism means the abolition of the price system and the intellectual division of labor; it means the concentration and centralization of all decision-making in the hands of one agency: the Central Planning Board, or the Supreme Dictator.

Indeed. Socialism destroys crucial economic information. In a free system, prices really do measure worth. They are the output of a massive amount of small negotiations - of businesses trying to make money and consumers trying to save it - of businesses trying to offer goods at prices consumers are willing to pay, and consumers trying to obtain what they need and want. The negotiated price tells a business how much demand there is for a thing, and therefore how much to supply. The negotiated price tells the consumer how valuable a thing is, and how much value he will need to produce to justify his ownership of it. Having destroyed this crucial information, Socialism then expects a handful of individuals to somehow magically plan the economy anyway - as though the task were even possible before the tampering.
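This price-discovery process can even be sketched in a few lines of code. The demand and supply curves below are made-up numbers, purely for illustration: the point is that the "negotiated" price emerges from many small adjustments, without any planner ever needing to see the curves themselves.

```python
# Toy sketch (hypothetical curves, not from any real data): the price is
# nudged up whenever buyers want more than sellers offer, and down otherwise.
# No central agency ever observes the demand or supply functions directly.

def demand(p):
    """Units consumers will buy at price p (made-up linear curve)."""
    return max(0.0, 100.0 - 2.0 * p)

def supply(p):
    """Units producers will offer at price p (made-up linear curve)."""
    return max(0.0, 4.0 * p - 20.0)

def negotiate(p=1.0, step=0.01, rounds=10_000):
    """Repeated small adjustments: raise the price when demand exceeds
    supply, lower it when supply exceeds demand."""
    for _ in range(rounds):
        excess = demand(p) - supply(p)
        p += step * excess
    return p

p_star = negotiate()
print(round(p_star, 2))          # converges near 20.0, where demand == supply
print(round(demand(p_star), 1))  # about 60 units cleared at that price
```

The converged price (20 in this toy setup) is exactly the "crucial information" the paragraph above describes: it summarizes both curves in a single number that every buyer and seller can act on. Abolish the price, and that summary is gone.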

It's astoundingly stupid. The whole Socialist project - from the amateurish philosophy to the miserable implementation - is astoundingly stupid. And yet for reasons I will never grasp, it continues to find followers after over a century of nothing but failure.

Von Mises did us all a favor by stating these flaws clearly and providing the intellectual basis for something better. Happy birthday!!!

Pleased with Opera

Last week I jettisoned Firefox in favor of Opera. This didn't really have anything to do with Opera per se. I was mostly just getting tired of using two different browsers all the time. See, I have a Mac laptop and Linux (Fedora Core 5) desktop - so it was Camino on the Mac and Firefox on the desktop. Oh, yeah, and Firefox sucks.

So I gave Opera a try, and after my first week as an Opera user I'm so impressed I thought I'd take some time to sing its praises.

I mean - WOW! It's amazing! My main bias here, just to lay my cards on the table, is shortcut keys. I'm definitely not a drag-n-click type - much prefer to stick with the keyboard. I'm also a Vim man, which tells anyone in the know a lot about HOW I like my shortcut keys to function: namely none of this irritating Emacs crap where you have to constantly mash the CTRL key to get anything done. One touch - that's the Vim way. And it's also the Opera way. Rather than ALT-TAB for sifting through tabs, it's just 1 and 2. You can use SHIFT-arrow to gain control over navigating to links, but a simple q or a will get you there in proper order too. Best of all, though, is the fact that each tab caches old visits, and there are control keys for this too! z takes you back and x takes you forward. VERY nice to use. (And really, when you think about it, an obvious design choice. For the most part on the web you're reading, not writing - so it's OK not to have to switch from "edit" mode.)

Other cool features include the fact that it reloads all open tabs after it shuts down. So if there's a page you need for a couple of days but know you won't need in the future, there's no reason to bookmark it - you can just leave it open in a tab. Better still, Opera remembers pages you visited in previous branches, so if you were on a cool page and somehow got out of that link chain, there's a "trash" icon in the upper right corner that keeps it for you.

On the lighter side, people write cool widgets for it that you can download. I have, for example, Chess and Sudoku games I grabbed from there.

But I think the thing that really makes it is the look and feel. Firefox's displays are almost amateurish, and anyway it handles layout tags in unintuitive ways. Opera's display is always consistent and ... well, nice.

Of course, all these product endorsements are subjective. One of the guys who manages a blog I often visit was talking about how he LIKES the fact that IE doesn't have tabs! (!!!) I can't imagine, personally (and it was pointed out that the newest version of IE does support tabbed browsing now anyway), but to each his own.

Still, if you're not satisfied with your current browser and haven't had the pleasure of using Opera, I highly recommend giving it a try.

Thursday, September 28, 2006

When the Bough Breaks...

Yesterday in Philosophical Foundations of Cognitive Science we talked some about Andy Clark's idea that the mind extends beyond the individual and into his surroundings. It goes something like this: we are accustomed to using tools to complete tasks - even mental tasks. When we have some complicated math to do, we tend to reach for pencil and paper. Clark argues that the pencil and paper are part and parcel of the "mind," part of the machinery that gets the calculation done. Thus, the distinction between the individual and his surroundings is blurred.

I don't necessarily disagree with this. I have a lot of sympathy for the Functionalist notion that "mind" isn't limited to instantiation in neurons. So in a meaningful sense I'm committed to the idea that minds can be made out of pencil and paper too. That said, I'm not an uncritical fan of Clark's argument here, and it's worth going through why not.

First, I dislike his use of the words "scaffolding" and "props" to steer readers away from other words with less convenient associations, like "tools." There can be little doubt that pencil and pad are tools. They are external objects used to aid in a specific task for which they are suited and then discarded. By any meaningful definition of "tool," they qualify. However, this word has other associations he doesn't like. For example, a "tool" implies a user distinct from the tool. Whereas we think of "props" as being part of the whole of the play, and whereas scaffolding blends nicely into the construction it aids, tools are generally under the control of an agent, definitely not "part" of the whole. Scaffolding and props are useful simply by being present; a tool must be actively used. And the fact of its being "used" implies a hierarchical relationship between the agent and the tool.

Now, Clark would no doubt respond that he's just being precise. Sure, pen and paper are tools, granted - but they're a tool of a different kind because of the relationship they bear to the nominal "user." His choice of words is meant to draw attention to this difference. And that's fair enough as far as it goes. It's just that I am not really convinced that they are "tools of a different kind." To illustrate, let's consider a screwdriver, a prototypical "tool" if ever there was one. A screwdriver is meant to aid a person in turning a screw. And, in fact, in many (most?) cases it is indispensable to the task. A person can turn a screw without a screwdriver, of course, but usually not with a very high degree of success. And yet, it would strike us as unusual to think of a screwdriver as an extension of an arm. But why? Surely there is no bigger leap being made here than saying a pencil and paper are part of the mind? More to the point, the screwdriver is used in exactly the same sense that a pad and pencil are. It aids the user in a specific task to which it is suited and is then discarded. The relevant features seem to be the same. We seem to need to stretch the words "prop" and "scaffolding" a bit to get the desired effect, but "tool" simply applies. Now, of course analogies are useful for pumping people's intuitions, so there's nothing necessarily inappropriate about commandeering these words to get a point across. But one gets the feeling that the argument rests on the words rather than simply using them to challenge conventional thinking and make a point.

Which leads to the second objection - namely that I think there's a sense in which Clark isn't appreciating what his analogy is really useful for - much the same way that I believe Searle doesn't fully understand the implications of his own Chinese Room thought experiment.

Why can Clark "get away" with calling a pencil and paper "scaffolding" when we wouldn't let him do the same with a screwdriver? Well, he wouldn't be able to "get away with it" for Searle. Searle would simply reply that a pencil is not made of neurons, ergo it isn't any part of a mind - it's a tool, end of story. Which is interesting because it means that the only people who are susceptible to Clark's analogy are people already sympathetic to Functionalism. You have to already be committed to the idea of mind as substance-independent system to think this analogy is worth talking about at all. (To be fair, it's possible for people in Clark's camp to buy that minds have to be made primarily out of neurons - i.e. I'm stretching the rules a bit by saying that only those sympathetic with Functionalism will bite. In fact, someone could believe that mind extends beyond the brain, but that the brain part must necessarily be made of neurons, that there are appropriate building materials for appropriate modules, etc. A mind is a neuronal processor plus some extensions, or whatever. I personally think this opinion would be difficult to maintain (because writing instruments are clearly the sorts of things that can be instantiated in many materials - stone tablets, keyboards and screens, quill and parchment, etc.), but it wouldn't be impossible to do so.) And yet, there's something in us that rebels against it all the same.

What I'm getting at is this: Clark's analogy is useful more for pumping the intuitions of Functionalists (and similar) about their beliefs than it is for actually introducing us to a novel way of thinking about the world.

As to that last bit, I really don't think that Clark's "observation" that minds extend beyond the individual and into the environment is all that helpful. Sure, it gives perspective, but there's something artificial about it. Namely, we can connect anything in the universe to anything else in some finite number of analogy applications. So right, granted, the boundaries that we draw are ultimately arbitrary in some sense. A screwdriver can be thought of as an extension of an arm if we push it, a pad can be thought of as a temporary memory extension, etc. But take this too far and you lose the concept you started with.

As an example, consider the distinctions that Environmentalists like to draw between the "natural" and the "artificial." Although beavers build dams and this is "natural," when a human builds a glass house it's "artificial." Why? Well, there is no clear answer, and these boundaries will be different for different people. Pretty much everyone agrees that tools made of sticks and stones are "natural." Once we add metal, though, intuitions start to diverge. For some there's no problem, as long as there's no large-scale use of it. Swords are fine, highrises not so much. For others, even highrises are fine: that's just the product of an animal instinct for shelter, etc. For still others, even cars and space shuttles might be "natural" in the sense that it is in human nature to make and use tools, etc. But once we've reached this point we get the feeling that the distinction has ceased to exist. If even space shuttles are "natural," then it seems people are willing to accept anything made of atoms according to the laws of physics as "natural." Which covers everything, and the concept ceases to make any distinctions.

There's an element of that going on in Clark's analogy, I think. If we accept anything that makes an impression on a mind as a part of that mind, we start to run out of use for the concept "outside world." To illustrate this intuition a bit - let's say that Clark had chosen a calculator instead of a pad and pencil. Well, one gets the feeling this wouldn't work as well. With a calculator, the feeling is more that we've farmed out the thinking to something else. Of course, a calculator is still a tool, but its functioning is largely hidden. We hit the buttons in the prescribed way and an answer comes back, but we don't really know how. There isn't an obvious link between the buttons and the answer - save that it's doing the math we want. Hitting the buttons is more like communicating with the device, which then communicates back through its output screen. And yet, I could easily describe a calculator as either a "tool" or a "prop." (If I stretch it I might even get "scaffolding" in the sense that it helps me get access to the parts of my mind I need fast access to, or something.) Well, this gets even worse if you're like me and your computer is always on. When I need some quick calculating done, I usually just flick on the screen and open GNUPlot or bc. It's even harder to stretch the analogy here: computers are getting ever closer to being "other minds." And of course we reach the fixed point when we start to think of other people as extensions of our minds. If I need to know a fact about geology, I could look it up, but if there's a geologist in the room with me I'll probably just ask him instead. Is he part of my mind? We are all together coo-coo-ca-choo?

What I'm illustrating, of course, is that like the "natural/artificial" distinction, the boundary between "my mind" and "outside world" can be played with to an arbitrary degree - the only reason we don't is because if we take it too far we obliterate the distinction. The world has sort of settled on a nice intuitive definition of the boundary: whatever I perceive as "internal" is internal - and mind is all internal. Pen and paper are tools because they are outside me. They aid my mind rather than being part of it. There is, as I said, no principled reason (for a Functionalist, anyway) not to adopt Clark's concept of extended mind. The boundary is arbitrary. But because it's arbitrary we have to be careful not to take things too far - in order to preserve the concept (provided we still think it's useful - which we do at present). Things which require close interaction between mind and object (like an abacus or pencil and paper) do better, of course, than things like calculators which simply return an answer without much interaction. But coming up with a principled distinction seems hard, and if Clark can stretch "mind" to pencil and paper, it's only a matter of time before someone else wants to add in the calculator, the computer, or even (gasp!) the other person.

Since there's no particular reason in principle to object to Clark's monkeying with the common concept of mind, it seems fair to ask whether there's a positive reason for doing it. Monkeying with the concept, after all, sort of disturbs other dependent mental concepts. It creates side effects. It's like running the installer to upgrade a piece of software only to be told that the new version is no longer compatible with some other dependencies. So you can't just grab Qt4, you've got to grab a bunch of other seemingly unrelated stuff too - like a new version of X11 or whatever. What you thought would be a two-minute installation ends up taking the better part of twenty minutes.

And that's definitely going on here. Changing our concept of mind to extend beyond the individual body makes us nervous because it means we might have to "update" some of our other concepts as well: like the concept of individuality, or the internal/external distinction, or perhaps consciousness, etc. The default position would seem to be to leave well alone!

So the relevant question is whether Clark's idea buys us anything.

Not that I can think of. In class there was some talk about some guy Hutchins who consults for airplane manufacturers who believes that it's more useful to think of the cockpit - pilot and all! - as a single system. One assumption made in class was that if he can get manufacturers to pay him for this, it must indeed be a useful concept. But of course that doesn't follow at all. It might just be a catchy sales point; he may, in fact, have nothing particularly special to offer that another consultant couldn't provide. In fact, it's completely counterintuitive to think of a cockpit as a single functioning unit insofar as there are obvious and apparently relevant differences between the machinery and the pilots (pilots don't "break down" or have backup systems, they don't behave in a deterministic fashion, they have intuitions and stored memories of flying history which the instruments don't, etc.). Thinking of pilots as cogs in a system seems like just a way of talking. It's difficult to imagine what this actually buys you in terms of insights.

And so it is with Andy Clark's analogy too, I think. It's not that we can't stretch our concept of mind to accommodate it, it's just that there doesn't seem to be any particular motivation to do so. If we grant (which any Functionalist surely must) that there is no easy physical determinant of the boundary of mind, that this inner/outer distinction is in some sense arbitrary, then shouldn't we just go with the general intuition - leave the language untouched? People across many cultures have managed to agree on this boundary. Further, they all have words to accommodate the concept Clark is pushing: namely "tool." They are all capable of speaking metaphorically of tools as extensions. So there doesn't seem to be any real point or insight to what amounts to advocating changing the definition of "mind." The world's languages are already rich enough to deal with any situation that would seem to require us to bend the default concept. Why, therefore, do it in general?

I can't think of a reason. But I can think that it makes me nervous to override a general and apparently stable intuition (everyone seems to "just know" what is internal to them and what's not) just to make a minor cute point.

So to get back to the main idea: the real benefit to reading Clark seems to be for its use as a nice thought experiment. It's sort of a point in Ned Block's column. He's immune to it, of course, since minds are neurons for him. But the rest of us are compelled to answer it. We have to admit that there's a kind of "leak" here - that people can push us to accept fairly counterintuitive boundaries for minds. And in that sense, Clark has done something very useful. But is it an insight? I'm not convinced.

Real Multiculturalism for a Change

Via Samizdata - Australia's "multicultural spokesman" Andrew Robb told Muslim leaders that they need to do more to publicly condemn terrorism.

Whatever one thinks about the fairness of making Muslim leaders responsible for condemning terrorist acts, this is welcome news for the simple reason that Mr. Robb is the government's multicultural spokesman. This has got to be the first time in history that any multicultural officer actually assigned some blame and/or responsibility of any kind to a minority group. Generally speaking, such a person's duties include: making apologies for minority groups and finding new ways to blame the (designated) majority for their problems, STOP. I'm glad to see that there is at least one person in the world holding this ridiculous office who has some concept of a two-way street in minority-majority relations. Good on him! (Now find me one in America...)

Wednesday, September 27, 2006

The Great Communicator

Preparing for tomorrow's discussion section in the class I TA, I had occasion to read over some of Ronald Reagan's speeches online.

We're talking about analogy and innuendo and how politicians use them (YAAAWWWWNNN), so I went to scout around a bit for source material. I ended up settling on using Rumsfeld's Old Europe statement and Nixon's Great Silent Majority speech for innuendo and Michael Crichton's speech on Environmentalism as Religion for analogy. We'll see how it goes. Tristan had the better idea of having the students bring their own material to discuss for his sections. I would have done that, but I was too lazy to send out the email. Which just goes to show that laziness and diligence are really the same thing with different polarity... (Tristan has what Larry Wall would call the Virtue of Laziness - whereas I suffer from "false laziness.")

In any case, while digging around for all this I happened to read President Reagan's First Inaugural Address, and it reminded me why I think of him as the greatest 20th century American president.

Politics, we're taught, is slimy and duplicitous. You're not to trust what politicians say because they are manipulative and self-serving. And I have seen little in my lifetime to dispute this view. Indeed, one of the main reasons I am a classical liberal (Libertarian) is because I think we could all do with a lot less government going on. What I've never understood is why the very people who claim to be the most cynical about the American system of government are so eager to give that same government ever more control over their lives.

The interesting thing about political theater, I think, is that people do take it seriously, despite all the warnings they constantly give themselves. What politicians say matters, and I can't help but think that a lot of people were listening when Ronald Reagan said, on 20 January 1981 (as the 5-year-old me watched on TV!):

In this present crisis, government is not the solution to our problem; government is the problem. From time to time we’ve been tempted to believe that society has become too complex to be managed by self-rule, that government by an elite group is superior to government for, by, and of the people. But if no one among us is capable of governing himself, then who among us has the capacity to govern someone else?

Straight to the point, and beautifully stated. The Great Communicator indeed.

After the same class today, the professor told me that things have changed a lot since he was a student. He said that when he was in school (in the UK), you "had to be a Socialist or else you weren't cool." Things have indeed changed. Classical Liberalism is back on the table - in discourse at least, if not in actual policy. I can't help but think that Reagan has a lot to do with that.

The present administration likes to think of itself as his heir, but it is nothing of the kind. We haven't seen this level of gratuitous spending since the Great Society. There is absolutely no point to voting Republican anymore, and the Libertarians, my first-choice party, have yet to make a real effort. What can I do but be a bit nostalgic?

In the 80s the clouds parted a bit - at least rhetorically. And there was a time in the early 90s when it looked like Congress was going to continue the tradition. I can't say I'm surprised they slipped back into their old habits - but I will say this: at least we have a precedent in living memory. It's good the old man was around.

Tuesday, September 26, 2006

Where Property Rights Come From

Noah has an interesting post nitpicking a bit with some arguments presented in Timothy Sandefur's CORNERSTONE OF LIBERTY: Property Rights in 21st Century America.

Noah's complaint is that Sandefur falls victim to the naturalistic fallacy in his attempts to justify property rights. The idea is that Sandefur assumes that because private property is universally sought out among humans, it must be a desirable end. But of course, it is insufficient to say that just because something is natural it is also good. (Rape would seem to be a good counterexample - and indeed, given the line of argument here, theft as well.)

I have not read Sandefur's book (and have no immediate plans to do so), so I cannot say for certain what is within. I will say, however, that this is a common Libertarian justification for property rights, and that I do not believe it necessarily falls victim to the naturalistic fallacy. Rather, it merely states that things which are natural should only be tampered with under justification. All other things being equal, it is best not to tamper with nature, and that is because nature is a complete and functional system, one that is complex and therefore subject to side effects/unintended consequences as a result of tampering. Tampering with things that are in our nature always brings with it a cost, whether in terms of effort or in terms of the restriction and frustration of other natural desires, or, indeed, in terms of completely unforeseen side effects. Thus, the burden of proof is rather on the person who wants to subvert nature and not on the person who wants to preserve it.

For example, cities are not (strictly-speaking, I mean - beavers build dams, yes, yes, I KNOW!) "natural." However, they bring with them enormous benefits. So we're willing to put up the effort to construct them. Since we are humans, it is at least intuitive (and probably provable) that gains we measure will be in terms of our natural desires. There is no sense in talking about cosmic gains if these are contrary to our design specifications. And so indeed, any justification for some modification to nature will have to be given in terms of preferential weighting for things that are also natural. In the case of the city, of course, we get commerce (an aggregate increase in our ability to meet our material needs) and common defense against dangers human and animal: gains to our natural desires for material goods and security/longevity.

Now, I myself do not completely buy the naturalistic justification for property rights because I believe property rights are themselves a modification to nature. It may be true that everyone has a desire to retain his property, but it is equally true that theft brings personal gain. If someone goes to the trouble to plant a field and grow crops, and I'm bigger than him and equally intelligent, then of course it is in my (animal) interest to let him do the work and then steal the fruits of his labors for myself. This is a more efficient way to feed myself than doing the work myself (though of course one could argue that by doing the work myself I guarantee success to a degree that would not exist relying solely on someone else). To create property rights, however, we have to forbid this kind of behavior. The idea is that there is an aggregate gain to be had from respecting everyone else's rights. True, in the short term it works out better for me to steal the weaker man's food - but in the long term it doesn't at all (because I am myself not safe from such theft, and because theft tends to lower productivity - who works when he knows someone will just come and take what he made?). There is a general gain to be had by forming a community and respecting rights. And of course, in an idealized sense no one will agree to join a community if he is not guaranteed that his concerns (in this case, his concern that the fruits of his labor belong to him) will be met.

(I suppose in a realistic sense people could be coerced into joining by being made "offers they cannot refuse" or what have you - and indeed history proceeded more or less that way, as we know. It is the idealized sense we're interested in, however, because it is that toward which the system tends as well as that which is ultimately in everyone's personal best interest.)

Now, I should say again that I haven't read Sandefur's book, and that probably Noah is right that his argument falls victim to the naturalistic fallacy. I just wanted to say that this line of argument does not in general tend to that fallacy, seen from the perspective I've outlined here - wherein nature is taken as a default state for the violation of which justifications must, or at least should, be given. I do indeed think political opinions which rely on what is natural and really present in human experience are a priori more likely to be just than those that do not, modulo, of course, what they take to be natural as well as their level of willingness to modify or violate nature in order to attain such gains as might be accomplished by doing so.

Are Connectionist Networks Symbol Systems?

Yesterday in Philosophical Foundations of Cognitive Science the topic was Connectionist Networks. The discussion, rightly I think, focused on whether or not these are a different kind of model for Cognition than the Newell and Simon Physical Symbol System Hypothesis.

The case that they are fundamentally different is superficially compelling. After all, the biggest problem with Connectionism is a lack of semantic transparency. For a complex enough network, it's exceedingly difficult to say how it gets at the answers it provides, or what sort of generalizations it's capturing (if any - indeed, another pitfall is that if the network is large enough it may simply be storing patterns and not generalizing at all!). And the reason this is so is that the calculation is distributed over a mass of nodes with identical designs (though different internal values). One way to look at it is to say that it's a hugely parallel computer with lots of incredibly simple (but still separate!) processors linked in particular patterns. Because the design is uniform throughout, it's not so easy to pinpoint the source of a particular "decision" that affects the final output.

But is it really doing anything different?

I think this is a fascinating question, and I am seriously thinking about writing my final paper in the course on this topic. My answer is that they are indeed (not-so-)glorified Physical Symbol Systems (or, rather, implementations of same), but I'm at a point where I'm willing to be convinced otherwise.

The argument that they are really just implementations of Physical Symbol Systems goes something like this. Connectionist networks are cool because they learn. (Learning is a much bigger problem for systems implemented on the purely symbolic level, since in some sense setting up a symbol system commits you to deciding details that should really be left to the environment.) By slight modifications to their internal specifications over a (large) series of examples, they come to approximate the regularity underlying the examples. Provided one exercises restraint in the number of nodes on the hidden layer, and provided there is, in fact, something to be learned, the network should, given enough exposure, come to capture any generalizations that can be made over the examples. This is indeed a Very Good Thing.
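To make the "slight modifications over a series of examples" idea concrete, here's a toy sketch of my own (not anything from the course): a single threshold unit whose three weights get nudged a little after every example - the perceptron rule in miniature. Fed the input/output pairs for OR, a few passes are enough for it to settle on weights that compute the function.

```scheme
;; A bare-bones threshold unit: fire (1) if the weighted sum of the
;; inputs plus the bias w0 exceeds zero, otherwise stay quiet (0).
(define (activate w0 w1 w2 x1 x2)
  (if (> (+ w0 (* w1 x1) (* w2 x2)) 0) 1 0))

;; One pass over the training pairs: for each example, shift every
;; weight slightly in the direction that shrinks the error.
;; ws = (w0 w1 w2); examples = ((x1 x2 target) ...)
(define (train-epoch ws examples rate)
  (if (null? examples)
      ws
      (let* ((ex (car examples))
             (x1 (car ex)) (x2 (cadr ex)) (t (caddr ex))
             (y (activate (car ws) (cadr ws) (caddr ws) x1 x2))
             (err (- t y)))
        (train-epoch (list (+ (car ws) (* rate err))
                           (+ (cadr ws) (* rate err x1))
                           (+ (caddr ws) (* rate err x2)))
                     (cdr examples)
                     rate))))
```

Nothing here "knows" what OR is; the regularity gets squeezed into three numbers by repeated small corrections - which is both the charm and the opacity of the approach.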

The problem with saying that it's anything different, however, hinges on these words "underlying regularity." The assumption is that the network is learning a function, ultimately - a regular pairing of inputs with outputs. Since we (hopefully) constrain it to keep it from simply storing the inputs and associated outputs (by not giving it enough internal space to do so), it will have to instead store some kind of "observations" (used VERY loosely here) about the inputs that allow it to classify them into categories and make decisions on that basis. Since it is indeed a "function" (pairing of inputs and outputs) that it's storing, and since it is generalizing this function, it would seem to be possible to model whatever it comes up with using a straight-ahead mathematical representation. And indeed, the internal structure of a connectionist network is really nothing other than a series of if...then...else statements from a certain point of view (or, if preferred, logical primitives like and and or). After all, networks are all about thresholds. If a certain threshold is exceeded do this, else do that. It's a cascade of logic gates like any other computer.
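The "cascade of logic gates" point can be shown directly. This is just an illustration of mine: fix the weights and the threshold by hand, and the familiar gates fall right out of a single unit.

```scheme
;; A node is nothing but a weighted sum plus a threshold test.
(define (unit w1 w2 threshold)
  (lambda (x1 x2)
    (if (>= (+ (* w1 x1) (* w2 x2)) threshold) 1 0)))

;; Different weights and thresholds give different gates.
(define and-gate (unit 1 1 2))  ; fires only when both inputs fire
(define or-gate  (unit 1 1 1))  ; fires when at least one input fires

;; Negation just takes a negative weight (second input unused).
(define (not-gate x) ((unit -1 0 0) x 0))
```

Wire enough of these together and you have your cascade - the point being that nothing sub-symbolic need be going on at any single node.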

So the short answer, for me, is that yes, ok, there are some differences. The network is mutable on a (much!) finer level than symbol systems. It has a clear environment-based learning algorithm, which symbol systems don't necessarily. These things are true. But ultimately, what's going on inside would seem to be a symbolic function like any other, and this is just a particular implementation. More to the point, it offers no counterevidence to Newell and Simon's hypothesis that thinking per se is a symbolic process.

I don't, however, think that should dampen cognitive scientists' interest in connectionist networks.

Networks are clearly useful tools. If we assume they are modeling functions (and I see no reason not to), it is still the case that they can be used to model functions that humans can't directly see. I won't link any here, but there are several documented examples of networks figuring out regularities that humans couldn't grasp - because the regularities in question were so complex as to be non-obvious, or not even obviously derivable. So thought of as "regularity detectors" they are very useful things indeed.

I have said before that science has to be more than simply representing - in the sense of re-presenting - a set of data. Science is data compression. It induces models from masses of data that allow us to make predictions. So it would, in any case, be pointless to assert that Connectionist Networks were some kind of new model of cognition that had to be taken as such. Even if this were true, it would be a useless fact if we had to take series of connections as being the final answer on anything; a series of connections and weights is not a suitable answer to the kinds of questions that humans ask! However, I do not believe that that is the alpha and omega of networks. What they are, as I said above, is generalizers - regularity detectors. If a network can learn something, that (presumably) means there is something "out there" in reality to be learned. And so I think interesting research can and should be done (and is being done) in the area of figuring out how to read the functions networks find off of the connections. Obviously the behaviorist approach of noting which inputs produce which outputs is inadequate since that is usually known beforehand. The nice thing about networks is that these are "brains" (rather, very crude computational models of brains) that we can cut open and look inside - down to the squirting juices. Developing systematic ways to figure out what's going on seems a very useful thing to do indeed!

That said, there will be times when the function learned is simply too complicated. And here's the fuzzy boundary that those who think this is a new kind of computation altogether can exploit. It may be (in fact, it probably is the case) that there are functions in the world that cannot be easily represented in human-understandable terms. Though these functions are presumably also symbolic and can be represented as mathematical computations just like any other, there may be so many symbols involved - it may be of such fine complexity - that listing all the rules is pointless for humans because it is over the attention threshold. In this case, there might be some meaningful distinction to be drawn between "symbolic" and "sub-symbolic" functions - though I myself would consider this an abuse of the term (in the sense that "human-understandable" and "of super-human complexity" would be more accurate terms). For such functions, networks are a compact way of instantiating them and nothing more. They do not buy us anything in terms of understanding generalizations if the generalizations are, by their nature, things we can't understand. And it's this that leaves the question of whether connectionist networks really are just implementations of symbol systems open for me - technically.

Scheme Suicide Note?

Friedman gave me a bit of a shock today when he hinted that the new version of Scheme might be dropping set-car! and set-cdr! from the language. For those of you who don't know Scheme (but should!) - these are operations for destructively modifying pairs, the cells that lists are built from. Purely functional languages do not allow this kind of mutation at all. Scheme is only mostly functional in spirit, which means it encourages you not to modify values in place, but gives you the ability to do so if you must.

set-car! and set-cdr! provide this function for lists. Most Scheme code is based on lists. The car of a list is the first item in the list, and the cdr is everything else in the list, as a list of its own. So for the list '(1 2 3 4), 1 is the car and '(2 3 4) is the cdr. These two procedures let you destructively change a list (i.e. rather than recopying the list to get the new one you want, you can just change the old one in place).
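In code, the difference between rebuilding a list and clobbering it in place looks like this (a plain illustration; the sharing behavior at the end is what makes set-car! genuinely different from building new lists):

```scheme
(define xs (list 1 2 3 4))
(car xs)                        ; the first item: 1
(cdr xs)                        ; everything else, as a list: (2 3 4)

;; Non-destructive: build a fresh list; xs itself is untouched.
(define ys (cons 99 (cdr xs)))  ; ys is (99 2 3 4), xs is still (1 2 3 4)

;; Destructive: overwrite the first cell of xs in place.
(set-car! xs 99)                ; xs is now (99 2 3 4)

;; The difference matters when structure is shared:
(define zs (cdr xs))            ; zs and xs share the same cells
(set-car! zs 0)                 ; so this changes xs too: (99 0 3 4)
```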

So why would it bother me that these keywords were coming out? Well, as I've said before more than once, I believe that programming languages should leave the programmer as much control as possible. I don't have any problem with them encouraging any particular style of programming, but I want the ability to do things that are inconsistent with that style if that seems necessary. A programming language should leave the programmer free. I hate languages like Java that make most of the design decisions for you and leave you no way around their choices! Just like in human languages, in programming languages you want a language that lets you say whatever you want to say, not just what it approves of you saying.

So I am sorry to see set-car! and set-cdr! go. However, it's not the shock that I thought it was at first. When Friedman said that today, I got worried that perhaps they weren't going to allow variable clobbering at all, which would be a monumentally stupid design decision. And in fact, I was prepared to give up Scheme and even did a bit of reading up on ML as a precaution. But then I reasoned that it simply couldn't be that they were getting rid of all variable clobbering, so I had a look at the draft for the new language (linked above) and am pleased to say that, although set-car! and set-cdr! are indeed not in the report, set! is still there. Now, set! rebinds variables rather than mutating pairs in place, so it isn't a drop-in replacement - but with set! and closures you can roll your own mutable cells if you really need them.
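To back up that claim, here's a sketch (mine, nothing from the draft report) of mutable cells built out of nothing but closures and set!. The names mpair, mcar, etc. are made up; native pairs and existing list code wouldn't see these, which is part of why removal still isn't free.

```scheme
;; A "pair" as a closure over two clobberable variables a and d.
(define (mpair a d)
  (lambda (msg . args)
    (case msg
      ((car) a)
      ((cdr) d)
      ((set-car!) (set! a (car args)))
      ((set-cdr!) (set! d (car args))))))

;; Thin wrappers so the call sites look familiar.
(define (mcar p) (p 'car))
(define (mcdr p) (p 'cdr))
(define (mset-car! p v) (p 'set-car! v))
(define (mset-cdr! p v) (p 'set-cdr! v))
```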

That doesn't make taking them out OK, though. I don't really see the point. Again, in general programming languages should give the programmer as much power as possible. Granted, there's no particular motivating reason to give anyone set-car! and set-cdr!, necessarily. If you want to discourage programming with side effects, then just giving set! and not making a big deal about it is indeed the best way to go (of course, set! needs to stay because, as I said, it should at least be possible to add side-effects to your program). But it seems to me that since set-car! and set-cdr! are already with us, we might as well keep them around, if for no other reason than not to needlessly break any old code. Taking them out after we've already gotten used to them is a bit too heavy-handed/Javaesque for my taste.

But whatever. As long as they leave me set!, we're more or less good. I'll keep using Scheme.

I'm pleased to note, by the way, that contrary to what I thought, ML does allow variable clobbering. There's a built-in ref construct that creates a mutable reference cell, so if you really need to change a value you can. So that removes my biggest objection to it. Of course, I'm not planning to switch from Scheme anytime soon (I do most of my coding in Scheme or C++. I'm a recovering Python addict and try to stay away from that language because the ease of programming in it makes it difficult for me to then switch back to the harder stuff!). But I thought it worth giving Haskell, ML and Ruby each a closer look.

Of the three, I'm most likely to be impressed by ML, I think. Haskell is interesting because of Monads and call-by-need semantics (Wikipedia article here). These are things I should get more comfortable with (and actually, I have no real grasp of Monads, so this is new territory), and so learning Haskell to practice them (and maybe get addicted to Haskell in the meantime? Who knows?) seems like a good idea. Ruby is just sort of fun. I've dabbled before in the language that dare not speak its name (OK, that's actually Oz. What a mess!), and it's lots of fun. But the insistence on absolutely bloody everything being an object gets really old. So yeah, sure, although I know that Ruby has yield and closures and continuations and all that (all of which are Very Cool Things that more languages should have), the slavish devotion to the Object-Oriented paradigm will probably turn me off of it before too long. But we'll see. There are objective viewers who say it's not just a fad, and I myself have said that I wouldn't exactly be surprised to see it as at least the new Perl, maybe even the Next Big Thing. We'll see. Can't hurt to learn it. But yeah, of these three, ML seems closest to Scheme - so it's an uphill battle for the other two.

But back to Scheme - I'm glad to know that set! is still with us, but I wish they would leave me set-car! and set-cdr! too. All I can really say about this is that we seem to be entering a decade where enforcing the paradigm of one's choice will be the trend in PL design. More's the pity. Of course, this virtually guarantees that C++ will survive...

Monday, September 25, 2006

The Programming Language of the Future

The other day I linked this interesting article in an otherwise typical rant about how Schemers need to make more of an ecumenical appeal. It's by Paul Graham, and it's speculation about the Next Big Thing in programming languages. There's lots of chatter about this on the web. Maybe because programming languages are sprouting like weeds these days (because of the explosion in internet usage, no doubt), people are starting to wonder if C/C++ and Java's days aren't numbered. Naturally there's a lot of curiosity about what could come next.

Graham's article is by far the most sensible commentary I've heard on this. As anyone who programs will know, discussions about the comparative merits of various programming languages are more about religion than substance. People like what they like, end of story. This is one of the rare moments (which sort of makes me wonder whether Graham isn't a C programmer? For some reason, C programmers have always struck me as less "religious" than the others - which is ironic considering I once compared them to Republicans) where we get to step back and take a real look. But that's neither here nor there. I wanted to respond to some of his points.

  1. Programming languages are for people - Now this seems like rather an obvious thing to say, but it's amazing how often it gets forgotten. Programming languages are for people - right. Actually talking to the machine is the compiler's job. If we were comfortable flipping bit switches, we wouldn't have ever needed PLs. But since we do need them, we should remember why we need them. The compiler translates what we say into machine-speak; we need languages to tell the compiler what to do. I.e. - the language should neither be totally geared toward the idea that there is a machine, nor totally geared toward the idea that programs are proofs. But here's where Graham really got my attention:

    And when I say languages have to be designed to suit human weaknesses, I don't mean that languages have to be designed for bad programmers. In fact I think you ought to design for the best programmers, but even the best programmers have limitations.

    Thank you, thank you, THANK YOU! This idea of what to orient programming languages toward - it's like walking on a tightrope. On the one hand, we use machines, but on the other we're people. Those of the school that say that languages should be designed for people really do have an annoying tendency to pander to the lowest common denominator. But that's never the point. Tools (well, good tools, anyway) are not generally designed for any old moron. They're designed for performance - and yes, sometimes that means you have to get a bit of training. Well, no difference here. When we say we're designing for programmers, we don't mean for any old schmuck who wants to write a one-off web application. The bulk of programming is done by professionals, and their software is only useful to anyone else if they can work their magic. This is my major complaint against Java. I don't need a language that holds my hand all day, thanks. I know what I'm doing.

  2. Design for Yourself and Your Friends - this is equivalent to the point made in the quote above, I guess, but he thought it was worth making twice, and so do I. A good operational definition of a good programming language is that it is one that good programmers will want to use. If you're cool enough to be designing one, then you probably qualify as a "good programmer," and lots of your friends probably do too. So make what you like, not what you think other people might want.

  3. Give the Programmer as Much Control as Possible - Again, I stand up and cheer. I don't necessarily mind a language building in some safeguards here and there, but it needs to at least leave me an escape hatch. I touched on this some in my discussion of Scheme and ML. I'm new to ML, so if I got this wrong, apologies - but one of the frustrating things about ML in relation to Scheme is that it doesn't seem to wanna let you out of lexical scope. Bindings are persistent - once a variable is bound, you can't change it. I tend to agree that this is a good idea, but I like that Scheme gives me set! so that I can change variables in a pinch if that's what's required. I also like that it gives me call/cc. Not that I usually need it, but it's nice that it's there just in case. Languages should aim for maximum expressive power. Again, I don't mind if they cater to one programming philosophy over another. Just - at the end of the day, I need to be able to say everything that I could possibly need to say, you know?

  4. Aim for Brevity - here I jump ship a bit. I do indeed think that brevity is underestimated in language design (talking here about brevity of expression in the syntax and keywords, not necessarily in the language specification a la Scheme) - especially by a certain industry whore. But if Perl has taught us nothing else, it's taught us that this can be taken too far. TMTOWTDI may be nice for impressing your friends with how quickly you can type in a working line, but come back after an hour and even you won't be able to read that string of symbols on the screen you call a "program." Brevity is nice, but not at the expense of readability and uniformity. I would think it's more accurate to say that there's a balance here. Ruby seems to have struck that balance nicely - though I still think Python is the readability and learnability king.

  5. Admit What Hacking Is - I have to grudgingly give him this point:

    A lot of people wish that hacking was mathematics, or at least something like a natural science. I think hacking is more like architecture.

    Right. Unfortunately, I'm one of those people who has a tendency to fall into the "programs are proofs" trap of functional programming. I like to think that programming is mathematics, I confess. But I know it's wrong. Programming is like architecture, right. It's about building things. The functional meets the sublime. There's an aspect of it that speaks to "elegance" and such, but it's not fully an art. There's an aspect of it that needs to be practical, but it's also not fully an engineering task. It's both. Architecture, not math. Right. As much as I hate to admit it - that's right.

  6. Are People Really Scared of Prefix Syntax? - This may be the line that really grabbed me about this list - it let me know this was something unusual. I've often wondered about this myself. Prefix syntax, for the uninitiated, is what Scheme and Lisp use. Rather than y = x + 1, Scheme and Lisp say things like (+ x 1). I really like prefix syntax, I'll admit it. It's natural to think of operators as functions - and nice to apply things like addition in the same way that you apply functions. So maybe I'm biased against those who don't like it - but I do often wonder if people have some irrational objection to it.

  7. A Language Has to Be Good for Writing Throwaway Programs - Completely agree. Python is highly addictive for this reason. And Java and C/C++ are off-putting, for the same reason. Language designers need to keep in mind that desktop computers are now ubiquitous and that few people bother having formerly ordinary things like calculators anymore. Lots of "one-off" programming gets done, in other words, and people who do a lot of this appreciate and return to languages that don't make it a chore.

  8. Syntax Is Connected to Semantics - yes, I think so too. And I do much prefer languages whose syntax reflects in some intuitive sense the operation that's going on.
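To put point 4 in concrete terms, here's a throwaway Python comparison (my own toy example): the terse version and the spelled-out version do exactly the same work, and past a certain density the terse one stops being a win:

```python
words = ["carrot", "pea", "kale", "bean"]

# The terse one-liner: short words, sorted.
short_terse = sorted([w for w in words if len(w) < 5])

# The same thing spelled out step by step.
short_words = []
for w in words:
    if len(w) < 5:
        short_words.append(w)
short_words.sort()
```

Here the one-liner is still readable; the Perl failure mode is what happens when you keep compressing past this point.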
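And on point 6 - the thing I like about prefix syntax, treating operators as ordinary functions, isn't unique to Lisp. Python's operator module makes the same idea explicit (a small illustration of my own):

```python
import operator
from functools import reduce

# In Scheme, (+ x 1) applies the function + like any other function.
# Python exposes + as an ordinary function too:
plus = operator.add
y = plus(41, 1)                      # like (+ 41 1)
total = reduce(plus, [1, 2, 3], 0)   # like (+ 1 2 3)
```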

In another section entitled "Ideas Whose Time Has Returned" he lists, among others:

  1. Efficiency - He writes:

    Recently it was starting to seem that computers were finally fast enough. More and more we were starting to hear about byte code, which implies to me at least that we feel we have cycles to spare. But I don't think we will, with server-based software. Someone is going to have to pay for the servers that the software runs on, and the number of users they can support per machine will be the divisor of their capital cost.

    Indeed. I also believe that efficiency will start to become more of a concern again - and also largely for this reason. I often wondered if maybe it wasn't just because I was biased against Java (because the Java people are the ones who repeat this meme the loudest). It's nice to hear someone else say this.

In a section on Pitfalls and Gotchas, he lists:

  1. Object-Oriented Programming -

    I realize this is a controversial one, but I don't think object-oriented programming is such a big deal. I think it is a fine model for certain kinds of applications that need that specific kind of data structure, like window systems, simulations, and cad programs. But I don't see why it ought to be the model for all programming.

    Couldn't have said it better myself, and couldn't agree more. Right - OOP is useful (indispensable, really) for certain kinds of applications. But it isn't the grand paradigm shift people like to think of it as. It's a highly organized naming scheme, nothing more. It allows programs to be organized in a highly readable way, and allows you to violate scope rules in something like a disciplined fashion. These are nice for giant, graphics-based projects. It's completely useless, and even counterproductive, for day-to-day stuff. I agree that the Language of the Future should support OOP, but I completely reject the kind of approach that Ruby has taken, where absolutely everything (including the counter in a loop!) is an object. This is just silly. Objects are nice, but they have their place. Some things are more intuitive the old-fashioned way.

    But this line is pure beauty:

    I think part of the reason people in big companies like object-oriented programming is because it yields a lot of what looks like work. Something that might naturally be represented as, say, a list of integers, can now be represented as a class with all kinds of scaffolding and hustle and bustle.

    HA! Right.
  2. Design by Committee - Yes indeed! All the best languages have identifiable developers. Python, Ruby, Scheme, C++ (though not C). All the crappy ones - Java, Objective-C, OCaml - were designed by teams. Perl stands out here, though. Definitely has a single designer, but I can't stand it. However, I guess this is just personal taste. A lot of otherwise cool people program in Perl and love it. And it did bring us cool regular expressions, I have to admit. But Matz, Stroustrup and Guido are all cooler than Larry Wall, so I'm still mostly right!

For what it's worth...

Halfway House

Sometimes I think I am too hard on the IDS. True, their commentary is often (usually?) infantile and silly. But if they chew on an issue long enough, they do sometimes start to - well, I was going to say "cut through the fog," but maybe "nick at the fog a bit" would be a better way to put it.

Today's staff editorial is a case in point. The subject has to do with IU's current top non-issue: whether or not we should admit more black people. I call this a non-issue because the concern is unwarranted: any honest look at the numbers makes it clear that IU is doing all it can (which is more than it should, really) to get black students here. And the opinion? Well, that's what's interesting.

I wouldn't exactly call it deep or insightful. They repeat, unsurprisingly, the mantra that "diversity" (whatever that means) is a worthy goal in and of itself, and this without any reference to any concrete benefits from it. They buy into the idea that a drop in black students raises questions of devotion to "diversity," as though any cogent definition of "diversity" could center on the representation of a single group. They fail to mention that enrollment for other minority groups is sharply up this year, which would seem to make the case that "diversity" at IU increased rather than decreased (assuming you think diversity is only about race - which, given most of its proponents, is a fair assumption). In short, a typical chump's accolade.

Except for one point - and this is what I was surprised to see:

And, yet, something that doesn't get as much emphasis is this: The burden of cultivating a diverse campus lies just as much with the students as it does on the administration, perhaps more so.

Wow. Not that the authors probably fully appreciate the significance of this remark, mind you, but that's a pretty clear acknowledgement of the idea that numbers don't matter. Which is an elementary and obvious truth about "diversity," when you think about it. I mean, if the purpose of "diversity" is indeed to prepare students for work in the global economy, or what have you, then all that is actually required is some kind of threshold of exposure. It doesn't actually matter that the various "minority" groups are represented to any specific quantity - just so long as there are enough of them to present themselves as a viable part of the community - so that IU's future business leaders can hear some rap slang or eat lo mein or whatever is required to prepare them for the global economy. Really, 13% of IU could be black, just like the general population, but if no one interacted, then it wouldn't matter a whit to "diversity," right? Contrariwise, a highly active 2% black representation might be better exposure than an insular 13%.

But I don't want to make too much out of this. The rest of the editorial was the usual tripe.

Is IU turning into just another white-dominated institution? A Sept. 15 Indiana Daily Student story concerning a lower amount of black student enrollment this year has people asking whether IU is becoming less diverse.

Leaving aside those pesky increases in representation by other minorities (remember the other minorities? They're not white either, FYI) - IU is a white-dominated institution. It doesn't "become" one because 3.4% of the student body is black this year as opposed to 3.8% last year.

First, statistics require precision and care. Becoming overtly alarmed by a one-time enrollment drop is probably an overreaction. As Dean of Students Richard McKaig pointed out in last Wednesday's IDS, a trend of several years must be examined.

This one had me in stitches. Yes, kids, statistics require precision and care. But remembering to look for a long-term trend rather than just examining the current and previous years does not. All that requires is any kind of knowledge of statistics at all - intro-level stuff, really.

And so on and so on. It really is like Beavis and Butthead where the lightbulb kinda fizzles and pops on rather than snapping on with a *DING*! But at least today I got some kind of indication that there's a mild current in the wire...

Sunday, September 24, 2006

Compiling to Reality

I haven't blogged much this week about Philosophical Foundations. Partly this is a function of outside time commitments, but it's also got something to do with a general lack of interest in some of the points being debated in class this week.

This week's readings had mostly to do with Functionalism and responses to it. On the whole, I find Functionalism convincing, and the responses to it not so much - which is probably why it's hard for me to get very worked up about the in-class debates.

But let me say a couple of things about the one on Monday. Near the end of class, things kind of degenerated because one person basically refused to be convinced that there is any need to study things at a level that abstracts away from the physical.

For what it's worth, I understand some of his concerns. There is something kind of cheesy about Functionalism. One does get the feeling that Functionalists are refusing to fill in some important blanks. And indeed, this has mostly to do with the fact that they never seem to do much more than identify the necessity of some kind of "functional" level of explanation. So, in these terms, something like "belief" isn't readily explicable in terms of quarks. It would seem to be the kind of thing that can be instantiated in many possible ways. We don't expect everyone's brain to contain exactly the same neurons to exactly the same dimensions with exactly the same patterns of connections, and yet we can speak meaningfully about all these different brains having "beliefs." Whatever the common denominator is, it doesn't seem to be (entirely) physical; a better way to explain it might be in terms of what functions are performed. In other words, "belief" is a kind of program that can run on many different hardware configurations. And for the functionalists, these different hardware systems don't even have to be subtly different. They can be radically different: provided they all implement the same algorithm, we're good to talk about "belief."
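The multiple-realizability idea translates directly into code, actually. A quick Python sketch (the names and classes are mine, purely illustrative): one "functional" role, two completely different underlying realizations, and a caller that only exercises the shared behavior can't tell them apart:

```python
# Two "substrates" that realize the same function. A caller using only
# believe/believes cannot distinguish them.
class DictBrain:
    def __init__(self): self._beliefs = {}
    def believe(self, p): self._beliefs[p] = True
    def believes(self, p): return self._beliefs.get(p, False)

class ListBrain:
    def __init__(self): self._beliefs = []
    def believe(self, p): self._beliefs.append(p)
    def believes(self, p): return p in self._beliefs

for brain in (DictBrain(), ListBrain()):
    brain.believe("it will rain")
    assert brain.believes("it will rain")       # same function...
    assert not brain.believes("it will snow")   # ...different hardware
```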

Well fine, but what's said to be left out here is anything beyond this observation. Having decided that brain state algorithms are multiply realizable is all well and good, but it's still not clear what, exactly, it is that accounts for the similarities across individuals. Do we really want to commit to a view of the world that says that what is important about things like intelligence and so on are ghostly algorithmic laws that are, one presumes, irreducible and cannot be subjected to scientific experiments? What force is it that drives these laws? And, perhaps more importantly, how do we identify universals in these laws? How do we study them? Does it do us any scientific good to adopt this worldview?

As for the question of "what" the laws are and "what drives them," I don't really personally consider that to be very interesting. I am simply noting it here to demonstrate a certain amount of sympathy with people who don't buy into Functionalism. I don't honestly see why this should be any more of a puzzle than questions of why the physical laws of the universe are the way they are. No one really knows why it is the case, for example, that motion is transferred from one object to the next. We can spin theories about objects composed of atoms and these atoms emitting force fields that move the atoms in the other objects and so on, but at some level the answer still comes down to postulates. At some level the universe simply is constructed in such-and-such a way, and there is nothing more to be said about it. From a purely historical view, it would seem unlikely that the nascent Science of Mind has actually hit this kind of level - but I see no reason in principle to rule it out. It may be that nothing more can be said about functions as such except that the universe happens to be so constructed that, e.g., the Lambda Calculus accurately describes computation, or whatever.

As for the rest of it, I am a bit frustrated that the Functionalists seem satisfied saying that there is a functional level but don't want to probe much deeper as to what the nature of the functional level is, or if this knowledge can ever be put to good (maybe psychological) use, or what have you. Nevertheless, I do think the basic idea is sound.

Now, the debate in class had to do with whether the Functionalists ultimately want their theories to be physically realizable or not. Of course 98% of Functionalists are probably also some kind of materialist or another - and they do indeed believe that the physical component is indispensable for an object's existence in reality. We can talk all day about multiple realizations of chairs, for example, but there still has to be a physical chair present in order for anyone to sit in it, or see it, or clean it, etc. The concern from the student in question, though, was whether Functionalists were giving up too soon. At some level, if everything has to be physical, then shouldn't we just keep plugging ahead with science until we are indeed able to describe everything in terms of quarks?

And this is where I don't really have much to say in response. It seems completely obvious to me that we will never be able to describe everything in terms of quarks, even though quite probably everything in the universe ultimately is realized in terms of them (or whatever the new sub-sub-atomic particle of the week happens to be).

This can be seen from a number of perspectives:

  • Informational - there is simply no way to represent such knowledge to ourselves. If we were to attempt to study, say, someone's brain on the molecular level and thus give a completely scientifically-deterministic account of that person's future actions and so on, we would need to specify a number of variables potentially greater than the number of extant particles in the universe. Not, of course, to list what's going on in the person's brain (for that we would only need the particles that actually are in the person's brain ... though of course delimiting the "brain" on the level of quarks is certainly an unsolvable problem, I should think. This quark is part of the "skull," this one the "brain," this one the fluid near the skull, etc.??? Can't be done.) - but to list which arrangements of quarks correspond to which functions, and also to know how these different states affect each other in regular ways, etc.

  • Explanatory - even assuming we were capable of somehow storing all this information, it still wouldn't qualify as "science," I don't think. Science as I understand it is a process of generalization. Simply presenting the brain as is doesn't count. If the "explanation" is ultimately nothing more than noting the position of every quark, well then we've already got our explanation. We can simply point to the already-existing brain and be done. Naturally this isn't a very satisfying conclusion, though, because it's clear that it doesn't explain anything to just point to things as they are. We're interested in capturing general laws, regularities, etc. It's not clear what predictive power some kind of massive matrix representation of all the quarks that make up a person's brain would have. But it is clear that understanding things on a functional level has predictive power (at least as regards the mind) - and that it is considerably more efficient than trying to store the entire quark matrix somewhere and perform operations on it.

  • Feasibility - The idea of explaining everything at the physical level doesn't seem intuitively feasible anyway. Partly this is for the reason above - that it isn't always clear, say, which quarks are involved in "brain" and which are in "skull" and which are in "love" and which are in "cat" and so on. In fact, it would probably never be clear. Most of these quarks would be simultaneously participating in many such operations. Which sort of makes one wonder whether it would ever be feasible to write out all the "rules." Would we need rules that specified situations when the surrounding quarks were involved in "cat" or "breathe" while we're trying to determine whether a person is in love? Or maybe the person loves their cat. Etc. etc. Of course there's no way to know until we are at that level whether this problem can be dealt with (or if it even is a problem) - but until we get there, the idea of having to keep track not only of where all the quarks were and what they were doing, but whether they weren't also involved in other things and what that meant for what we're trying to determine? The data explosion is immense.

I like to think of it as compiling, in a sense. We program according to general principles. No one (or, rather, very very few people) programs at the level of bits. There was a time when people did literally flip switches and populate registers and call it a program, but with the levels of computation we expect from our machines today, it simply isn't possible. So what we do instead is have a program called a compiler that automates the process of transforming our higher-level principled programming languages into machine instructions, and those instructions tell the machine what to do with the bits. Opening up a computer and trying to fathom what it was doing by noting, at each processor cycle, which bits were set and which were off, would be a herculean task of epic proportions. It's not clear that it would even be possible. Hence the need for the compiler. Brain functions could easily be analogous. No doubt there are some important differences, but in some sense the functions that make up the thought process are compiled into thought primitives which are compiled into neurons which are compiled into chemicals which are compiled into atoms into particles into quarks STOP. Naturally we're stretching the term "compile" here (because there is no actual process - the instantiation is concurrent with the program - the same thing, ultimately) - but it's a better way of understanding it, I think, than trying to know the brain by looking at the quarks. We assume that the brain is ultimately realized in terms of quarks, but it cannot be understood on that level.
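Since I'm leaning on the compiling analogy anyway, here's a toy Python sketch of it (entirely illustrative, my own construction): a high-level expression tree gets translated into flat stack-machine instructions, and even at this tiny scale the instruction stream is harder to fathom than the tree it came from:

```python
def compile_expr(e):
    """e is a number or a tuple ("+" | "*", left, right); emit stack code."""
    if isinstance(e, (int, float)):
        return [("PUSH", e)]
    op, left, right = e
    return compile_expr(left) + compile_expr(right) + [(op,)]

def run(code):
    """Interpret the flat instruction stream on a stack 'machine'."""
    stack = []
    for instr in code:
        if instr[0] == "PUSH":
            stack.append(instr[1])
        elif instr[0] == "+":
            b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif instr[0] == "*":
            b, a = stack.pop(), stack.pop(); stack.append(a * b)
    return stack[0]

# (1 + 2) * 4 at the "principled" level...
code = compile_expr(("*", ("+", 1, 2), 4))
# ...becomes [("PUSH", 1), ("PUSH", 2), ("+",), ("PUSH", 4), ("*",)]
result = run(code)
```

Reading the meaning back off the instruction list is already work; scale that gap up a few million times and you have the quarks-versus-functions problem.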

And that's really all I would have to say about that. Probably I'm missing endless amounts of subtlety, but it's one of those questions that seems ultimately useless. We are, in any case, not in a position to study the brain as a function of quarks, nor does it seem likely that we will ever be. And so I can't really get that excited about this debate.

Disciples of All Nations

One recurring theme on this blog is the need for Schemers to get out and preach a bit more about the virtues of their language.

We got another good example of this not happening on Thursday. Friedman followed up on Gilead's lecture on scope and took it a bit further - over my head in fact. He explained something about how dynamic scoping rules applied to a rather obscure version of the factorial function we were given in an assignment - reproduced here:

(define fact-5
  (((lambda (!)
      (lambda (n)
        ((! !) n)))
    (lambda (!)
      (lambda (n)
        (if (zero? n)
            1
            (* n ((! !) (sub1 n)))))))
   5))

And that actually does return 120 if you put it in your Scheme interpreter, which to me is nothing short of a miracle. But that's because I don't yet fully understand the Y combinator, on which this is based.

So what's the Y combinator? Well, you could look it up at the Wikipedia article linked above - but that requires a bit of head-work. A better tutorial (in Scheme!) is here - which shows you how we arrive at it. The overall point of it, though, is to allow for recursive functions in the Untyped Lambda Calculus. The reason this is cool (and magic, actually) is because the Lambda Calculus - since it doesn't have any named functions - doesn't have an obvious way to do recursion. The Y-Combinator is the necessary magic. It's an expression in the Lambda Calculus that makes a non-recursive function behave as though it were recursive. So naturally it's absolutely essential to proving that the Lambda Calculus is complete - that is, that it can do all the computations that a Turing Machine can do. More to the point, it also proves that we don't really need dynamic scope to get recursion.
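If you want to play with it in an eager language, here's the applicative-order variant (usually called the Z combinator) transcribed into Python - a sketch of mine, not from the tutorial. The eta-expanded lambda v: x(x)(v) is what keeps the self-application from looping forever under call-by-value:

```python
# The applicative-order Y ("Z") combinator: Z(f) behaves like the fixed
# point of f, so f can be written without referring to itself by name.
Z = (lambda f:
        (lambda x: f(lambda v: x(x)(v)))
        (lambda x: f(lambda v: x(x)(v))))

# Factorial, defined with no recursive reference to "fact" anywhere:
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```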

I'll let anyone who's interested read the tutorial. It's one of those concepts that starts to make sense slowly. You can't really understand it just by working everything out on your own. It's best to find someone or something to explain it to you conceptually. And then just let it sink in. The first time you go through it you'll probably feel like you "mostly" got it, but might not have confidence in being able to apply it. As you think about it over the course of a couple of days, it starts to make more concrete sense. Eventually you get to the point I'm at now, which is where you have an intuitive grasp of why it's absolutely essential, but you still wouldn't have a whole lot of confidence deriving anything with it in a formal proof.

Well, Friedman seemed to think that scope had something to do with this - which, of course, it does. Because what we're trying to do is define recursion without dynamic scope, and that would seem to be impossible (because we're not allowed to clobber variables, etc.). I'm not going to go into any detail about this until I've fully got my head around it (which should be soon, but not today) - and it isn't really the point of this anyway.

The point is that Friedman made an offhand comment when he came in the room about Object Oriented Programming, which he hates. He first asked the class if anyone thought dynamic scope was a good idea. And no one raised their hands, so he was satisfied that Gilead had done a good job spreading the Gospel. Then he said something like "Good, because dynamic scope is bad. It's almost as bad as Object Oriented Programming." Which he naturally did to get a rise - since that's an obvious stab at Java - the "language of the future." A lot of people chuckled nervously. And what else could they do? On the one hand, Friedman has just told them that Java (which most of them probably program in daily) is garbage, and they think they know for a fact that it's not. Java is what made programming clear and easy to them, after all - what enabled them to quickly get applications running without spending hours debugging, like they used to hafta do in C/C++. So this statement would seem to contradict everyday reality. But on the other hand, he's Friedman, and he's smarter than them, and they're simply too intimidated to challenge it. So you just give a nervous laugh instead and hope the barking dog goes away and finds something else to play with.

I meant to say something about this, but I didn't remember it until just now. I'm reading this paper on the Y Combinator (what else?), and it has this line:

fun Y f x = f (Y f) x

This definition is frequently given to students on undergraduate programming courses, and some even deal with the lambda-calculus version. There does not, however, appear to be any evidence that this is widely known to computing professionals.

And here, again, we have a nice demonstration of The Problem.

Friedman hates Object-Oriented Programming because he thinks it's an undisciplined way to violate lexical scoping rules. You store variables in an object, pass the object to a function, and suddenly scope goes out the window because, of course, the variables in the object really belong in the global namespace and not in this function. But of course, global variables are visible from within functions, right? Yes, true, but the problem here is that they can be modified through the object - and there is no reason at all why more than one function call couldn't have been made passing the same object as argument at the same point in the code. So you do indeed quickly lose track of which variables can be modified from where.
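Here's a quick Python sketch of the complaint (the names are mine, purely illustrative): two functions share one object, and a mutation made through it at one call site silently changes what the other computes:

```python
# A shared object acts as a side channel around lexical scope.
class Config:
    def __init__(self):
        self.retries = 3

def tighten(cfg):
    cfg.retries = 0          # "helpfully" mutates the caller's object

def attempts(cfg):
    return cfg.retries + 1   # silently depends on whoever mutated cfg last

cfg = Config()
before = attempts(cfg)   # 4
tighten(cfg)             # looks like a harmless call...
after = attempts(cfg)    # ...but now the answer is 1
```

Neither function touched the other's variables by name, yet each can clobber state the other relies on - exactly the loss of tracking Friedman is complaining about.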

The counterargument, of course, is that OOP buys you something by keeping all these variables separate. That is, it's a complicated naming scheme. It captures some generalities about what you're doing in the code by allowing you to call similar variables by similar names without succumbing to your human tendency to mix and match them by accident. And of course Java builds all kinds of things on top of the code you write yourself to cut down on the potential for you to screw up and clobber something you didn't mean to. So in a practical sense, I don't buy Friedman's argument. Yes, he's right that OOP allows you to violate scope rules willy-nilly, but OOP (well, Java, anyway) also has a lot of stuff added in to keep you from hurting yourself. And whether Friedman likes it or not, the naming convention that it introduces (it doesn't encourage you to think of this as a naming convention - it wants you to think that something magic is going on where "associated data and operations are stored together," but that's bollocks and all it actually is is an enforced naming convention) is quite useful - for people. So in a practical sense, Friedman's wrong.

But in principle he's right - and the quoted line illustrates the problem nicely. The problem is that the Y Combinator is something that students just learn for the test and promptly forget once they're done with it. They go off to industry and Java does all their work for them, and this is bad for everyone because (a) they get paid less than they potentially could be worth with proper training and (b) software gets developed more slowly - because no one understands how to derive new concepts from old concepts, they just know how to throw old concepts at new problems, and this is because they have no abstract-level understanding of what they're doing when they're programming. (And in reality, not only does software get developed more slowly, but the software that does get developed in Java is also crappy slow.)

I read an interesting post that predicts that industry will eventually have to switch to purely functional programming languages. Of course it's meant to be partly tongue-in-cheek (check the footnote at the bottom if it doesn't seem so) - but there's a grain of seriousness to it too. Functional programming languages are far more efficient at the things that matter - transforming programs on a human-understandable level, programming at the level of stable concepts, deriving (as opposed to discovering) more efficient ways to do things, etc. True, they're not as implementation-efficient (C/C++ is still the king here - aptly described as portable assembly language.), but they more than make up for that in being human-efficient and programming-efficient (and anyway they are usually easy to compile, so the machine can make up for most of the lost efficiency in the end).

I do believe there is a niche for this stuff, and believe, in fact, that it is entirely possible that functional programming could take over. It doesn't seem very likely, though, and that's because, as Friedman repeatedly says, the message isn't getting out.

Well, that's the part I don't understand. Why isn't the message getting out? What's necessary to get it out? How do functional programmers get a foothold in industry?

I think one thing that will have to change is that the Object Oriented fad will need to die out. Friedman's right that you don't really need it. A properly-trained programmer who knows about things like the Y Combinator also knows how to deal with scope barriers in principled ways without having to resort to cheesy glorified naming conventions pretending to be programming evolution. But it seems to be taking industry a long time to figure that out.

I think in a lot of ways OOP is the Optimality Theory of programming. It organizes programs on a level that's actually probably higher than programming. It sort of bypasses everything that actually matters and states obvious truths as though they were profound insights. It organizes but doesn't explain, etc. But that's a rant for another day.

It's interesting to me that lots of people seem to think we're overdue for a language change. C/C++ and Java - the giants of the 90s (and 80s, in C's case) - seem to have overstayed their welcomes. People are well ready to get rid of them. And so we're at a point where a functional language - like Haskell or Scheme - could start to take over. I'm not holding my breath (I largely buy the case for Ruby being the Next Big Thing, though somehow I think something will trip it up at the last minute. Ruby will definitely be the new Perl, though - since Perl is (thankfully) on its way out.), but it would be really nice if it happened.

It's just that - to beat the dead horse once again - I really don't think it's going to happen unless Friedman et al change their teaching style. They're going to have to start demonstrating that Scheme is useful beyond the classroom - and so far they haven't provided any evidence. Rather than having us reinvent the wheel (by, e.g., re-deriving addition, subtraction and multiplication for Church Numerals or trying to derive the Y Combinator on our own (!!!)) every week, a better idea might be to just outright show us the cool stuff and spend more time having us apply it to, say, transforming Scheme programs into executable C code. That would demonstrate that Scheme is a skill you can take to work with you, not just something cool to play with as a grad student.
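(For anyone who hasn't suffered through the Church Numeral exercise, here's roughly what it amounts to, sketched in Python rather than Scheme - a number n is encoded as "apply f n times", and arithmetic falls out of the encoding:)

```python
# Church numerals: the number n is the function that applies f n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

# Addition: apply f m times after applying it n times (m + n total).
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
# Multiplication: apply "n applications of f", m times (m * n total).
mul = lambda m: lambda n: lambda f: m(n(f))

def to_int(n):
    """Decode a Church numeral by counting applications."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)

print(to_int(add(two)(three)))  # -> 5
print(to_int(mul(two)(three)))  # -> 6
```

Deriving these once is genuinely illuminating; re-deriving them every week is the wheel-reinvention I'm complaining about.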

(Linked from the above post about Ruby - here's a very sensible look at what features the programming language of the future will have. It doesn't translate to automatic success for Haskell (or Scheme!!!), but it does show that they're at least contenders, something no one would have believed even 3 years ago.)