Monday, June 11, 2007

Some Thoughts on Cross-Linguistic Reading Comprehension

Something I heard three years ago has come up again in conversation recently - the idea that Japanese speakers read faster than English speakers on account of the writing system.

Crash course in Japanese writing (for those who don't already know): Japanese is really three writing systems smashed into one. Two of them - hiragana and katakana - are phonetic, based on syllables, and one - kanji - is logographic, a system of characters borrowed from Chinese. The phonetic systems are used for inflection, words you don't know the characters for, and loanwords or foreign words (sometimes also for emphasis, in a manner similar to italics in English). All three systems are used in mixed fashion within sentences. The professor who made this claim - that Japanese read faster - based it not on the phonetic systems, but rather on the idea that characters are easier for people to recognize than strings of alphabetic letters.

This is one of those weird things where I have a hunch the overall conclusion is right, but I'm nevertheless skeptical about how they got there and what it all means. Japanese researchers (sorry, but it's true) in general have a bad habit of publishing lazy research that "proves" that some aspect or another of Japanese culture or language is generally superior to the rest of the world - so just on general prejudice I wouldn't be at all surprised to find problems in these results. But even if the results are right (and as I said, I have a hunch they are), I can think of any number of confounds off the top of my head that might go further toward explaining the results than rashly concluding that characters are "easier to read" than strings of letters. To name a few:


  1. Practice Makes Perfect - Since the Japanese writing system requires a lot more effort to learn than a straight alphabetic system, I would guess that the Japanese in general spend more time practicing it during the critical period. This practice, rather than the design of the system itself, may account for the greater rapidity with which they apparently read.

  2. Assessment Issues - I would be really interested to know what counts as "comprehension" in this task. It matters because it seems plausible to me that it would be easier to "skim" in Japanese, but that English readers might pay more attention to content detail in general than Japanese readers. This is because the Japanese writing system divorces sound and meaning to a great degree - and so readers are freer to pay selective attention to the islands of "meaning" (the first character of each compound) in the text and skip all the "irrelevant" hiragana grammatical information. Since characters provide "skimming" anchors, and since the skimming task itself is uncomplicated by the need to pay parallel attention to phonetic form in Japanese, it may be that Japanese readers have a marked advantage at skimming simple texts but are handicapped at reading more complex texts that require greater attention to detail. In any case, we would need to know how comprehension was assessed in the relevant experiment.

  3. Methodology Problems - Of course, anyone with scientific training is going to want to know how they controlled for effective literacy level across native speakers of the two languages. This is nontrivial - since what counts as "literacy" in Japanese is a much more slippery concept than what counts as "literacy" in an alphabetic system.



Anyway, a friend and I started discussing this in some detail (about Chinese, which I am currently learning, rather than Japanese), which eventually led me to claim that Chinese characters can be thought of as "boxed letters," and her to counter that even so, Chinese people do not read them by "unpacking" them, but rather by looking "at the whole."

Well, sure, I overstated my case. Chinese readers are indeed trained to see characters as individual units, and they learn them by copying them hundreds of times. Notwithstanding, I'm skeptical of claims that things can be identified "as a whole."

Surely the process of identifying a Chinese character is largely analogous to that of identifying an alphabetic word, no? That is, you're presented with a jumble of details, and you focus on details in turn, using them to eliminate competing candidates, until you reach a point where you have eliminated all candidates (and thus failed to recognize the character/word), or else you've narrowed it down to one (at which point you accept it and move on). And in this sense I think recognizing a Chinese character can't be too terribly different from recognizing an English word - since, after all, characters have recognizable component parts that recur across characters; Chinese has a system of graphemes too!
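
To make that elimination picture concrete, here's a toy Python sketch - the candidate words and "features" are entirely hypothetical, and obviously real readers don't consult a tidy dictionary like this. The idea is just: observe component parts one at a time and discard any candidate that lacks them.

    # Toy recognition-by-elimination over a made-up candidate set. Treating a
    # word as an unordered bag of component parts is a simplification, but it
    # shows the narrowing-down process described above.
    CANDIDATES = {
        "barn": {"b", "a", "r", "n"},
        "born": {"b", "o", "r", "n"},
        "burn": {"b", "u", "r", "n"},
    }

    def recognize(observed):
        remaining = set(CANDIDATES)
        for part in observed:
            remaining = {c for c in remaining if part in CANDIDATES[c]}
            if len(remaining) <= 1:
                break  # one left = recognized; zero left = recognition failed
        return remaining

    print(recognize(["b", "r", "n"]))       # still ambiguous: all three remain
    print(recognize(["b", "r", "n", "o"]))  # narrowed down to {'born'}

The point of the sketch is just that each observed part prunes the field of candidates - and that much is the same whether the parts are letters or the components of a character.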

The question is really whether encouraging readers to look at component parts in bundles makes the identification task easier than encouraging readers to look at component parts in linear order. A related question is whether divorcing the phonological component (for the purposes of recognition, mind you) also frees up some processing time.

Intuitively, it seems highly plausible that the Chinese system does, in fact, facilitate rapid recognition. Since a Chinese reader is presented with "all information at once," he may be freer to focus on the salient parts - the ones that really matter for recognition tasks. An alphabetic reader, by contrast, may be forced to spend undue time focusing on letters that are not really helpful in the particular instance (e.g. in "stone" vs. "stole," the distinguishing letter comes rather late in the process). Chinese readers may furthermore be in a better position to take advantage of sub-graphemic information - such as individual character strokes - than readers of alphabetic systems, since the alphabetic system imposes more of a requirement that processing be done as a series of sub-tasks involving letter recognition. And in this sense, there is indeed something to what she says about Chinese readers "looking at the whole."

I would stress, though, that component parts of some kind are still involved in the recognition task - and so today I went trolling for information to back this up. I ran across this interesting article - called "Effects of minimal legible size characters on Chinese word recognition."

The authors have developed an interesting concept called "minimal size" for Chinese characters, which basically states that there is a certain minimal size for each character below which recognition time will degrade. Interestingly - though perhaps not surprisingly - this size differs across characters. Naturally, there is a frequency effect - more frequent characters are easier to recognize across the board. But controlling for frequency, there is apparently also a "complexity" effect, whereby the "minimal size" of characters that involve more strokes is greater than the "minimal size" of characters involving fewer strokes. (The authors then suggest that short texts in Chinese where writing aesthetics are not important - such as warning signs and instruction manuals - should take advantage of this by printing the characters in different sizes relative to their "minimal size." Apparently, testing of reading speed on such texts yields encouraging results!) Somewhere in the paper they cite an apparently widely-accepted result that readers need on average 4.6 msec for each stroke in a character.
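
Just to get a feel for what that per-stroke figure would mean in practice, here's a back-of-the-envelope sketch. The 4.6 msec/stroke number is the one the paper cites; the baseline constant is a placeholder I made up, not anything from the study.

    # Rough sketch: if each stroke costs ~4.6 msec on average, stroke count
    # alone predicts a noticeable spread in recognition times.
    MS_PER_STROKE = 4.6      # figure cited in the paper
    BASELINE_MS = 100.0      # hypothetical fixed cost, not from the study

    def rough_recognition_time(strokes):
        return round(BASELINE_MS + MS_PER_STROKE * strokes, 1)

    for strokes in (2, 8, 20):
        print(strokes, "strokes ->", rough_recognition_time(strokes), "msec")
    # 2 -> 109.2, 8 -> 136.8, 20 -> 192.0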

I found all this rather interesting - because it supports my assertion that Chinese readers do indeed "unpack" their characters in some sense when they are reading them. (Emphasis here on "supports." It doesn't PROVE it because it's still sort of mysterious what exact process Chinese readers use for character recognition. This might simply be a correlation or third-variable issue.)

Another interesting article I found was this one - called "The Science of Word Recognition." It's a very good general overview of some of the research into how alphabetic readers perform word recognition tasks. For some time, apparently, researchers were led down a garden path by information that seemed to support a "word-shape" hypothesis - namely that readers pay attention to the overall shape of the word more than the individual letters involved. The author then goes through all the evidence for this (which is superficially quite convincing) and demonstrates that more convincing alternative explanations are available for each point. This apparently led to some support for a linear activation model - whereby people really do scan a word left-to-right and build hypotheses as they encounter new letters - much the way a spell-checker works. Several lines of research have also laid this assumption to rest, however, and the general consensus now is that people see all the letters in parallel in some sense, though there is still a preference for letters just to the right of the point of focus, which is apparently somewhere just to the left of the middle of the word.
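
The contrast between the "spell-checker" picture and the parallel picture is easy to miss in prose, so here's a toy sketch of both over a made-up mini-lexicon. The word list and the scoring are mine, not the article's - just an illustration of the two shapes of process.

    LEXICON = ["stone", "stole", "store", "stove", "atone"]  # hypothetical

    def linear_narrowing(word):
        """Spell-checker picture: scan left to right, pruning by prefix."""
        candidates = LEXICON
        for i, letter in enumerate(word):
            candidates = [w for w in candidates if i < len(w) and w[i] == letter]
            print(f"after '{letter}': {candidates}")
        return candidates

    def parallel_match(word):
        """Parallel picture: score every word on all letter positions at once."""
        return max(LEXICON, key=lambda w: sum(a == b for a, b in zip(w, word)))

    linear_narrowing("stole")       # still four candidates after 's', 't', 'o'
    print(parallel_match("stole"))  # 'stole'

Notice how the linear picture wastes its first three steps on letters shared by nearly every candidate - the same complaint I made about "stone" vs. "stole" above.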

Distilling all this down - it seems like alphabetic reading (at least in English) IS linear in some sense - but maybe not to the extent one might expect.

In other words - it's a purely empirical question which of character-based and phonetic systems facilitates recognition more. I, for one, would be very interested in knowing the answer - though as I said, I suspect that it's true that characters are easier to recognize than strings of letters.

(My friend mentioned something about Korean - which really DOES "box" its letters into characters. It would be especially interesting to compare Korean reading rates to those of English and Chinese. I suspect Korean wins.)

There isn't really an overall point here - just that I find such information-theoretic linguistic questions fascinating. Presumably what is going on with "minimal size effects" in Chinese has an explanation in straightforward, Shannon-esque terms. The more "information" in a character, the harder it is to identify - because identifying it amounts to playing a subconscious game of "20 questions" with it. What I'm curious about is whether the well-known Zipf frequency effects come into play here: are more frequent characters also less likely to have large numbers of strokes? I mean, trivially in any corpus study we would find that they are - just because of the particle "le" (which has only two strokes and is hugely common) and because family names tend to have complex characters. But controlling for those two? I wonder... There's a good case that the correlation might not hold - because for independent reasons I would imagine that a logographic system is more resistant to change than a purely phonetic one, leading to the retention of complex characters even in uses whose frequency increased over time, beyond what one would expect in alphabetic systems.
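
Here's roughly what I mean by the Shannon framing, as a tiny sketch - all the frequencies, labels, and stroke counts below are invented placeholders, not corpus data.

    import math

    # The "20-questions" cost of a character can be cashed out as its
    # surprisal, -log2(frequency). The Zipf question is whether surprisal and
    # stroke count line up once particles and family names are set aside.
    chars = {
        # label: (relative frequency, stroke count) -- invented numbers
        "common particle": (0.05, 2),
        "mid-frequency word": (0.01, 7),
        "rare family name": (0.001, 15),
    }

    for label, (freq, strokes) in chars.items():
        print(f"{label}: {strokes} strokes, surprisal {-math.log2(freq):.1f} bits")
    # common particle: 2 strokes, surprisal 4.3 bits
    # rare family name: 15 strokes, surprisal 10.0 bits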

But no answers - the studies either have yet to be done, or else (more likely) I'm simply unaware of them. In any case, while I suspect that Chinese characters ARE easier to identify than strings of alphabetic letters, I'm not convinced that we really know why, or that our hunches about why are necessarily correct. It's an interesting field for future study.

Another interesting question for me is how English word-recognition times fare with respect to more regular languages that have longer words on average than English - like Dutch and Finnish. I suspect that in spite of the longer sequences, Dutch and Finnish readers are faster because the more reliable phonetic information aids them. English - where the disconnect between spelling and sound is messy but unfortunately not total (as it is in Chinese, where it could arguably be an advantage) - is probably unnecessarily confusing.
