Compensation for kanji-input basic technology subject of lawsuit

A Japanese man who says he invented the technology behind the context-based conversion of a sentence written solely in kana into one in both kanji and kana, as well as another related technology, filed suit against Toshiba on December 7, seeking some US$2.3 million in compensation from his former employer.

Shinya Amano, a professor at Shonan Institute of Technology, said in a written complaint that although the firm received patents for the technologies in conjunction with him and three others and paid him tens of thousands of yen annually in remuneration, he actually developed the technologies alone.

Amano is claiming 10 percent of an estimated ¥2.6 billion in profit Toshiba made in 1996 and 1997 — much higher than the roughly ¥230,000 he was actually awarded for the work over the two-year span.

His claim is limited to those two years, apparently in consideration of the statute of limitations and the terms of the patents.

“This is not about the sum of the money — I filed the suit for my honor,” Amano said in a press conference after bringing the case to the Tokyo District Court.

“Japan is a technology-oriented country, but engineers are treated too lightly here,” he said.

Toshiba said through its public relations office that it believes it paid Amano fair compensation in line with company policy. The company declined to comment on the lawsuit before receiving the complaint in writing.

Amano claims that he invented the technology that converts a sentence composed of kana alone into a sentence composed of both kanji and kana by assessing its context, and another technology needed to prioritize kanji previously used in such conversions.

The two technologies, developed in 1977 and 1978 using theories of artificial intelligence, are still used today in most Japanese word-processing software, he said.
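For readers curious how such an input method works in outline, here is a minimal sketch in Python. The dictionary, context hints, and scoring below are all made up for illustration; this is not Amano's or Toshiba's actual method, just the two ideas from the article: choosing among kanji candidates by looking at nearby context, and giving priority to kanji the user has picked before.

    from collections import defaultdict

    # Hypothetical candidate dictionary: kana reading -> possible kanji spellings.
    CANDIDATES = {
        "こうえん": ["公園", "講演", "公演"],  # park / lecture / performance
    }

    # Toy "context": words that make a given candidate more plausible.
    CONTEXT_HINTS = {
        "公園": {"庭", "散歩"},
        "講演": {"大学", "教授"},
    }

    usage_history = defaultdict(int)  # how often each kanji form has been chosen

    def convert(reading, context_words):
        """Pick a kanji spelling for a kana reading, favoring context matches
        and forms the user has chosen before."""
        best, best_score = reading, float("-inf")
        for kanji in CANDIDATES.get(reading, []):
            score = usage_history[kanji]  # prioritize previously used kanji
            score += 2 * len(CONTEXT_HINTS.get(kanji, set()) & set(context_words))
            if score > best_score:
                best, best_score = kanji, score
        usage_history[best] += 1
        return best

    print(convert("こうえん", ["庭", "散歩"]))  # -> 公園 ("park", because of the context)

Real converters score whole sentences with far richer language models, but the candidate-plus-priority structure above is the general shape of the approach the article describes.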

source: Word-processor inventor sues Toshiba over redress, Kyodo News, via Japan Times, December 9, 2007

stroke counts: Taiwan vs. China

One of the myths about Chinese characters is that for each character there is One True Way and One True Way Only for it to be written, with a specific number of strokes in a specific and invariable order. Generally speaking, characters are indeed taught with standard stroke orders and stroke counts (the patterns make it easier to remember how characters are written); but these standards can vary from place to place, even though the characters may look the same. Moreover, people often write characters in their own fashion, though they may not always be aware of this.

Michael Kaplan of Microsoft recently examined the stroke data from standards bodies in China for all 70,195 “ideographs” [sic] in Unicode 5.0 and compared it against “the 54,195 ideographs for which stroke count data was provided by Taiwan standards bodies” to see how much of a difference there was in the stroke counts for the characters that both sides provided data for.

(I’m a bit surprised the two sides have compiled such extensive lists, and I’d love to see them. But that’s another matter.)

He found that 9,768 of these characters (18 percent) have different stroke counts between the two standards, with 9,045 characters differing by 1 stroke, 675 characters by 2 strokes, 44 characters by 3 strokes, 2 characters by 4 strokes, 1 character by 5 strokes, and 1 character by 6 strokes.

Note: This is about stroke counts of matching characters, not about differing stroke counts for traditional and “simplified” characters — e.g., not 國 (11 strokes) vs 国 (8 strokes).
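Out of curiosity, here is a rough sketch in Python of the kind of tally Kaplan describes: take two mappings from character to stroke count and bucket the shared characters by how far apart the counts are. The handful of entries below are stand-ins, not the actual standards data.

    from collections import Counter

    prc_strokes = {"直": 8, "真": 10, "骨": 9, "國": 11}      # stand-in values only
    taiwan_strokes = {"直": 8, "真": 10, "骨": 10, "國": 11}   # stand-in values only

    shared = prc_strokes.keys() & taiwan_strokes.keys()
    diffs = Counter(
        abs(prc_strokes[ch] - taiwan_strokes[ch])
        for ch in shared
        if prc_strokes[ch] != taiwan_strokes[ch]
    )

    print(f"{sum(diffs.values())} of {len(shared)} shared characters differ")
    for gap, count in sorted(diffs.items()):
        print(f"  {count} character(s) differ by {gap} stroke(s)")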

So, is this a case of chabuduoism, or of truly differing standards? The answer is not yet fully clear, but be sure to read Kaplan’s post and the comments there.

variant Chinese characters and Unicode

A submission to the Unicode Consortium’s Ideographic [sic] Variation Database for the “Combined registration of the Adobe-Japan1 collection and of sequences in that collection” is available for review through November 25. This submission, PRI 108, is a revision of PRI 98.

This set “enumerates 23,058 glyphs” and contains 14,664 tetragraphs (Chinese characters / kanji). About three quarters of Unicode pertains to Chinese characters.

Two sets of charts are available: the complete one (4.4 MB PDF), which shows all the submitted sequences, and the partial one (776 KB PDF), which shows “only the characters for which multiple sequences are submitted.”
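For anyone wondering what a registered sequence actually looks like at the code-point level: an ideographic variation sequence is simply a base character followed by one of the variation selectors U+E0100 through U+E01EF. The sketch below, in Python, pairs 葛 with two selectors purely for illustration; the pairings are not taken from the PRI 108 charts.

    BASE = "\u845B"        # 葛
    VS17 = "\U000E0100"    # first variation selector in the supplementary range
    VS18 = "\U000E0101"

    seq_a = BASE + VS17    # one glyph variant of the base character
    seq_b = BASE + VS18    # a different variant of the same base character

    for label, seq in [("seq_a", seq_a), ("seq_b", seq_b)]:
        codepoints = " ".join(f"U+{ord(c):04X}" for c in seq)
        print(label, codepoints)

    # Fonts without IVS support show both sequences as the same character;
    # an IVS-aware font can display a distinct glyph for each.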

Below is a more or less random sample of some of the tetragraphs.

Initially I was going to combine this announcement with a rant against Unicode’s continued misuse of the term “ideographic.” But I’ve decided to save that for a separate post.

[image: sample of some of the kanji variants in the proposal]