Show HN: Mandarin Word Segmenter with Translation

48 points | routerl | 1 year ago | mandobot.netlify.app

I've built mandoBot, a web app that segments and translates Mandarin Chinese text. This is a Django API (using Django Ninja and PostgreSQL) and a Next.js front-end (with TypeScript and Chakra). For a sample of what this app does, head to https://mandobot.netlify.app/?share_id=e8PZ8KFE5Y. This is my presentation of the first chapter of a classic story from the Republican era of Chinese fiction, Diary of a Madman by Lu Xun. Other chapters are located in the "Reading Room" section of the app.

This app exists because reading Mandarin is very hard for learners (like me), since Mandarin text does not separate words using spaces in the same way Western languages do. But extensive reading is the most effective way to learn vocabulary and grammar. Thus, learning Mandarin by reading requires first memorizing hundreds or thousands of words, before you can even know where one word ends and the next word begins.

I'm solving this problem by allowing users to input Mandarin text, which is then computationally segmented and machine translated by my server, which also adds dictionary definitions for each word and character. The hard part is the segmentation: it turns out that "Chinese Word Segmentation"[0] is the central problem in Chinese Natural Language Processing; no current solutions reach 100% accuracy, whether they're from Stanford[1], Academia Sinica[2], or Tsinghua University[3]. This includes every LLM currently available.

I could talk about this for hours, but the bottom line is that this app is a way to develop my full-stack skills; the backend should be fast, accurate, secure, well-tested, and well-documented, and the front-end should be pretty, secure, well-tested, responsive, and accessible. I am the sole developer, and I'm open to any comments and suggestions: roberto.loja+hn@gmail.com

Thanks HN!

[0] https://en.wikipedia.org/wiki/Chinese_word-segmented_writing

[1] https://nlp.stanford.edu/software/segmenter.shtml

[2] https://ckip.iis.sinica.edu.tw/project/ws

[3] http://thulac.thunlp.org/

35 comments

gwd|1 year ago

> Thus, learning Mandarin by reading requires first memorizing hundreds or thousands of words, before you can even know where one word ends and the next word begins.

That's not been my experience at all: as long as the content I'm reading is at the right level, I've been able to learn to segment as my vocabulary has grown, and there are always only a few new words that I haven't yet learned to recognize in context. Having a good built-in dictionary wherever you're reading (e.g., a Chrome plugin, or Pleco, or whatever) has been helpful here.

My fear would be that the longer you put off learning to segment in your head, the harder it will be.

My advice for this would be that you present the text as you'd normally see it (i.e., no segmentation), but add aids to help learners see or understand the segmentation. At the very least, you could have the dictionary pop-up operate on the level of the full segment rather than individual characters; and you could consider having it so that as you mouse over a character, it draws a little line under or border around the characters in the same segment. That could give your brain the little hint it needs to "see" the words segmented in situ.

AlchemistCamp|1 year ago

I had yet another experience. Before Chrome existed, I learned thousands of words by hearing them before I could read them.

A lot of the pain I see in foreigners learning Chinese is that they try to tackle the written language too early. Actually, I think that's sub-optimal in any language, but it's even more expensive in a language like Chinese or Thai where word segmentation isn't a part of the writing system. I totally get it since characters are cool and curiosity about them draws many learners to the language, but it's a lot easier to take on one challenge at a time!

imron|1 year ago

Nice work OP.

I’ve done a fair amount of Chinese language segmentation programming - and yeah it’s not easy, especially as you reach for higher levels of accuracy.

You need to put in significant effort for gains of less than a few percentage points in accuracy.

For my own tools which focus on speed (and used for finding frequently used words in large bodies of text) I ended up opting for a first longest match algorithm.

It has a relatively high error rate, but it’s acceptable if you’re only looking for the first few hundred frequently used words.
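The "first longest match" approach described above (also called forward maximum matching) can be sketched in a few lines. This is a toy illustration with a stand-in lexicon, not the commenter's actual tool; a real implementation would load a large wordlist such as CC-CEDICT:

```python
# Toy sketch of first-longest-match (forward maximum matching)
# segmentation. LEXICON is a stand-in for a real dictionary.
LEXICON = {"中国", "中国人", "人民", "我", "是"}
MAX_WORD_LEN = max(len(w) for w in LEXICON)

def segment_fmm(text: str) -> list[str]:
    """Greedy left-to-right segmentation: at each position, take the
    longest dictionary word that matches; fall back to one character."""
    tokens = []
    i = 0
    while i < len(text):
        match = text[i]  # single-character fallback
        for length in range(min(MAX_WORD_LEN, len(text) - i), 1, -1):
            candidate = text[i:i + length]
            if candidate in LEXICON:
                match = candidate
                break
        tokens.append(match)
        i += len(match)
    return tokens
```

The error mode is easy to see: given 中国人民 ("the Chinese people"), the greedy match grabs 中国人 first and leaves a stray 民, instead of the intended 中国 + 人民. That's the error rate the commenter is trading for speed.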

What segmenter are you using, or have you developed your own?

routerl|1 year ago

Thanks for the kind words!

I'm using Jieba[0] because it hits a nice balance of fast and accurate. But I'm initializing it with a custom dictionary (~800k entries), and have added several layers of heuristic post-segmentation. For example, Jieba tends to split up chengyu into two words, but I've decided they should be displayed as a single word, since chengyu are typically a single entry in dictionaries.

[0] https://github.com/fxsjy/jieba
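One way a post-segmentation pass like the chengyu fix could work (this is a hypothetical sketch, not the author's actual pipeline) is to re-join adjacent tokens whenever their concatenation is itself a dictionary entry:

```python
def merge_adjacent(tokens: list[str], lexicon: set[str]) -> list[str]:
    """Re-join adjacent tokens whose concatenation is itself a
    dictionary entry, e.g. a chengyu the segmenter split in two."""
    merged = []
    i = 0
    while i < len(tokens):
        if i + 1 < len(tokens) and tokens[i] + tokens[i + 1] in lexicon:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged
```

For example, if the segmenter splits the chengyu 画蛇添足 into 画蛇 + 添足, a pass over the tokens with the full dictionary as `lexicon` would glue it back together, matching the single-entry treatment dictionaries give chengyu.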

greyman|1 year ago

OP, thank you for your work, I will continue to watch it.

I tried to build something similar, but what I didn't crack, and now think is crucial, is the front-end: yes, word segmenting is useful, but if I have to click on each word to see its meaning, then for how I learn Chinese by reading texts I still find the Zhongwen Chrome extension more useful, since I see the English meaning more quickly, just by hovering the cursor over the word.

In my project, I was trying to display an English translation under each Chinese word, which I think would require AI to determine the correct translation, since one cannot just put a CC-CEDICT entry there.

P.S.: I don't know how you built your dictionary, but it translated 气功师 as "Aerosolist", which I'm not sure is even a thing. This should actually be two words, not one; the correct segmentation and translation is 气功 师, "qigong master".

routerl|1 year ago

Thanks for the kind words, and the bug report!

The (awful and incorrect) translation you've pointed out comes from the segmenter being too greedy, not finding the (non-existent) word in any dictionary, and therefore dispatching the word to be machine translated, without context. This is the final fallback in the segmentation pipeline, to avoid displaying nothing at all, and my priority right now is making the segmentation pipeline more robust so this rarely (or never) happens, since it sometimes produces hilariously bad results!

rahimnathwani|1 year ago

This is cool. If you haven't already, you might like to take a look at Du Chinese and The Chairman's Bao. They might provide ideas or inspiration.

Also the 'clip reader' feature in Pleco is decent.

Also, supporting simplified as well as traditional might increase your potential audience.

routerl|1 year ago

It supports traditional and simplified, as well as pinyin and bopomofo :)

It's already possible to switch instantly between pinyin and bopomofo, and I'm working on letting users switch between simplified and traditional, but this is also a non-trivial problem. For now, the app follows the user's lead: if you enter traditional text, it returns traditional text, and the same goes for simplified.
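The "follow the user's lead" behavior could be approximated by script detection over a handful of distinguishing characters. This is a toy sketch with tiny illustrative character sets; real conversion is harder because simplified-to-traditional mappings are one-to-many, which is why tools like OpenCC exist:

```python
# Toy script detection: count characters that exist only in one script.
# The two sets below are small illustrative samples, not exhaustive.
TRAD_ONLY = set("學國語說體醫")
SIMP_ONLY = set("学国语说体医")

def guess_script(text: str) -> str:
    """Guess whether text is traditional or simplified Chinese by
    counting script-exclusive characters; many short strings are
    genuinely ambiguous because most characters are shared."""
    trad = sum(1 for ch in text if ch in TRAD_ONLY)
    simp = sum(1 for ch in text if ch in SIMP_ONLY)
    if trad > simp:
        return "traditional"
    if simp > trad:
        return "simplified"
    return "ambiguous"
```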

cannam|1 year ago

This was my attempt at doing something a little bit like it, 27 years ago. It's mostly interesting as a historical artifact - certainly yours is a lot more sophisticated and much much prettier! This one just does greedy matching against CEDICT.

https://all-day-breakfast.com/chinese/

What is kind of interesting is that the script itself (a single Perl CGI script) has survived the passage of time better than the text documenting it.

Besides all the broken links, the text refers throughout to Big-5 encoding, and the form at https://all-day-breakfast.com/chinese/big5-simple.html has a warning that the popups only work in Netscape or MSIE 4. You can now ignore all of that because browsers are more encoding aware (it still uses Big-5 internally but you can paste in Unicode) and the popups work anywhere.

ipnon|1 year ago

Awesome! Your Chinese must be pretty good now. Do you still have the Perl source? How did you even devise such a project in 1998?

thaumasiotes|1 year ago

> Thus, learning Mandarin by reading requires first memorizing hundreds or thousands of words, before you can even know where one word ends and the next word begins.

That's not true at all; you can go a long way just by clicking on characters in Pleco, and Pleco's segmentation algorithm is awful. (Specifically, it's greedy "find the longest substring starting at the selected character for which a dictionary entry exists".)

Sometimes I go back through very old conversations in Chinese and notice that I completely misunderstood something. That's an unfortunate but normal part of the language-learning process. You don't need full comprehension to learn. What would babies do?

wonnage|1 year ago

That's not terrible for a dictionary where you control what you're looking up. I found poor segmentation to be way more annoying when trying to do stuff like selecting words for highlighting on my Kindle - it invariably breaks up a chengyu or selects nonsensical groups of words. Same with the Mac 3D Touch quick look.

carom|1 year ago

Did you find the library jieba? That is what I am using for segmentation. It seems to work fine on simplified despite not advertising it.

routerl|1 year ago

I did! Jieba is the first step in my segmentation pipeline. As far as I can tell, Jieba's default config tends to work better for simplified, but in my case the custom dictionary I feed it has significantly more traditional entries than simplified entries, especially for historical terms and slang.

mindvirus|1 year ago

This is great. I'd love it for flashcard creation - paste in a block of text I'm reading and extract vocabulary from it.

routerl|1 year ago

OP here: I'm adding a feature that will allow users to save specific words to lists and export those lists in formats that can be imported into flashcard apps.
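For what such an export could look like (a hypothetical sketch, not the app's actual code): Anki's import dialog accepts tab-separated text files, so a saved word list can be serialized as one front/back pair per line. The `hanzi`/`pinyin`/`definition` field names are assumptions for illustration:

```python
import csv
import io

def export_anki_tsv(words: list[dict]) -> str:
    """Serialize saved words as tab-separated lines suitable for
    Anki's text-file import (front field, back field per row)."""
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter="\t", lineterminator="\n")
    for w in words:
        front = w["hanzi"]
        back = f'{w["pinyin"]}: {w["definition"]}'
        writer.writerow([front, back])
    return buf.getvalue()
```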

sarabande|1 year ago

纔 in this case should use the definition of 才 (cai2), not (shan1), which is extremely uncommon. Otherwise, cool app!

routerl|1 year ago

Could you post the text you used? This kind of thing goes straight into my unit tests.

I'm also working on showing all the pronunciations/definitions for a given hanzi; it should be ready later this week.
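Showing every reading for a hanzi amounts to indexing the dictionary by headword rather than by (headword, pinyin) pair. A sketch over CC-CEDICT-style lines ("Trad Simp [pinyin] /def/def/") follows; the two sample entries are illustrative and abbreviated, not verbatim CC-CEDICT:

```python
import re
from collections import defaultdict

# CC-CEDICT-style line: "Traditional Simplified [pinyin] /def/def/"
ENTRY_RE = re.compile(r"^(\S+) (\S+) \[([^\]]+)\] /(.+)/$")

def readings_by_headword(lines: list[str]) -> dict[str, list[tuple[str, str]]]:
    """Index dictionary entries by headword so that every recorded
    reading (pinyin plus definitions) for a character is retrievable."""
    index = defaultdict(list)
    for line in lines:
        if line.startswith("#"):
            continue  # dictionary files often start with comment lines
        m = ENTRY_RE.match(line)
        if not m:
            continue
        trad, simp, pinyin, defs = m.groups()
        for key in {trad, simp}:  # set: avoid double-adding when trad == simp
            index[key].append((pinyin, defs))
    return dict(index)

# Illustrative, abbreviated entries for the 纔 case discussed above.
sample = [
    "纔 才 [cai2] /just now/only then/",
    "纔 纔 [shan1] /rare archaic reading/",
]
```

Looking up 纔 in the resulting index yields both the common cai2 reading and the rare shan1 one, which is the behavior the comment above asks for.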

georgeplusplus|1 year ago

Have you used the app Pleco?

That app has been invaluable as someone learning Chinese.

That app breaks down Mandarin sentences into individual characters. I believe it’s made by a Taiwanese developer too.

I tried your app with a few sentences and it works really well!

AlchemistCamp|1 year ago

I don't think it's a Taiwanese developer. If so, he got a lot of stroke orders wrong for traditional characters as per the MOE standards.

Anytime I need to look up a character or word, I go to https://www.moedict.tw/ first. Pleco is still great for having so many add-ons (including dictionaries) and, from what I've heard, some decent graded readers.

routerl|1 year ago

Thank you, and thanks for checking it out!

I use Pleco almost every day :)

bnly|1 year ago

Nicely done, this looks quite useful!

maxglute|1 year ago

Very well executed.