Lingua-Go, the most accurate language detection for Go

112 points | acrophobic | 4 years ago | github.com

24 comments

tgv|4 years ago

I see Dutch performs badly. I wouldn't be surprised if that's because of bad/noisy training data. Dutch web content contains an awful lot of English, which pollutes recognition. Cross-check the Dutch tokens with an English dictionary to be sure (although there is quite some overlap for frequent words, e.g. "is", "we", "are", "have", "bent", "had", "brief", etc., and rare ones like "keeshond").

BTW, the test statistic for recognizing individual words isn't interesting, unless you sample/weigh by word frequency.

yorwba|4 years ago

My guess is that it has trouble distinguishing between Afrikaans and Dutch, Indonesian and Malay, and other similar pairs.

rippeltippel|4 years ago

> This engine first determines the alphabet of the input text and searches for characters which are unique in one or more languages. If exactly one language can be reliably chosen this way, the statistical model is not necessary anymore.

Can this be a problem? If a text in Language_A includes names/words of Language_B, only relying on special characters would wrongly classify the entire text as Language_B.
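The failure mode described here can be illustrated with a toy version of the unique-character rule. The character table and language assignments below are made-up assumptions for the sketch, not Lingua's actual rules:

```go
package main

import "fmt"

// Toy table of characters assumed (for this sketch only) to be unique to
// one candidate language. Real detectors use much larger, corpus-derived sets.
var uniqueChars = map[rune]string{
	'é': "French",
	'ß': "German",
	'ñ': "Spanish",
}

// detectByUniqueChars returns a language only if all "unique" characters
// in the text agree on a single language; otherwise it signals that the
// statistical model should decide instead.
func detectByUniqueChars(text string) (string, bool) {
	lang := ""
	for _, r := range text {
		if l, ok := uniqueChars[r]; ok {
			if lang == "" {
				lang = l
			} else if lang != l {
				return "", false // conflicting evidence: fall back to statistics
			}
		}
	}
	return lang, lang != ""
}

func main() {
	// An English sentence mentioning a French name would be tagged French
	// by this rule, which is exactly the concern raised above.
	fmt.Println(detectByUniqueChars("René Descartes was a philosopher"))
}
```

A detector would need some guard here, e.g. only taking the shortcut when enough of the text's characters participate in the decision.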

kccqzy|4 years ago

What's a good way to detect languages in mixed-language passages? What's the state of art here?

For example, given "'I think, therefore I am' is the first principle of René Descartes's philosophy that was originally published in French as je pense, donc je suis.", is there a library that would tell me the main passage is in English, but contains fragments in French?

yorwba|4 years ago

With an ngram-based model like this one, you can just feed it short substrings, since it doesn't take the larger context into account anyway. There'll be some problems at the boundary, because e.g. "as" is a word in both languages.
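A minimal sketch of that per-fragment idea, with tiny made-up word lists standing in for a real n-gram model (the lists and names here are illustrative assumptions, not Lingua's data):

```go
package main

import (
	"fmt"
	"strings"
)

// Tiny per-language word lists standing in for a statistical model.
var wordLists = []struct {
	lang  string
	words map[string]bool
}{
	{"English", map[string]bool{"i": true, "think": true, "therefore": true, "am": true}},
	{"French", map[string]bool{"je": true, "pense": true, "donc": true, "suis": true}},
}

// classifyWord labels one token. Ambiguous words like "as" would match
// more than one list in a fuller model, which is the boundary problem
// mentioned above.
func classifyWord(w string) string {
	w = strings.ToLower(strings.Trim(w, ".,'\""))
	for _, wl := range wordLists {
		if wl.words[w] {
			return wl.lang
		}
	}
	return "Unknown"
}

func main() {
	for _, w := range strings.Fields("I think, therefore je pense") {
		fmt.Printf("%s: %s\n", w, classifyWord(w))
	}
}
```

Running windows of a few words instead of single tokens would smooth out some of the boundary noise at the cost of blurring exactly where the language switches.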

koeng|4 years ago

I appreciate how direct and clear this library is about what it does and who it's for. I have no need for it now, but after one paragraph of reading I'll remember its name for later.

blurker|4 years ago

Yes, that puts it really well, and I agree. Too often I'll end up at a project and still have little understanding of what it does or why it exists, even after reading the README. This project is super clear! But maybe that's what we should expect from a natural language library.

wodenokoto|4 years ago

> Most libraries only use n-grams of size 3 (trigrams) which is satisfactory for detecting the language of longer text fragments consisting of multiple sentences. For short phrases or single words, however, trigrams are not enough.

I only dabbled in language detection at a workshop at a conference years ago, but I was very impressed how well such models work on short text with only bigrams.

Maybe bi- and trigrams fall short once you expand to over 70 languages, but I just wanted to say that this is a use case where very simple models can get you really far.

If you see a blog post where a language detection problem is solved with deep learning, chances are the author doesn't know what they are doing (Towards Data Science, I'm looking at you!) or it's a tutorial for working with an NN framework.
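For concreteness, a character-bigram classifier of the kind described can be sketched in a few dozen lines. The one-line "training" samples below are placeholder assumptions; real models are trained on large corpora:

```go
package main

import (
	"fmt"
	"math"
)

// train builds character-bigram counts from a sample text.
func train(sample string) map[string]float64 {
	counts := map[string]float64{}
	runes := []rune(sample)
	for i := 0; i+1 < len(runes); i++ {
		counts[string(runes[i:i+2])]++
	}
	return counts
}

// score sums add-one-smoothed log-probabilities of the text's bigrams
// under one language model.
func score(text string, model map[string]float64, total float64) float64 {
	s := 0.0
	runes := []rune(text)
	for i := 0; i+1 < len(runes); i++ {
		s += math.Log((model[string(runes[i:i+2])] + 1) / (total + 1))
	}
	return s
}

func main() {
	samples := map[string]string{
		"English": "the quick brown fox jumps over the lazy dog and then the end",
		"Dutch":   "de snelle bruine vos springt over de luie hond en dan het einde",
	}
	models := map[string]map[string]float64{}
	totals := map[string]float64{}
	for lang, s := range samples {
		models[lang] = train(s)
		for _, c := range models[lang] {
			totals[lang] += c
		}
	}
	// Pick the language whose bigram model gives the short text the
	// highest score.
	text := "over de hond"
	best, bestScore := "", math.Inf(-1)
	for lang, m := range models {
		if sc := score(text, m, totals[lang]); sc > bestScore {
			best, bestScore = lang, sc
		}
	}
	fmt.Println(best)
}
```

Even with these tiny samples, the Dutch model wins on the Dutch phrase because more of its bigrams were seen in training, which is the whole trick.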

Radim|4 years ago

My experience is the opposite: character ngram models work "OK" on academic tasks and clean corpora. Not so much when unleashed on real data.

By "real", I mean texts in a mix of multiple languages (super common on the web); short texts; texts in a different (unknown) language where ngrams don't know how to say "I don't know" and return rubbish instead; texts in close languages; etc.

Going "deep learning" is not the only alternative. Even simpler methods can work significantly better, while being fully interpretable:

https://link.springer.com/chapter/10.1007/978-3-642-00382-0_...

akie|4 years ago

We're using libraries like this to try to guess the language of a book based on title alone (in case no other information is readily available), and trigram-based algorithms get it wrong often enough for it to be noticeable. I will look into replacing our current library with this one, it seems better suited for the task at hand.

doubtfuluser|4 years ago

How does it actually compare to fastText [1] in performance? Building an interface to that in Go shouldn't be too complicated. The claim that all language identification (LID) relies on n-grams is bold; there has been a switch to pure neural-network-based approaches.

[1] https://fasttext.cc/docs/en/language-identification.html

pemistahl|4 years ago

Hello everyone,

I'm the author of Lingua. Thank you for sharing my work and making it known in the NLP world.

Apart from the Go implementation, I've implemented the library in Kotlin, Python and Rust. Just take a look at my profile if you are interested: https://github.com/pemistahl

debdut|4 years ago

I was searching for this, but for programming languages and frameworks.