I see Dutch performs badly. I wouldn't be surprised if that's because of bad/noisy training data. Dutch web content contains an awful lot of English, which pollutes recognition. Cross-check the Dutch tokens against an English dictionary to be sure (although there is considerable overlap for frequent words, e.g. "is", "we", "are", "have", "bent", "had", "brief", etc., and for rare ones like "keeshond").
BTW, the test statistic for recognizing individual words isn't interesting, unless you sample/weigh by word frequency.
> This engine first determines the alphabet of the input text and searches for characters which are unique in one or more languages. If exactly one language can be reliably chosen this way, the statistical model is not necessary anymore.
Couldn't this be a problem? If a text in Language_A includes names or words from Language_B, relying only on special characters would wrongly classify the entire text as Language_B.
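To make the concern concrete, here's a toy sketch of such an alphabet-only shortcut (the character table and function are invented for illustration; this is not Lingua's actual implementation):

```go
package main

import "fmt"

// uniqueChars maps characters to the single language this toy rule set
// treats them as unique to.
var uniqueChars = map[rune]string{
	'ß': "German",
	'ñ': "Spanish",
	'ő': "Hungarian",
}

// detectByAlphabet mimics an alphabet-only shortcut: the first character
// that is unique to one language decides the whole text.
func detectByAlphabet(text string) (string, bool) {
	for _, r := range text {
		if lang, ok := uniqueChars[r]; ok {
			return lang, true
		}
	}
	return "", false
}

func main() {
	// An English sentence that merely mentions a German street name:
	text := "We stayed on Schloßstraße during our week in Berlin."
	if lang, ok := detectByAlphabet(text); ok {
		fmt.Println("classified as", lang) // prints "classified as German"
	}
}
```

A statistical fallback for texts like this one is exactly why relying on unique characters alone isn't enough.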
What's a good way to detect languages in mixed-language passages? What's the state of art here?
For example, given "'I think, therefore I am' is the first principle of René Descartes's philosophy that was originally published in French as je pense, donc je suis.", is there a library that would tell me the main passage is in English, but contains fragments in French?
With an ngram-based model like this one, you can just feed it short substrings, since it doesn't take the larger context into account anyway. There'll be some problems at the boundary, because e.g. "as" is a word in both languages.
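That substring idea can be sketched as a sliding word-window over the text, scored against per-language character-trigram profiles (the seed texts, window size, and scoring below are illustrative assumptions, not any library's actual API):

```go
package main

import (
	"fmt"
	"strings"
)

// Tiny seed texts standing in for real training corpora; they only
// illustrate the idea, a real profile would come from lots of data.
var seeds = map[string]string{
	"English": "i think therefore i am this is the first principle of the philosophy that was published",
	"French":  "je pense donc je suis la philosophie qui a ete publiee en francais le premier principe",
}

// trigrams counts overlapping character trigrams of a lowercased string.
func trigrams(s string) map[string]int {
	counts := make(map[string]int)
	runes := []rune(strings.ToLower(s))
	for i := 0; i+3 <= len(runes); i++ {
		counts[string(runes[i:i+3])]++
	}
	return counts
}

// score counts how many of the window's trigrams also occur in a profile.
func score(window string, profile map[string]int) int {
	total := 0
	for g, n := range trigrams(window) {
		if profile[g] > 0 {
			total += n
		}
	}
	return total
}

// classifyWindows slides a fixed-size word window over the text and labels
// each window with the best-scoring language.
func classifyWindows(text string, size int) []string {
	profiles := make(map[string]map[string]int)
	for lang, seed := range seeds {
		profiles[lang] = trigrams(seed)
	}
	var labels []string
	words := strings.Fields(strings.ToLower(text))
	for i := 0; i+size <= len(words); i += size {
		window := strings.Join(words[i:i+size], " ")
		best, bestScore := "unknown", 0
		for lang, profile := range profiles {
			if s := score(window, profile); s > bestScore {
				best, bestScore = lang, s
			}
		}
		labels = append(labels, best)
	}
	return labels
}

func main() {
	text := "I think therefore I am was originally published in French as je pense donc je suis"
	fmt.Println(classifyWindows(text, 4)) // prints [English English French French]
}
```

The opening windows come back English and the closing ones French; boundary windows containing shared words like "as" are exactly where such a toy scorer wobbles.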
I appreciate how direct and clear this library is about what it does and who it is for. I have no need for it now, but after one paragraph of reading, I'll remember its name for later.
Yes, that puts it really well, and I agree. Too often I end up at a project where I still have little understanding of what it does or why it exists, even after reading the readme. This project is super clear! But maybe that's what we should expect from a natural language library.
> Most libraries only use n-grams of size 3 (trigrams) which is satisfactory for detecting the language of longer text fragments consisting of multiple sentences. For short phrases or single words, however, trigrams are not enough.
I only dabbled in language detection at a workshop at a conference years ago, but I was very impressed by how well such models work on short text with only bigrams.
Maybe bi- and trigrams fall short once you expand to over 70 languages, but I just wanted to say that this is a use case where very simple models can get you really far.
If you see a blog post where a language detection problem is solved with deep learning, chances are the author doesn't know what they are doing (Towards Data Science, I'm looking at you!) or it's a tutorial for working with an NN framework.
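As a toy illustration of that point, here's a minimal bigram-based classifier (the seed texts, ASCII-transliterated umlauts, and cosine scoring are my own illustrative choices, not taken from any particular library):

```go
package main

import (
	"fmt"
	"math"
	"strings"
)

// Tiny seed texts standing in for real training corpora; umlauts are
// transliterated (oe, ae) to keep the toy ASCII-only.
var seeds = map[string]string{
	"English": "this is a plain english sentence with many of the most common english words that we see every day",
	"German":  "dies ist ein ganz gewoehnlicher deutscher satz mit vielen der haeufigsten deutschen woerter die wir jeden tag sehen",
}

// bigramProfile returns relative character-bigram frequencies of a string.
func bigramProfile(s string) map[string]float64 {
	counts := make(map[string]float64)
	runes := []rune(strings.ToLower(s))
	var total float64
	for i := 0; i+2 <= len(runes); i++ {
		counts[string(runes[i:i+2])]++
		total++
	}
	for g := range counts {
		counts[g] /= total
	}
	return counts
}

// cosine computes cosine similarity between two sparse frequency vectors.
func cosine(a, b map[string]float64) float64 {
	var dot, na, nb float64
	for g, x := range a {
		dot += x * b[g]
		na += x * x
	}
	for _, y := range b {
		nb += y * y
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// classify picks the language whose seed profile is most similar to the text.
func classify(text string) string {
	p := bigramProfile(text)
	best, bestSim := "unknown", 0.0
	for lang, seed := range seeds {
		if sim := cosine(p, bigramProfile(seed)); sim > bestSim {
			best, bestSim = lang, sim
		}
	}
	return best
}

func main() {
	fmt.Println(classify("the most common english words"))
	fmt.Println(classify("der haeufigsten deutschen woerter"))
}
```

Everything here is inspectable: you can print the exact bigrams that pushed a decision one way or the other, which is much harder to do with a neural model.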
My experience is the opposite: character ngram models work "OK" on academic tasks and clean corpora. Not so much when unleashed on real data.
By "real", I mean texts in a mix of multiple languages (super common on the web); short texts; texts in a different (unknown) language where ngrams don't know how to say "I don't know" and return rubbish instead; texts in close languages; etc.
Going "deep learning" is not the only alternative. Even simpler methods can work significantly better, while being fully interpretable: https://link.springer.com/chapter/10.1007/978-3-642-00382-0_...
We're using libraries like this to guess the language of a book from its title alone (in case no other information is readily available), and trigram-based algorithms get it wrong often enough to be noticeable. I will look into replacing our current library with this one; it seems better suited for the task at hand.
How does it actually compare to fastText [1] in performance?
Building an interface to that in Go shouldn't be too complicated.
The claim that all language identification (LID) relies on n-grams is bold; there has already been a switch to pure neural-network-based approaches.
[1] https://fasttext.cc/docs/en/language-identification.html
I'm the author of Lingua. Thank you for sharing my work and making it known in the NLP world.
Apart from the Go implementation, I've implemented the library in Kotlin, Python and Rust. Just take a look at my profile if you are interested: https://github.com/pemistahl
In general, language detection is surprisingly hard. There is an LSTM-based implementation, https://github.com/AU-DIS/LSTM_langid, which should be better than n-grams.
alexott|4 years ago
I'll try to find time to do it myself, but most probably only tomorrow.
Xeoncross|4 years ago
I see https://github.com/google/cld3, but how does this compare with https://github.com/CLD2Owners/cld2, which is used by the large https://commoncrawl.org project to classify billions of samples from the whole internet?
akreal|4 years ago
https://github.com/pemistahl/lingua-py#4-how-good-is-it
CLD 2 seems to be slightly less accurate than CLD 3 on average.