Why wouldn't you want an LLM for a language learning tool? Language is one of things I would trust an LLM completely on. Have you ever seen ChatGPT make an English mistake?
Grammarly is all in on AI and recently started recommended splitting "wasn't" and added the contraction to the word it modified. Example: "truly wasn't" becomes "was trulyn't"
Hm ... I wonder, is Grammarly also responsible for the flood of contraction of lexical "have" the last few years? It's standard in British English, but outside of poetry it is proscribed in almost all other dialects (which only permit contraction of auxiliary "have").
Even in British I'm not sure how widely they actually use it - do they say "I've a car" and "I haven't a car"?
Yeah, I agree. An open-source LLM-based grammar checker with a user interface similar to Grammarly is probably what I'm looking for. It doesn't need to be perfect (none of the options are); it just needs to help me become a better writer by pointing out issues in my text. I can ignore the false positives, and as long as it helps improve my text, I don't mind if it doesn't catch every single issue.
Using an LLM would also help make it multilingual. Both Grammarly and Harper only support English and will likely never support more than a few dozen very popular languages. LLMs could help cover a much wider range of languages.
I tried to use one LLM based tool to rewrite sentence in more official corporate form, and it rewrote something like "we are having issues with xyz" into "please provide more information and I'll do my best to help".
LLMs are trained so hard to be helpful that it's really hard to contain them into other tasks
uh. yes? it's far from uncommon, and sometimes it's ludicrously wrong. Grammarly has been getting quite a lot of meme-content lately showing stuff like that.
it is of course mostly very good at it, but it's very far from "trustworthy", and it tends to mirror mistakes you make.
Do you have any examples? The only time I noticed an LLM make a language mistake was when using a quantized model (gemma) with my native language (so much smaller training data pool).
healsdata|8 months ago
https://imgur.com/a/RQZ2wXA
o11c|8 months ago
Even in British I'm not sure how widely they actually use it - do they say "I've a car" and "I haven't a car"?
akdev1l|8 months ago
Destiner|8 months ago
Has to be a bug in their rule-based system?
marginalia_nu|8 months ago
InsideOutSanta|8 months ago
Using an LLM would also help make it multilingual. Both Grammarly and Harper only support English and will likely never support more than a few dozen very popular languages. LLMs could help cover a much wider range of languages.
Szpadel|8 months ago
LLMs are trained so hard to be helpful that it's really hard to contain them into other tasks
Groxx|8 months ago
it is of course mostly very good at it, but it's very far from "trustworthy", and it tends to mirror mistakes you make.
perching_aix|8 months ago
dartharva|8 months ago