(no title)
f38zf5vdt | 8 months ago
The only thing that actually worked was knowing the target language and sitting down with multiple LLMs, going through the translation one sentence at a time with a translation memory tool wired in.
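Roughly what that loop looks like (a minimal Python sketch; llm_translate is a stand-in for whatever model API you actually use, and the "translation memory" here is just an exact-match dict on disk -- real TM tools do fuzzy matching):

    import json
    import pathlib

    TM_PATH = pathlib.Path("tm.json")  # hypothetical translation-memory file

    def load_tm():
        # load previously approved translations, if any
        return json.loads(TM_PATH.read_text()) if TM_PATH.exists() else {}

    def save_tm(tm):
        TM_PATH.write_text(json.dumps(tm, ensure_ascii=False, indent=2))

    def llm_translate(sentence, target_lang):
        # placeholder: wire in whatever model API(s) you actually use here
        return f"[{target_lang} draft of: {sentence}]"

    def translate_document(sentences, target_lang):
        tm = load_tm()
        out = []
        for src in sentences:
            if src in tm:
                # reuse the previously approved translation verbatim
                out.append(tm[src])
                continue
            draft = llm_translate(src, target_lang)
            print(f"SRC: {src}\nLLM: {draft}")
            # human review, one sentence at a time
            fixed = input("edit (enter to accept): ").strip() or draft
            tm[src] = fixed  # store the human-approved version
            out.append(fixed)
        save_tm(tm)
        return out

    if __name__ == "__main__":
        print(translate_document(["The cat sat on the mat."], "de"))

The point is that every human-approved sentence goes back into the TM, so repeated phrasing stays consistent across the document.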
The LLMs are good, but they make a lot of strange mistakes a human never would: weird grammatical adherence to English structures, false-friend errors that no bilingual would make, and so on. Bizarrely, many of these were not caught when cross-checking between LLMs -- sometimes I would get _increasingly_ unnatural output instead of more natural output.
This is not just for English to Asian languages; it happens even for English to German or French. I shipped something to a German editor and he rewrote 50% of the lines.
LLMs are good editors and good at suggesting alternatives, but I've found that if you can't actually read your target language to some degree, you're lost in the woods.
crazygringo | 8 months ago
I have been astounded at the sophistication of LLM translation, and I haven't encountered a single false-friend example, ever. Maybe it depends a lot on which models you're using? Or maybe it thinks you're trying to have a conversation that code-switches mid-sentence, which is a thing LLMs can do if you want?
f38zf5vdt | 8 months ago