
wtbdqrs | 1 year ago

Isn't any instruction a subclass of inference? And doesn't any phrasing (lexical choice) simply translate "down" to the most heavily weighted tokens, which, depending on the fine-tuning, are the words that are, by consensus and convention, the simplest ones that convey the meaning of the original word in the prompt? Those should also be the least ambiguous, least open-to-interpretation words (again, fine-tuning can broaden the scope). The LLM then fulfills the "translated" instructions step by step and arrives at the correct reasoning, the correct answer, or both.
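
To make the "heaviest values" idea concrete: here is a minimal sketch, assuming the Hugging Face transformers library and the public gpt2 checkpoint (my picks for illustration, nothing the comment specifies). It prints the highest-probability next tokens for two phrasings of the same instruction, which is roughly what "translating down" to the simplest conventional words looks like at the token level.

    # Sketch only, not the comment's method: assumes `pip install transformers torch`
    # and the public "gpt2" checkpoint, both chosen purely for illustration.
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def top_next_tokens(prompt, k=5):
        # Probability distribution over the next token, given the prompt.
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits[0, -1]  # logits at the last position
        probs = torch.softmax(logits, dim=-1)
        top = torch.topk(probs, k)
        return [(tokenizer.decode(int(i)), round(p.item(), 4))
                for i, p in zip(top.indices, top.values)]

    # Two phrasings of the "same" instruction put the weight on different tokens:
    print(top_next_tokens("Summarize the following text:"))
    print(top_next_tokens("Give me the gist of the following text:"))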

Details and technicalities, especially liminal ones, aren't backed by the same convention and consensus as the name of the domain (the current "set") they are supposed to be interpreted in.

So almost all LLM mistakes can be blamed on the lack of variety in human translations. Multiple translations are only common for subtitles, manga, and manhwa as far as I know, or when someone proficient and passionate in two languages reads a bad or weak translation of a (usually classic) novel. Why the fuck would a human properly retranslate auto-generated documentation or Google's dev blog? Or books on logic, on any science, books on art and aesthetics, and whatnot. Technical people don't need to care because, practically, there is no room for interpretation in algorithms and the rest of the code, except when a programming language does something weird on your (or someone's) machine, which by design isn't that common.
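
The "no interpretation in code" point does have a well-known exception in Python itself (my example, not one from the comment): since Python 3.3, string hashes are salted per process, so anything that leans on the iteration order of a set of strings can change between runs, even though the algorithm itself stays fixed. A minimal sketch:

    # Deterministic vs. run-dependent behavior in plain Python.
    words = {"alpha", "beta", "gamma"}

    # The algorithm itself admits no interpretation: sorted order is fixed.
    print(sorted(words))   # always ['alpha', 'beta', 'gamma']

    # Not guaranteed: set iteration order of strings depends on the per-process
    # hash salt (see PYTHONHASHSEED), so it can differ from run to run.
    print(list(words))
    print(hash("alpha"))   # differs across runs unless PYTHONHASHSEED is set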
