top | item 45935774

didierbreedt | 3 months ago

I’m waiting for an LLM-focused language. We’re already seeing that AI performs better with strongly typed languages. If the priority becomes how an agent can ensure correctness as instructed by a human, things could get interesting. Question is, will humans actually be able to make sense of it? Do we need to?

suddenlybananas | 3 months ago

How could an LLM learn a programming language sufficiently well unless there is already a large corpus of human-written examples of that language?

vbezhenar | 3 months ago

I'm pretty sure ChatGPT could write a program in any language that is similar enough to existing languages. So you could start by translating existing programs.

nrhrjrjrjtntbt | 3 months ago

An LLM could generate such a corpus, right? With feedback mechanisms such as side-by-side tests.
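The side-by-side idea can be sketched as differential testing: keep an existing human-written function as the reference, and accept a machine-translated candidate into the synthetic corpus only if the two agree on many generated inputs. This is a minimal toy (both versions are plain Python here; in the real scenario the candidate would be a program translated into the new language and executed):

```python
import random

# Reference: an existing human-written function (the "source corpus").
def ref_gcd(a, b):
    while b:
        a, b = b, a % b
    return a

# Candidate: stands in for a machine-translated version of the same
# program; accepted only if it matches the reference side by side.
def candidate_gcd(a, b):
    if b == 0:
        return a
    return candidate_gcd(b, a % b)

def side_by_side(ref, cand, trials=1000, seed=0):
    """Differential test: feed both versions the same random inputs
    and compare their outputs."""
    rng = random.Random(seed)
    for _ in range(trials):
        a, b = rng.randrange(1, 10**6), rng.randrange(1, 10**6)
        if ref(a, b) != cand(a, b):
            return False  # translation rejected
    return True  # translation accepted into the corpus

print(side_by_side(ref_gcd, candidate_gcd))  # True
```

A broken translation (say, one that just returns its first argument) fails within the first few trials, so only behavior-preserving translations make it into the training set.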

morkalork | 3 months ago

I've wondered about this too. What would a language look like if it were designed with tokenization in mind? Could you have a denser, more efficient encoding of expressions? At the same time, the language could be more verbose and exacting, because a human wouldn't have to bemoan reading or writing it.
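One way to see why verbosity needn't cost the model anything: if the tokenizer's vocabulary contains whole keywords, a long, exacting keyword is still a single token. A toy greedy longest-match tokenizer over a hypothetical vocabulary (all names and the two example "languages" are invented for illustration):

```python
# Hypothetical vocabulary: long, exacting keywords are single entries.
VOCAB = {"define_pure_function", "returns_integer", "fn", "(", ")",
         "x", "+", "1", "->", "int"}

def tokenize(text, vocab=VOCAB):
    """Greedy longest-match tokenization; unknown characters fall
    back to single-character tokens."""
    tokens, i = [], 0
    while i < len(text):
        match = max((v for v in vocab if text.startswith(v, i)),
                    key=len, default=text[i])
        tokens.append(match)
        i += len(match)
    return tokens

verbose = "define_pure_function(x)returns_integer"
terse = "fn(x)->int"
print(len(tokenize(verbose)), len(tokenize(terse)))  # 5 6
```

The verbose form is nearly four times longer in characters but tokenizes to fewer tokens than the terse one, so a language could be exacting for correctness while staying cheap for the model.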