benjaminwootton | 4 months ago

The bigger issue is that LLMs haven't had much training on Q, as there's little publicly available code. I recently tried to hack some together, and LLMs couldn't string simple pieces of code together.

It’s a bizarre language.

haolez | 4 months ago

I don't think that's the biggest problem. I think it's the tokenizer: it probably does a poor job with array languages.
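A toy sketch of the tokenizer point (not a real LLM tokenizer, and the vocabulary is invented): a greedy longest-match subword tokenizer whose merges are biased toward verbose, mainstream code fragments gets many characters per token on Python but falls back to single characters on terse q/k, so the "shorter" q program costs about as many tokens:

```python
# Hypothetical vocabulary: multi-character merges a subword tokenizer
# trained mostly on mainstream code might plausibly learn.
VOCAB = {"def ", "avg", "sum", "len", "(xs)", "return ", " / "}

def tokenize(text, vocab, max_merge=8):
    """Greedy longest-match: take the longest vocab entry at each position,
    falling back to single characters (as BPE does for rare byte sequences)."""
    tokens, i = [], 0
    while i < len(text):
        for size in range(min(max_merge, len(text) - i), 0, -1):
            piece = text[i:i + size]
            if size == 1 or piece in vocab:
                tokens.append(piece)
                i += size
                break
    return tokens

q_code  = "avg:{(+/x)%#x}"                        # k-style average
py_code = "def avg(xs): return sum(xs) / len(xs)" # equivalent Python

q_toks, py_toks = tokenize(q_code, VOCAB), tokenize(py_code, VOCAB)
print(len(q_code) / len(q_toks), len(py_code) / len(py_toks))
```

The q snippet comes out near one character per token while the Python version packs several, so per unit of meaning the dense notation is not actually cheaper for the model.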

quotemstr | 4 months ago

Perhaps for array languages LLMs would do better operating on a q/APL parse tree (produced with tree-sitter?), with the output compressed back into the traditional array-language line noise just before display, outside the agentic workflow.
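A minimal sketch of that idea, with an invented tree shape rather than anything tree-sitter actually emits: the model reads and writes a verbose, named parse tree, and a renderer compresses it back into terse k/q source only at display time.

```python
# Toy renderer from a verbose parse tree to terse k/q-style source.
# The node labels ("lambda", "fold", "monad", "dyad") are hypothetical,
# not a real tree-sitter grammar.

def render(node):
    """Compress a verbose parse tree back into terse source text."""
    if isinstance(node, str):
        return node                                        # identifier/literal
    op, *args = node
    if op == "lambda":                                     # {body}
        return "{" + render(args[0]) + "}"
    if op == "fold":                                       # (v/x): fold verb over x
        return "(" + args[0] + "/" + render(args[1]) + ")"
    if op == "monad":                                      # vx: unary verb application
        return args[0] + render(args[1])
    if op == "dyad":                                       # LvR: infix verb application
        return render(args[1]) + args[0] + render(args[2])
    raise ValueError(f"unknown node {op!r}")

# k-style average, {(+/x)%#x}, as a tree the model could edit by name:
avg_tree = ("lambda",
            ("dyad", "%",
             ("fold", "+", "x"),     # sum: fold + over x
             ("monad", "#", "x")))   # count of x
print(render(avg_tree))
```

The model never has to emit `(+/x)%#x` token by token; it manipulates named nodes and the renderer handles the line noise.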