The bigger issue is that LLMs haven't had much training on Q, since there's little publicly available code. I recently had to try to hack some together, and LLMs couldn't string even simple pieces of code together.
Perhaps for array languages, LLMs would do a better job operating on a q/APL parse tree (produced with tree-sitter?), with the output compressed back into the traditional array-language line noise only just before display, outside the agentic workflow.
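A minimal sketch of that idea, with hypothetical names throughout: hold the q expression as a verbose, unambiguous AST (the form an LLM would manipulate), and serialize it to the terse q surface syntax only at display time. The `Lit`/`Apply` node types and `to_q` serializer here are illustrative assumptions, not any existing library's API, and real q (adverbs, infix verbs, right-to-left evaluation) would need a much richer tree.

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Lit:
    """Literal atom, e.g. the number 10 (hypothetical node type)."""
    value: str

@dataclass
class Apply:
    """Prefix function application: fn arg1 arg2 ... (hypothetical node type)."""
    fn: str
    args: List["Node"]

Node = Union[Lit, Apply]

def to_q(node: Node) -> str:
    """Serialize the verbose AST back into terse q-style surface syntax."""
    if isinstance(node, Lit):
        return node.value
    return node.fn + " " + " ".join(to_q(a) for a in node.args)

# The LLM would edit this explicit tree...
expr = Apply("sum", [Apply("til", [Lit("10")])])
# ...and only the final display step compresses it:
print(to_q(expr))  # → sum til 10
```

The point of the split is that the model never has to emit the dense glyph soup directly; a deterministic pretty-printer handles that, and a tree-sitter grammar would handle the reverse direction (parsing existing q into the tree).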