exclipy | 7 months ago
I wonder if LLMs can use the type information more like a human with an IDE.
e.g. It generates "(blah blah...); foo." and at that point it is constrained to only generate tokens corresponding to public members of foo's type.
Just like how current-gen LLMs can reliably generate JSON that satisfies a schema, the next gen will be guaranteed to natively generate syntactically and type-correct code.
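A minimal sketch of that idea, assuming a toy vocabulary and logits rather than a real model or tokenizer (VALID_MEMBERS, mask_logits, and all names here are illustrative): mask the logits of every token that is not a public member of foo's type, so sampling can only produce a type-correct completion.

```python
import math

# Suppose the model just emitted "foo." and foo's type exposes these members
# (a hypothetical set for illustration):
VALID_MEMBERS = {"bar", "baz", "qux"}

def mask_logits(logits, vocab):
    """Set the logits of tokens that are not valid members to -inf,
    so sampling/softmax can only pick type-correct next tokens."""
    return [
        score if tok in VALID_MEMBERS else -math.inf
        for tok, score in zip(vocab, logits)
    ]

vocab = ["bar", "frobnicate", "baz", "delete"]
logits = [1.2, 3.5, 0.7, 2.0]
masked = mask_logits(logits, vocab)
# "frobnicate" and "delete" are now impossible to sample;
# only "bar" and "baz" keep their original scores.
```

This is the same mechanism grammar-constrained JSON generation uses, just with a member-name grammar derived from the type checker instead of a schema.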
koolba | 7 months ago
Just throw more GPUs at the problem and generate N responses in parallel and discard the ones that fail to match the required type signature. It’s like running a linter or type check step, but specific to that one line.
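The generate-and-filter approach could be sketched like this, with stand-ins for the model and the type checker (generate() and type_checks() are hypothetical placeholders, not real APIs):

```python
import random

def generate(prompt, seed):
    """Stand-in for one sampled LLM completion of the current line."""
    rng = random.Random(seed)
    return rng.choice(["foo.bar()", "foo.frobnicate()", "foo.baz()"])

def type_checks(code):
    """Stand-in for a per-line type check: only known members pass."""
    return code in {"foo.bar()", "foo.baz()"}

def best_of_n(prompt, n=8):
    """Sample n candidates in parallel (sequentially here for clarity)
    and discard the ones that fail the type check."""
    candidates = [generate(prompt, seed=i) for i in range(n)]
    return [c for c in candidates if type_checks(c)]

survivors = best_of_n("(blah blah...); foo.", n=8)
# Every surviving candidate passes the check; ill-typed ones are dropped.
```

The trade-off versus constrained decoding is cost: this burns n full generations per line and can still come up empty, whereas logit masking guarantees a valid token at every step.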