
0xblacklight | 3 months ago

This is an excellent point: LLMs are autoregressive next-token predictors, and output token quality is a function of input token quality.

Consider that if the only code you get out of the autoregressive token prediction machine is slop, this says more about the quality of your code than about the quality of the autoregressive token prediction machine.
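To make the "autoregressive" part concrete: each generated token is sampled conditioned on everything before it, so the prefix (your code, your prompt) steers every later prediction. A toy sketch of that loop, using a made-up bigram table in place of a real model:

```python
import random

# Hypothetical toy "model": maps the most recent token to candidate next
# tokens. A real LLM conditions on the whole prefix with a neural network;
# this bigram table is only an illustration of the sampling loop.
MODEL = {
    "<start>": ["def"],
    "def": ["add"],
    "add": ["(a,"],
    "(a,": ["b):"],
    "b):": ["return"],
    "return": ["a+b"],
    "a+b": ["<end>"],
}

def generate(model, max_tokens=10):
    """Autoregressive decoding: each step conditions on the tokens so far."""
    tokens = ["<start>"]
    for _ in range(max_tokens):
        candidates = model.get(tokens[-1])
        if not candidates:
            break
        tokens.append(random.choice(candidates))
        if tokens[-1] == "<end>":
            break
    return tokens[1:-1]  # strip the sentinel tokens

print(" ".join(generate(MODEL)))
```

Feed the loop a low-quality prefix (or a degenerate table) and every subsequent token inherits the damage, which is the point being made above.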


acedTrex | 3 months ago

> that this indicates more about the quality of your code

Considering that the "input" to these models is essentially all public code in existence, the direct context input is a drop in the bucket.