It's been noted that LLMs' output quality decays as they ingest more LLM-generated content in their training. Will the same happen for LLM-generated code as more and more of the code on GitHub is generated by LLMs? What then?
The approach has changed. It's all about test/inference-time compute now, and reinforcement learning on top of the base models.
There is no end in sight anymore; training data won't be the limiting factor when reasoning and self-play are the approach.
jdmoreira|1 year ago