rybosworld|19 days ago
They did run out of human-authored training data (depending on who you ask), in 2024/2025. And they still improve.
lelanthran|19 days ago
It seemed to me that improvements due to training (i.e. the model) in 2025 were marginal. The biggest gains were in structuring how the conversation with the LLM goes.
eqvinox|19 days ago
But what asymptote are they approaching? Average code? Good code? Great code?
jmalicki|19 days ago
co_king_3|19 days ago
Let me repeat myself.
I think it goes without saying that they will be writing "good code" in short order.