So that you can be using the current frontier model for the next 8 months instead of twiddling your thumbs waiting for the next one to come out?
I think you (and others) might be misunderstanding his statement a bit. He's not saying that using an old model is harmful in the sense that it outputs bad code -- he's saying it's harmful because some of the lessons you learn will be out of date and not apply to the latest models.
So yes, if you use current frontier models, you'll need to recalibrate and unlearn a few things when the next generation comes out. But in the meantime, you will have gotten 8 months (or however long it takes) of value out of the current generation.
You also don't have to throw away everything you've learned in those 8 months; some of the things you'll subtly pick up will carry over into the next generation as well.
It's not like you need to take a course. The frontier models are the best; just using them and their harnesses, and figuring out what works for your use case, is the 'investing in learning'.
There's not that much learning involved. Modern SOTA models are far more capable than they were even a short while ago. It's quite scary/amazing.
senko|21 days ago
But if you do want to use LLMs for coding now, not using the best models just doesn't make sense.