The silent victory here is that this seems to be built to be faster and cheaper than o3 while still delivering a reasonable capability jump, which is an important point on the scaling curve.
On the other hand, if models are just getting bigger and slower, that's not a good sign for LLMs.
Yeah, this very much feels like "we've made a more efficient/scalable model and we're selling it as the new shiny thing, but it's really just an internal optimization to reduce cost."
smlacy | 6 months ago
reasonableklout | 6 months ago
Not sure why a more efficient/scalable model isn't exciting
hirvi74 | 6 months ago
onlyrealcuzzo | 6 months ago