top | item 46681616

Workaccount2 | 1 month ago

Unless one of the open model labs has a breakthrough, they will always lag. Their main trick is distilling the SOTA models.

People talk about these models like they are "catching up", but they don't see that they are just trailers hitched to a truck, being pulled along.

runako|1 month ago

FWIW this is what Linux and the early open-source databases (e.g. PostgreSQL and MySQL) did.

They usually lagged for large sets of users: Linux was not as advanced as Solaris, PostgreSQL lacked important features contained in Oracle. The practical effect of this is that it puts the proprietary implementation on a treadmill of improvement where there are two likely outcomes: 1) the rate of improvement slows enough to let the OSS catch up or 2) improvement continues, but smaller subsets of people need the further improvements so the OSS becomes "good enough." (This is similar to how most people now do not pay attention to CPU speeds because they got "fast enough" for most people well over a decade ago.)

weslleyskah|1 month ago

You know, this is also the case with Proxmox vs. VMware.

Proxmox became good and reliable enough as an open-source alternative for server management, especially for the Linux enthusiasts out there.

irthomasthomas|1 month ago

DeepSeek 3.2 scores gold at the IMO and others. Google had to use parallel reasoning to do that with Gemini, and the public version still only achieves silver.

skrebbel|1 month ago

How does this work? Do they buy lots of OpenAI credits, hit their API billions of times, and somehow train on the results?
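That is roughly the standard distillation recipe: collect a teacher model's completions over an API, then fine-tune a student on the (prompt, completion) pairs. A minimal sketch of the data-collection step, with the teacher stubbed out (a real pipeline would call a paid API and feed the resulting dataset into a fine-tuning job; all names here are illustrative):

```python
# Sketch of API-based distillation data collection.
# query_teacher is a stand-in for a real API call (e.g. an OpenAI
# chat-completions request); here it is a trivial placeholder.

def query_teacher(prompt: str) -> str:
    # Placeholder "teacher" behavior for illustration only.
    return prompt.upper()

def build_distillation_set(prompts):
    # Each record becomes one supervised fine-tuning example
    # for the student model.
    return [{"prompt": p, "completion": query_teacher(p)} for p in prompts]

dataset = build_distillation_set(["explain the tcp handshake", "write a haiku"])
```

The resulting `dataset` would then be used as supervised fine-tuning data, so the student learns to imitate the teacher's outputs rather than being trained from scratch.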

g-mork|1 month ago

Don't forget the plethora of middleman chat services with liberal logging policies. I've no doubt there is a whole subindustry lurking in here.