top | item 46113628


deeptishukla22 | 3 months ago

What’s happening here feels less like “Chinese models gaining share” and more like a substrate shift driven by cost physics. When inference drops from dollars to cents and quality converges to GPT-4-mini territory, the default stack for early-stage teams flips almost overnight. At that point founders optimize for runway, not sentiment, and open models become the path of least resistance.

The more interesting consequence is that when inference and fine-tuning are essentially free at startup scale, specialization becomes viable again. Instead of generic prompting against a closed API, teams can afford narrow, high-precision models tailored to their domain — something that used to be economically out of reach. Came across this interesting post - https://www.linkedin.com/feed/update/urn:li:activity:7396291...
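The "dollars to cents" point is easy to make concrete with a back-of-envelope calculation. A toy sketch in Python (every price and token volume below is a made-up illustration, not a quote from any provider):

```python
# Hypothetical monthly inference bill at two per-token price points.
# All numbers are illustrative assumptions, not real provider pricing.

def monthly_cost(tokens_per_month: float, price_per_million: float) -> float:
    """Dollar cost for a given monthly token volume at a per-million-token price."""
    return tokens_per_month / 1_000_000 * price_per_million

TOKENS = 500_000_000  # assume an early-stage product pushing ~500M tokens/month

closed_api = monthly_cost(TOKENS, price_per_million=10.0)  # hypothetical closed-API rate
open_model = monthly_cost(TOKENS, price_per_million=0.30)  # hypothetical self-hosted open-model rate

print(f"closed API: ${closed_api:,.0f}/mo")  # $5,000/mo
print(f"open model: ${open_model:,.0f}/mo")  # $150/mo
```

At that spread, the open-model stack stops being an ideological choice and becomes the runway-preserving default the comment describes.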


No comments yet.