top | item 40045264


fotcorn | 1 year ago

I wonder if they have a clear hardware separation between the API, ChatGPT, their smaller-scale experiments, and their large-scale (e.g. GPT-5) training hardware. Or is everything just one big pool of hardware that gets dynamically allocated to jobs depending on demand?

Hardware demand is so high that having GPUs idle is a massive waste, but you also want separation between dev, test, and prod environments, so it's not obvious what to do.
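The usual compromise for this tension is priority tiers with preemption: low-priority experimental jobs soak up idle capacity but get evicted the moment higher-priority production or training work needs the GPUs. A minimal toy sketch of that policy (the `Job`/`Cluster` names and the integer priority scheme are illustrative assumptions, not anything OpenAI has described):

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    priority: int                      # lower number = higher priority (0 = prod)
    name: str = field(compare=False)
    gpus: int = field(compare=False)

class Cluster:
    """Toy GPU pool: admits jobs, preempting lower-priority ones to make room."""

    def __init__(self, total_gpus: int):
        self.total = total_gpus
        self.running: list[Job] = []

    def used(self) -> int:
        return sum(j.gpus for j in self.running)

    def submit(self, job: Job) -> tuple[bool, list[Job]]:
        # Evict the lowest-priority running jobs until the new job fits,
        # but never preempt a job of equal or higher priority.
        preempted: list[Job] = []
        while self.used() + job.gpus > self.total:
            victims = [j for j in self.running if j.priority > job.priority]
            if not victims:
                return False, preempted  # can't fit without breaking guarantees
            victim = max(victims, key=lambda j: j.priority)
            self.running.remove(victim)
            preempted.append(victim)
        self.running.append(job)
        return True, preempted

# An experiment fills the whole cluster; a prod job then preempts it.
cluster = Cluster(8)
cluster.submit(Job(priority=2, name="experiment", gpus=8))
ok, evicted = cluster.submit(Job(priority=0, name="prod-inference", gpus=4))
```

Real schedulers (e.g. Kubernetes PriorityClasses with preemption, or Slurm preemptible partitions) implement essentially this policy, plus checkpointing so evicted training jobs can resume rather than restart.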
