top | item 43198203


bguberfain | 1 year ago

Until GPT-4.5, GPT-4 32K was certainly the heaviest model available at OpenAI. I can imagine the dilemma between keeping it running or stopping it to free GPUs for training new models. This time, OpenAI was upfront about whether it will keep serving the model in the API long-term.


Chamix | 1 year ago

It's interesting to compare the cost of the original gpt-4-32k (0314) vs gpt-4.5:

$60/M input tokens vs $75/M input tokens

$120/M output tokens vs $150/M output tokens
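The per-token rates above can be turned into a per-request comparison. A minimal sketch (model names and the `request_cost` helper are illustrative, not an actual API; prices are the USD-per-million-token figures quoted above):

```python
# Quoted prices in USD per million tokens (input, output).
PRICES = {
    "gpt-4-32k-0314": {"input": 60.0, "output": 120.0},
    "gpt-4.5":        {"input": 75.0, "output": 150.0},
}

def request_cost(model, input_tokens, output_tokens):
    """Cost in USD of one request at the quoted per-million-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 10k-token prompt with a 1k-token completion.
old = request_cost("gpt-4-32k-0314", 10_000, 1_000)  # 0.60 + 0.12 = $0.72
new = request_cost("gpt-4.5", 10_000, 1_000)         # 0.75 + 0.15 = $0.90
print(f"gpt-4-32k: ${old:.2f}, gpt-4.5: ${new:.2f}")
```

At these rates gpt-4.5 works out to exactly 25% more than the original gpt-4-32k on both input and output tokens.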

jsheard | 1 year ago

> or stop it to free GPU for training new models.

Don't they use different hardware for inference and training? AIUI the former is usually done on cheaper GDDR cards and the latter on expensive HBM cards.