top | item 43197977

ekojs | 1 year ago

> Because of this, we’re evaluating whether to continue serving it in the API long-term as we balance supporting current capabilities with building future models.

Seems like it's not going to be deployed for long.

$75.00 / 1M tokens for input

$150.00 / 1M tokens for output

Those are crazy prices.
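For a sense of scale, a quick sketch of per-request cost at the quoted rates (the token counts here are illustrative, not from the thread):

```python
# Cost per request at the quoted rates:
# $75 per 1M input tokens, $150 per 1M output tokens.
INPUT_RATE = 75.00 / 1_000_000    # dollars per input token
OUTPUT_RATE = 150.00 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one API call at the rates above."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 10K-token prompt with a 1K-token completion
print(f"${request_cost(10_000, 1_000):.2f}")  # $0.90
```

So even a single moderately long prompt costs the better part of a dollar at these rates.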

bguberfain | 1 year ago

Until GPT-4.5, GPT-4 32K was certainly the heaviest model OpenAI offered. I can imagine the dilemma between keeping it running and shutting it down to free GPUs for training new models. This time, OpenAI was upfront about the possibility that it won't keep serving it in the API long-term.

Chamix | 1 year ago

It's interesting to compare the cost of the original GPT-4 32K (0314) vs GPT-4.5:

$60/M input tokens vs $75/M input tokens

$120/M output tokens vs $150/M output tokens
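A quick check shows both rates went up by the same factor:

```python
# Ratio of the GPT-4.5 rates to the original GPT-4 32K (0314) rates.
input_ratio = 75.00 / 60.00     # 1.25 -> 25% more per input token
output_ratio = 150.00 / 120.00  # 1.25 -> 25% more per output token
print(input_ratio, output_ratio)  # 1.25 1.25
```

i.e. a uniform 25% price increase across input and output.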

jsheard | 1 year ago

> or stop it to free GPU for training new models.

Don't they use different hardware for inference and training? AIUI the former is usually done on cheaper GDDR cards and the latter is done on expensive HBM cards.

daemonologist | 1 year ago

Imagine if they built a reasoning model with costs like these. Sometimes it seems like they're on a trajectory to create a model which is strictly more capable than I am but which costs 100x my salary to run.

jes5199 | 1 year ago

if you still get a Moore's law halving every couple years, it becomes competitive in, uh, about thirteen years?
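Sanity-checking that figure: closing a 100x cost gap (from the comment above) with a halving every two years takes log2(100) halvings, so:

```python
import math

cost_ratio = 100       # model cost relative to a human salary, per the parent comment
halving_period = 2     # years per cost halving (Moore's-law style assumption)

years = math.log2(cost_ratio) * halving_period
print(f"{years:.1f} years")  # ~13.3 years
```

which is indeed about thirteen years.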