commandar | 2 months ago
>Input:
>$21.00 / 1M tokens
>Output:
>$168.00 / 1M tokens
That's the most "don't use this" pricing I've seen on a model.
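To put the quoted rates in concrete terms, here is a quick back-of-the-envelope cost calculation; the prices are the ones quoted above, but the token counts are made up purely for illustration:

```python
# Per-token rates from the quoted pricing ($ per 1M tokens).
INPUT_PRICE = 21.00 / 1_000_000    # $ per input token
OUTPUT_PRICE = 168.00 / 1_000_000  # $ per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the quoted rates."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# A hypothetical 10k-token prompt with a 2k-token response:
print(round(request_cost(10_000, 2_000), 4))  # 0.546
```

At those rates, a modest request runs about fifty cents, and anything agentic or long-context multiplies quickly.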
aimanbenbaha | 2 months ago
General intelligence has gotten ridiculously less expensive. I don't know if it's because of compute and energy abundance, attention mechanisms improving in efficiency, or both, but we have to acknowledge the bigger picture and relative prices.
commandar | 2 months ago
Pro barely performs better than Thinking in OpenAI's published numbers, but comes at ~10x the price, with an explicit disclaimer that it's slow on the order of minutes.
If the published performance numbers are accurate, it seems like it'd be incredibly difficult to justify the premium.
At least on the surface level, it looks like it exists mostly to juice benchmark claims.
Leynos | 2 months ago
Makes me feel guilty for spamming pro with any random question I have multiple times a day.