theowaway213456 | 8 days ago

> Unless you think they secretly run it which is a conspiracy

tbh this doesn't sound like a conspiracy to me at all. There's no reason they couldn't have an internal subsystem in their product that detects math problems and hands token generation off to an intermediate, more optimized program (written in Rust, say) that does the math cheaply instead of burning massive amounts of GPU compute. That would just be a basic cost optimization, making their models both more accurate and cheaper to run. And there's no reason they would need to document it in their API docs, because they don't document any other internal details of the model.

I'm not saying they actually do this, but I think it's totally reasonable to think that they would, and it would not surprise me at all if they did.

Let's not get hung up on the "conspiracy" thing though - the whole point is that these models are closed source, so we don't know what we're actually testing when we run these "experiments". It could be a pure LLM, or it could be a hybrid of an LLM and a classical reasoning system. We don't know.
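
To be clear, I'm speculating, but the kind of routing layer I mean is trivial to build. Here's a minimal sketch (entirely hypothetical - the detection heuristic and function names are mine, not anything any provider has documented): try to parse the prompt as plain arithmetic and answer it exactly on the cheap path; otherwise fall back to the model.

```python
import ast
import operator

# Binary operators the toy evaluator supports
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
}

def eval_arith(node):
    """Recursively evaluate a parsed expression, rejecting anything
    that isn't plain numeric arithmetic."""
    if isinstance(node, ast.Expression):
        return eval_arith(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](eval_arith(node.left), eval_arith(node.right))
    if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
        return -eval_arith(node.operand)
    raise ValueError("not plain arithmetic")

def route(prompt, llm_generate):
    """Try the exact, cheap path first; fall back to the LLM otherwise."""
    try:
        tree = ast.parse(prompt, mode="eval")
        return str(eval_arith(tree))
    except (SyntaxError, ValueError):
        return llm_generate(prompt)

# Arithmetic is answered exactly without touching the model;
# anything else goes to the (stubbed) LLM.
print(route("12345 * 6789", lambda p: "<llm answer>"))
print(route("Explain entropy", lambda p: "<llm answer>"))
```

A real system would obviously need a much fuzzier classifier (prompts aren't bare expressions), but the point stands: from outside the API, the caller can't tell which path produced the tokens.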

simianwords | 8 days ago

They say “they don’t support code interpreter”.

floam|8 days ago

“Code interpreter” is a product feature the customer can invoke, which isn’t what’s being discussed here.

They can obviously support it internally, and the feature exists for ChatGPT, but they’re choosing not to expose that combo in the API yet because of product rollout constraints.