item 44883022

arkmm | 6 months ago

"There was one surprise when I revisited costs: OpenAI charges an unusually low $0.0001 / 1M tokens for batch inference on their latest embedding model. Even conservatively assuming I had 1 billion crawled pages, each with 1K tokens (abnormally long), it would only cost $100 to generate embeddings for all of them. By comparison, running my own inference, even with cheap Runpod spot GPUs, would cost on the order of 100× more, to say nothing of other APIs."
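The quoted figure checks out as back-of-the-envelope arithmetic. A minimal sketch (the prices and counts come from the comment; the variable names are mine):

```python
# Sanity-check the quoted batch-embedding cost.
price_per_million_tokens = 0.0001  # USD, quoted batch rate
pages = 1_000_000_000              # 1 billion crawled pages
tokens_per_page = 1_000            # "abnormally long" upper bound

total_tokens = pages * tokens_per_page                      # 1e12 tokens
cost = total_tokens / 1_000_000 * price_per_million_tokens  # 1e6 units of 1M tokens
print(f"${cost:,.0f}")  # $100, matching the comment
```

At that rate, the claimed ~100× premium for self-hosted spot-GPU inference would still only put the DIY cost around $10,000 for the same corpus.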

I wonder if OpenAI uses this as a honeypot to get domain-specific source data into its training corpus that it might otherwise not have access to.


magicalhippo | 6 months ago

> OpenAI charges an unusually low $0.0001 / 1M tokens for batch inference on their latest embedding model.

Is this the drug dealer scheme? Get you hooked, then jack up prices later? After all, the alternative would be regenerating all your embeddings, no?

cedws | 6 months ago

I don’t think OpenAI trains on data processed via the API, unless there’s an exception specifically for this.

dpoloncsak | 6 months ago

Maybe I misunderstand, but I'm pretty sure they offer an option for cheaper API costs (or maybe it's credits?) if you allow them to train on your API requests.

To your point, I'm pretty sure it's off by default, though.

Edit: From https://platform.openai.com/settings/organization/data-contr...

Share inputs and outputs with OpenAI

"Turn on sharing with OpenAI for inputs and outputs from your organization to help us develop and improve our services, including for improving and training our models. Only traffic sent after turning this setting on will be shared. You can change your settings at any time to disable sharing inputs and outputs."

And I am 'enrolled for complimentary daily tokens.'

trhway | 6 months ago

I'd not rule out an approach where, instead of training directly on the data, maybe they would train on a very high-dimensional embedding of it (or some other similarly "anonymized", yet still semantically rich, representation of the data).

dannyw | 6 months ago

Can you truly trust them though?

anothernewdude | 6 months ago

If that is the case, it'd be a way to put crap or poisoned data into their training set. I wouldn't.