top | item 46607257

sbszllr | 1 month ago

Interestingly enough, private inference is possible in theory, e.g. via oblivious inference protocols, but it is prohibitively slow in practice. You can also run a model inside a trusted execution environment — but again, too slow.
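To make "oblivious" concrete, here is a toy sketch of one building block such protocols use: additive secret sharing, where the client splits its input between two non-colluding servers. This is a hypothetical illustration, not any specific deployed protocol; the names and parameters are made up. Linear layers come almost for free under additive sharing (each server computes on its share locally), which is also why the nonlinear layers are where real protocols pay their enormous cost.

```python
# Toy 2-party additive secret sharing of a linear layer (illustrative
# sketch only, not a real oblivious-inference protocol).
import random

P = 2**61 - 1  # prime modulus for the additive sharing


def share(x):
    """Split vector x into two random shares that sum to x mod P."""
    r = [random.randrange(P) for _ in x]
    return r, [(xi - ri) % P for xi, ri in zip(x, r)]


def matvec(W, v):
    """Matrix-vector product mod P."""
    return [sum(wij * vj for wij, vj in zip(row, v)) % P for row in W]


W = [[2, 3], [1, 4]]   # public model weights, known to both servers
x = [5, 7]             # client's private input
s0, s1 = share(x)      # neither server alone learns anything about x
y0 = matvec(W, s0)     # server 0 computes on its share
y1 = matvec(W, s1)     # server 1 computes on its share
y = [(a + b) % P for a, b in zip(y0, y1)]  # client reconstructs W·x
print(y)  # → [31, 33], i.e. W·x
```

Because matvec is linear, W·s0 + W·s1 = W·(s0 + s1) = W·x, so the servers never see the input yet the client recovers the correct output. Multiplications between two secret values (and nonlinearities like GELU or softmax) need interactive subprotocols, which is where the communication rounds and the slowdown come from.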

ramoz | 1 month ago

Modern TEEs are actually performant enough for industry needs these days: over 400,000x faster than zero-knowledge proofs, and with only nominal overhead compared to most raw inference workloads.

sbszllr | 1 month ago

I agree it is performant enough for many applications — I work in the field. But it isn't performant enough to run large-scale LLM inference with reasonable latency, especially once you compare the throughput of single-tenant inference inside a TEE against batched non-private inference.
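The single-tenant vs batched gap can be made concrete with a back-of-envelope sketch. All numbers below are illustrative assumptions, not benchmarks: LLM decoding is typically memory-bandwidth bound, so a decode step costs roughly the same wall-clock time whether it serves 1 request or 64, and throughput scales almost linearly with batch size until compute saturates.

```python
# Back-of-envelope sketch (numbers are assumptions, not measurements):
# why single-tenant inference in a TEE forfeits most of a GPU's throughput.
step_ms = 30.0  # assumed time for one decode step, roughly batch-independent

for batch in (1, 8, 64):
    # each step emits `batch` tokens, so throughput scales with batch size
    tokens_per_sec = batch * 1000.0 / step_ms
    print(f"batch {batch:2d}: ~{tokens_per_sec:,.0f} tokens/s")
```

Under these assumed numbers, a batched server gets ~64x the aggregate throughput of a single-tenant one at identical per-step latency — which is the gap the parent comment is pointing at, before adding any TEE-specific overhead (attestation, encrypted memory, I/O through the enclave boundary).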