top | item 45049284


querez|6 months ago

"developers can prototype, fine-tune, and inference [AI models]"...

shouldn't it be infer?


myrmidon|6 months ago

It should be "run inference on", and would be best shortened IMO to just "prototype, fine-tune, and run".

I'd argue that "inference" has taken on a somewhat distinct new meaning in an LLM context (loosely: running actual tokens through the model), and switching from that term to the verb form would make the sentence less clear to me.

killerstorm|6 months ago

No. It's quite common for technical slang to deviate from general vocabulary.

Cf. "compute" is a verb for normal people, but for techies it is also "hardware resources used to compute things".

querez|6 months ago

I don't think "inference" as a verb has become technical slang. At least not in my bubble.

globular-toast|6 months ago

Perhaps "infer from"? I was also taken aback by how they just decided to make "inference" a verb, though. A decent writer would have rewritten the sentence to make it work, similar to how a software implementation sometimes just doesn't work out. But apparently that's too much to ask from Nvidia marketing.

Funnily enough, things like this show that a human probably was involved in the writing; I doubt an LLM would have produced that. I've often thought about how future generations will signal that they are human, and maybe the answer is human language changing much more rapidly than it has done, maybe even mid-sentence.

nharada|6 months ago

I don’t think either of those are right…